Logic Model Checking of Time-Periodic Real-Time Systems
NASA Technical Reports Server (NTRS)
Florian, Mihai; Gamble, Ed; Holzmann, Gerard
2012-01-01
In this paper we report on the work we performed to extend the logic model checker SPIN with built-in support for the verification of periodic, real-time embedded software systems, as commonly used in aircraft, automobiles, and spacecraft. We first extended the SPIN verification algorithms to model priority based scheduling policies. Next, we added a library to support the modeling of periodic tasks. This library was used in a recent application of the SPIN model checker to verify the engine control software of an automobile, to study the feasibility of software triggers for unintended acceleration events.
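The periodic, fixed-priority execution pattern such a library captures can be illustrated with a short sketch. The Java below only illustrates the scheduling model (releases at multiples of each task's period, highest-priority ready task first); it is not the authors' SPIN/Promela library, and the task names and periods are made up.

    import java.util.Comparator;
    import java.util.List;

    public class PeriodicScheduleSketch {
        record Task(String name, int period, int priority) {}

        public static void main(String[] args) {
            List<Task> tasks = List.of(
                new Task("engineControl", 10, 2),   // higher priority, 10 ms period
                new Task("telemetry",     20, 1));  // lower priority, 20 ms period
            int hyperperiod = 20;                   // lcm of the periods
            for (int t = 0; t < hyperperiod; t++) {
                final int now = t;
                // Tasks released at this tick, dispatched highest priority first.
                tasks.stream()
                     .filter(k -> now % k.period() == 0)
                     .sorted(Comparator.comparingInt(Task::priority).reversed())
                     .forEach(k -> System.out.println("t=" + now + ": run " + k.name()));
            }
        }
    }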
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial order information is safe and the whole state space is explored.
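The iteration described above can be summarized in a short sketch. The interfaces and result types below are hypothetical stand-ins, not the authors' implementation; the point is the alternation until the alias information stabilizes, at which point the partial-order reduction is safe.

    import java.util.Set;

    record PartialOrderInfo(Set<String> independentPairs) {}
    record AliasInfo(Set<String> mayAliasPairs) {}

    interface StaticAnalyzer { PartialOrderInfo analyze(AliasInfo aliases); }
    interface ModelChecker   { AliasInfo explore(PartialOrderInfo reduction); }

    class IterativeVerifier {
        static void verify(StaticAnalyzer sa, ModelChecker mc) {
            AliasInfo aliases = new AliasInfo(Set.of());          // optimistic start
            while (true) {
                PartialOrderInfo reduction = sa.analyze(aliases); // possibly unsafe reduction
                AliasInfo refined = mc.explore(reduction);        // explore reduced state space
                if (refined.equals(aliases)) break;               // fixed point: reduction now safe
                aliases = refined;                                // feed aliasing back to analyzer
            }
        }
    }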
Formal Verification Toolkit for Requirements and Early Design Stages
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Miller, Sheena Judson
2011-01-01
Efficient flight software development from natural language requirements needs an effective way to test designs earlier in the software design cycle. A method to automatically derive logical safety constraints and the design state space from natural language requirements is described. The constraints can then be checked using a logical consistency checker and also be used in a symbolic model checker to verify the early design of the system. This method was used to verify a hybrid control design for the suit ports on NASA Johnson Space Center's Space Exploration Vehicle against safety requirements.
Software Certification for Temporal Properties With Affordable Tool Qualification
NASA Technical Reports Server (NTRS)
Xia, Songtao; DiVito, Benedetto L.
2005-01-01
It has been recognized that a framework based on proof-carrying code (also called semantic-based software certification in its community) could be used as a candidate software certification process for the avionics industry. To meet this goal, tools in the "trust base" of a proof-carrying code system must be qualified by regulatory authorities. A family of semantic-based software certification approaches is described, each different in expressive power, level of automation and trust base. Of particular interest is the so-called abstraction-carrying code, which can certify temporal properties. When a pure abstraction-carrying code method is used in the context of industrial software certification, the fact that the trust base includes a model checker would incur a high qualification cost. This position paper proposes a hybrid of abstraction-based and proof-based certification methods so that the model checker used by a client can be significantly simplified, thereby leading to lower cost in tool qualification.
"Antelope": a hybrid-logic model checker for branching-time Boolean GRN analysis
2011-01-01
Background In Thomas' formalism for modeling gene regulatory networks (GRNs), branching time, where a state can have more than one possible future, plays a prominent role. By representing a certain degree of unpredictability, branching time can model several important phenomena, such as (a) asynchrony, (b) incompletely specified behavior, and (c) interaction with the environment. Introducing more than one possible future for a state, however, creates a difficulty for ordinary simulators, because infinitely many paths may appear, limiting ordinary simulators to statistical conclusions. Model checkers for branching time, by contrast, are able to prove properties in the presence of infinitely many paths. Results We have developed Antelope ("Analysis of Networks through TEmporal-LOgic sPEcifications", http://turing.iimas.unam.mx:8080/AntelopeWEB/), a model checker for analyzing and constructing Boolean GRNs. Currently, software systems for Boolean GRNs use branching time almost exclusively for asynchrony. Antelope, by contrast, also uses branching time for incompletely specified behavior and environment interaction. We show the usefulness of modeling these two phenomena in the development of a Boolean GRN of the Arabidopsis thaliana root stem cell niche. There are two obstacles to a direct approach when applying model checking to Boolean GRN analysis. First, ordinary model checkers normally only verify whether or not a given set of model states has a given property. In comparison, a model checker for Boolean GRNs is preferable if it reports the set of states having a desired property. Second, for efficiency, the expressiveness of many model checkers is limited, resulting in the inability to express some interesting properties of Boolean GRNs. Antelope tries to overcome these two drawbacks: Apart from reporting the set of all states having a given property, our model checker can express, at the expense of efficiency, some properties that ordinary model checkers (e.g., NuSMV) cannot. This additional expressiveness is achieved by employing a logic extending the standard Computation-Tree Logic (CTL) with hybrid-logic operators. Conclusions We illustrate the advantages of Antelope when (a) modeling incomplete networks and environment interaction, (b) exhibiting the set of all states having a given property, and (c) representing Boolean GRN properties with hybrid CTL. PMID:22192526
Spelling: Computerised Feedback for Self-Correction
ERIC Educational Resources Information Center
Lawley, Jim
2016-01-01
Research has shown that any assumption that L2 learners of English do well to rely on the feedback provided by generic spell checkers (for example, the MS Word spell checker) is misplaced. Efforts to develop spell checkers specifically for L2 learners have focused on training software to offer more appropriate suggestion lists for replacing…
Introduction of Virtualization Technology to Multi-Process Model Checking
NASA Technical Reports Server (NTRS)
Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu
2009-01-01
Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.
FipsOrtho: A Spell Checker for Learners of French
ERIC Educational Resources Information Center
L'Haire, Sebastien
2007-01-01
This paper presents FipsOrtho, a spell checker targeted at learners of French, and a corpus of learners' errors which has been gathered to test the system and to get a sample of specific language learners' errors. Spell checkers are a standard feature of many software products, however they are not designed for specific language learners' errors.…
Generalized Symbolic Execution for Model Checking and Testing
NASA Technical Reports Server (NTRS)
Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)
2003-01-01
Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework, based on symbolic execution, for automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.
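The instrumentation idea can be sketched as follows. The Expr and PathCondition classes below are hypothetical stand-ins for the framework's symbolic-expression machinery; the point is that concrete operations are replaced by operations on symbolic values, so an ordinary explicit-state model checker explores path conditions through its usual nondeterministic choice.

    abstract class Expr {}
    class Var extends Expr { final String name;  Var(String n) { name = n; } }
    class Lt  extends Expr { final Expr l, r;    Lt(Expr l, Expr r) { this.l = l; this.r = r; } }
    class Not extends Expr { final Expr e;       Not(Expr e) { this.e = e; } }

    class PathCondition {
        // A decision procedure would check satisfiability here and prune
        // infeasible branches; this sketch only accumulates constraints.
        void assume(Expr constraint) { /* query solver; backtrack if unsatisfiable */ }
    }

    class Instrumented {
        // Original code:  if (x < y) { ... } else { ... }
        // Instrumented form: the model checker tries both branches, each
        // recording the constraint under which it is feasible.
        static void branch(Expr x, Expr y, PathCondition pc, boolean takeThen) {
            if (takeThen) pc.assume(new Lt(x, y));
            else          pc.assume(new Not(new Lt(x, y)));
        }
    }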
Efficient Translation of LTL Formulae into Buchi Automata
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Lerda, Flavio
2001-01-01
Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit state model checker under development at the NASA Ames Research Center.
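Tableau-based translations of this kind rest on the standard fixpoint expansions of the temporal operators, which split a formula into an obligation on the current state and one on the next state. The report's contribution lies in optimizing the construction; the identities themselves are standard:

    \varphi\,\mathcal{U}\,\psi \;\equiv\; \psi \lor (\varphi \land \mathbf{X}(\varphi\,\mathcal{U}\,\psi))
    \mathbf{F}\varphi \;\equiv\; \varphi \lor \mathbf{X}\,\mathbf{F}\varphi
    \mathbf{G}\varphi \;\equiv\; \varphi \land \mathbf{X}\,\mathbf{G}\varphi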
Parallel Software Model Checking
2015-01-08
checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its ... focus on formal verification. Generalized PDR: Generalized Property Driven Reachability (GPDR) is an algorithm for solving HORN-SMT reachability...
A Flexible Statechart-to-Model-Checker Translator
NASA Technical Reports Server (NTRS)
Rouquette, Nicolas; Dunphy, Julia; Feather, Martin S.
2000-01-01
Many current-day software design tools offer some variant of statechart notation for system specification. We, like others, have built an automatic translator from (a subset of) statecharts to a model checker, for use in validating behavioral requirements. Our translator is designed to be flexible. This allows us to quickly adjust the translator to variants of statechart semantics, including problem-specific notational conventions that designers employ. Our system demonstration will be of interest to the following two communities: (1) Potential end-users: Our demonstration will show translation from statecharts created in a commercial UML tool (Rational Rose) to Promela, the input language of Holzmann's model checker SPIN. The translation is accomplished automatically. To accommodate the major variants of statechart semantics, our tool offers user-selectable choices among semantic alternatives. Options for customized semantic variants are also made available. The net result is an easy-to-use tool that operates on a wide range of statechart diagrams to automate the pathway to model-checking input. (2) Other researchers: Our translator embodies, in one tool, ideas and approaches drawn from several sources. Solutions to the major challenges of statechart-to-model-checker translation (e.g., determining which transition(s) will fire, handling of concurrent activities) are treated in a uniform, fully mechanized setting. The way in which the underlying architecture of the translator itself facilitates flexible and customizable translation will also be evident.
Java PathFinder: A Translator From Java to Promela
NASA Technical Reports Server (NTRS)
Havelund, Klaus
1999-01-01
JAVA PATHFINDER, JPF, is a prototype translator from JAVA to PROMELA, the modeling language of the SPIN model checker. JPF is a product of a major effort by the Automated Software Engineering group at NASA Ames to make model checking technology part of the software process. Experience has shown that severe bugs can be found in final code using this technique, and that automated translation from a programming language to a modeling language like PROMELA can help reduce the effort required.
Formal methods for test case generation
NASA Technical Reports Server (NTRS)
Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)
2011-01-01
The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.
Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration
NASA Technical Reports Server (NTRS)
Groce, Alex; Joshi, Rajeev
2008-01-01
Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.
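The single-harness idea can be illustrated in Java, even though the paper's harness is written in PROMELA for a tested C program. Verify.getInt is JPF's nondeterministic-choice API (assuming the JPF libraries on the classpath); the exhaustive/random switch and the command-driving code are hypothetical.

    import java.util.Random;
    import gov.nasa.jpf.vm.Verify;

    public class HarnessSketch {
        static final boolean EXHAUSTIVE = Boolean.getBoolean("exhaustive");
        static final Random RNG = new Random();

        // One choice point serves both strategies: under a model checker every
        // value is explored; under random testing a single value is sampled.
        static int choose(int min, int max) {
            return EXHAUSTIVE ? Verify.getInt(min, max)
                              : min + RNG.nextInt(max - min + 1);
        }

        public static void main(String[] args) {
            int cmd = choose(0, 3);          // nondeterministic input command
            // systemUnderTest.step(cmd);    // drive the tested module here
        }
    }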
Design, Development, and Automated Verification of an Integrity-Protected Hypervisor
2012-07-16
Hypervisors are a popular mechanism for implementing software virtualization. Since hypervisors execute at a very high privilege level, they must be secure. A fundamental security...using the CBMC model checker. CBMC verified XMHF's implementation, about 4700 lines of C code, in about 80 seconds using less than 2GB of RAM.
Practical Application of Model Checking in Software Verification
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Skakkebaek, Jens Ulrik
1999-01-01
This paper presents our experiences in applying JAVA PATHFINDER (JPF), a recently developed JAVA to SPIN translator, to the finding of synchronization bugs in a Chinese Chess game server application written in JAVA. We give an overview of JPF and the subset of JAVA that it supports and describe the abstraction and verification of the game server. Finally, we analyze the results of the effort. We argue that abstraction by under-approximation is necessary for deriving sufficiently small models for verification purposes; that user guidance is crucial for effective abstraction; and that current model checkers do not conveniently support the computational models of software in general and JAVA in particular.
Using software security analysis to verify the secure socket layer (SSL) protocol
NASA Technical Reports Server (NTRS)
Powell, John D.
2004-01-01
The National Aeronautics and Space Administration (NASA) has tens of thousands of networked computer systems and applications. Software security vulnerabilities present risks such as lost or corrupted data, information theft, and unavailability of critical systems. These risks represent potentially enormous costs to NASA. The NASA Code Q research initiative 'Reducing Software Security Risk (RSSR) Through an Integrated Approach' offers, among its capabilities, formal verification of software security properties through the use of model based verification (MBV) to address software security risks. [1,2,3,4,5,6] MBV is a formal approach to software assurance that combines analysis of software, via abstract models, with technology, such as model checkers, that provides automation of the mechanical portions of the analysis process. This paper will discuss: the need for formal analysis to assure software systems with respect to security, and why testing alone cannot provide it; the means by which MBV with a Flexible Modeling Framework (FMF) accomplishes the necessary analysis task; and an example of FMF-style MBV in the verification of properties over the Secure Socket Layer (SSL) communication protocol as a demonstration.
Application of Lightweight Formal Methods to Software Security
NASA Technical Reports Server (NTRS)
Gilliam, David P.; Powell, John D.; Bishop, Matt
2005-01-01
Formal specification and verification of security has proven a challenging task. There is no single method that has proven feasible. Instead, an integrated approach which combines several formal techniques can increase the confidence in the verification of software security properties. Such an approach, which specifies security properties in a library that can be reused, is embodied in two instruments and their methodologies developed for the National Aeronautics and Space Administration (NASA) at the Jet Propulsion Laboratory (JPL) and described herein. The Flexible Modeling Framework (FMF) is a model based verification instrument that uses Promela and the SPIN model checker. The Property Based Tester (PBT) uses TASPEC and a Text Execution Monitor (TEM). They are used to reduce vulnerabilities and unwanted exposures in software during the development and maintenance life cycles.
Style and Usage Software: Mentor, not Judge.
ERIC Educational Resources Information Center
Smye, Randy
Computer software style and usage checkers can encourage students' recursive revision strategies. For example, HOMER is based on the revision pedagogy presented in Richard Lanham's "Revising Prose," while Grammatik II focuses on readability, passive voice, and possibly misused words or phrases. Writer's Workbench "Style" (a UNIX program) provides…
Marco-Ruiz, Luis; Bønes, Erlend; de la Asunción, Estela; Gabarron, Elia; Aviles-Solis, Juan Carlos; Lee, Eunji; Traver, Vicente; Sato, Keiichi; Bellika, Johan G
2017-10-01
Symptom checkers are software tools that allow users to submit a set of symptoms and receive advice related to them in the form of a diagnosis list, health information or triage. The heterogeneity of their potential users and the number of different components in their user interfaces can make testing with end-users unaffordable. We designed and executed a two-phase method to test the respiratory diseases module of the symptom checker Erdusyk. Phase I consisted of an online test with a large sample of users (n=53). In Phase I, users evaluated the system remotely and completed a questionnaire based on the Technology Acceptance Model. Principal Component Analysis was used to correlate each section of the interface with the questionnaire responses, thus identifying which areas of the user interface presented significant contributions to the technology acceptance. In the second phase, the think-aloud procedure was executed with a small sample of users (n=15), focusing on the areas with significant contributions in order to analyze the reasons for those contributions. Our method was used effectively to optimize the testing of symptom checker user interfaces. The method kept the cost of testing at reasonable levels by restricting the use of the think-aloud procedure while still assuring a high amount of coverage. The main barriers detected in Erdusyk were related to problems understanding time repetition patterns, the selection of levels in scales to record intensities, navigation, the quantification of some symptom attributes, and the characteristics of the symptoms.
From Verified Models to Verifiable Code
NASA Technical Reports Server (NTRS)
Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.
Software Model Checking of ARINC-653 Flight Code with MCP
NASA Technical Reports Server (NTRS)
Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud
2010-01-01
The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high-reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequently arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though the API's strict interprocess communication policy implicitly creates opportunities for partial order reduction.
Property Specification Patterns for intelligence building software
NASA Astrophysics Data System (ADS)
Chun, Seungsu
2018-03-01
In this paper, through research on property specification patterns for the modal mu-calculus, we present a single framework for the pattern-based specification of intelligence building software. Dwyer's property specification pattern classification is broken down into state (S) and action (A) patterns, each subdivided again into strong (A) and weak (E) variants. Based on this hierarchical pattern classification, a mu-calculus analysis of the patterns was applied to classifying the examples used in an actual model checker. The result is not only a more accurate classification than existing classification systems, but also specified properties that are easier to create and understand.
Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System
NASA Technical Reports Server (NTRS)
Braman, Julia M. B.; Murray, Richard M.; Wagner, David A.
2007-01-01
Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checker for linear hybrid systems.
Hand, Kieran S; Cumming, Debbie; Hopkins, Susan; Ewings, Sean; Fox, Andy; Theminimulle, Sandya; Porter, Robert J; Parker, Natalie; Munns, Joanne; Sheikh, Adel; Keyser, Taryn; Puleston, Richard
2017-04-01
The implementation of electronic prescribing and medication administration (EPMA) systems is a priority for hospitals and a potential component of antimicrobial stewardship (AMS). To identify software features within EPMA systems that could potentially facilitate AMS and to survey practising UK infection specialist healthcare professionals in order to assign priority to these software features. A questionnaire was developed using nominal group technique and transmitted via email links through professional networks. The questionnaire collected demographic data, information on priority areas and anticipated impact of EPMA. Responses from different respondent groups were compared using the Mann-Whitney U-test. Responses were received from 164 individuals (142 analysable). Respondents were predominantly specialist infection pharmacists (48%) or medical microbiologists (37%). Of the pharmacists, 59% had experience of EPMA in their hospitals compared with 35% of microbiologists. Pharmacists assigned higher priority to indication prompt (P < 0.001), allergy checker (P = 0.003), treatment protocols (P = 0.003), drug-indication mismatch alerts (P = 0.031) and prolonged course alerts (P = 0.041) and lower priority to a dose checker for adults (P = 0.02) and an interaction checker (P < 0.05) than microbiologists. A 'soft stop' functionality was rated essential or high priority by 89% of respondents. Potential EPMA software features were expected to have the greatest impact on stewardship, treatment efficacy and patient safety outcomes, with lowest impact on Clostridium difficile infection, antimicrobial resistance and drug expenditure. The survey demonstrates key differences in health professionals' opinions of potential healthcare benefits of EPMA, but a consensus of anticipated positive impact on patient safety and AMS.
NASA Technical Reports Server (NTRS)
Mehlitz, Peter
2005-01-01
JPF is an explicit state software model checker for Java bytecode. Today, JPF is a swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.
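The kind of defect JPF finds by exhausting schedules is easy to show. The following minimal two-lock deadlock (an illustration, not from the JPF distribution) usually escapes single-run testing but is found by exploring interleavings:

    public class DeadlockDemo {
        static final Object A = new Object(), B = new Object();

        public static void main(String[] args) {
            // Thread 1 locks A then B; thread 2 locks B then A.
            new Thread(() -> { synchronized (A) { pause(); synchronized (B) {} } }).start();
            new Thread(() -> { synchronized (B) { pause(); synchronized (A) {} } }).start();
        }

        static void pause() {
            try { Thread.sleep(10); } catch (InterruptedException ignored) {}
        }
    }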
I Love to Rite! Spelling Checkers in the Writing Classroom.
ERIC Educational Resources Information Center
Eiser, Leslie
1986-01-01
Highlights the advantages of word processors and spelling checkers in improving student writing skills. Explains how spelling checkers work and describes the types of available checkers. Also provides lists of Apple, IBM, and Commodore word processors and checkers. (ML)
Formal verification of automated teller machine systems using SPIN
NASA Astrophysics Data System (ADS)
Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam
2017-08-01
Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of the Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use the Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formulas. This model checker accepts models written in Process Meta Language (PROMELA), with specifications given as LTL formulas.
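Illustrative LTL specifications of the kind checked for such an ATM model (representative examples, not necessarily the paper's exact formulas):

    \mathbf{G}\,(\mathit{cardInserted} \rightarrow \mathbf{F}\,(\mathit{dispenseCash} \lor \mathit{rejectCard}))
    \mathbf{G}\,(\mathit{dispenseCash} \rightarrow \mathit{pinVerified})

The first says every inserted card is eventually answered with cash or a rejection; the second says cash is never dispensed without PIN verification.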
A Fantasy Theme Analysis of Nixon's "Checkers" Speech.
ERIC Educational Resources Information Center
Wells, William T.
1996-01-01
Applies fantasy theme analysis to Richard Nixon's "Checkers" speech. States that three major themes emerge: Nixon as Moral Model, Nixon as the American Dream, and Nixon as Patriot. Points out that each issue responds to allegations of dishonesty that were leveled against him at the time. Argues that Nixon's speech was accepted and…
Spot: A Programming Language for Verified Flight Software
NASA Technical Reports Server (NTRS)
Bocchino, Robert L., Jr.; Gamble, Edward; Gostelow, Kim P.; Some, Raphael R.
2014-01-01
The C programming language is widely used for programming space flight software and other safety-critical real time systems. C, however, is far from ideal for this purpose: as is well known, it is both low-level and unsafe. This paper describes Spot, a language derived from C for programming space flight systems. Spot aims to maintain compatibility with existing C code while improving the language and supporting verification with the SPIN model checker. The major features of Spot include actor-based concurrency, distributed state with message passing and transactional updates, and annotations for testing and verification. Spot also supports domain-specific annotations for managing spacecraft state, e.g., communicating telemetry information to the ground. We describe the motivation and design rationale for Spot, give an overview of the design, provide examples of Spot's capabilities, and discuss the current status of the implementation.
A Synthetic Study on the Resolution of 2D Elastic Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Cui, C.; Wang, Y.
2017-12-01
Gradient-based full waveform inversion is an effective method in seismic studies: it makes full use of the information in seismic records and is capable of providing a more accurate model of the interior of the earth at a relatively low computational cost. However, the strong non-linearity of the problem brings about many difficulties in the assessment of its resolution. Synthetic inversions are therefore helpful before an inversion based on real data is made. The checker-board test is a commonly used method, but it is not always reliable due to the significant difference between a checker-board and the true model. Our study aims to provide a basic understanding of the resolution of 2D elastic inversion by examining three main factors that affect the inversion result: 1. the structural characteristics of the model; 2. the level of similarity between the initial model and the true model; 3. the spatial distribution of sources and receivers. We performed about 150 synthetic inversions to demonstrate how each factor contributes to the quality of the result, and compared the inversion results with those achieved by checker-board tests. The study can be a useful reference for assessing the resolution of an inversion in addition to regular checker-board tests, or for determining whether the seismic data of a specific region are sufficient for a successful inversion.
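For reference, the gradient-based iteration underlying such inversions, in simplified notation of our own: a least-squares waveform misfit over sources s and receivers r, minimized by steepest descent on the model m,

    \chi(m) = \tfrac{1}{2} \sum_{s,r} \int \lVert u(m; x_r, t) - d_{\mathrm{obs}}(x_r, t) \rVert^2 \, dt
    m_{k+1} = m_k - \alpha_k \, \nabla_m \chi(m_k)

where u is the simulated wavefield, d_obs the recorded data, and \alpha_k a step length.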
Generating Test Templates via Automated Theorem Proving
NASA Technical Reports Server (NTRS)
Kancherla, Mani Prasad
1997-01-01
Testing can be used during the software development process to maintain fidelity between evolving specifications, program designs, and code implementations. We use a form of specification-based testing that employs an automated theorem prover to generate test templates. A similar approach was developed using a model checker on state-intensive systems. This method applies to systems with functional rather than state-based behaviors. The approach allows for the use of incomplete specifications to aid in the generation of tests for potential failure cases. We illustrate the technique on the canonical triangle testing problem and discuss its use in the analysis of a spacecraft scheduling system.
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
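The recovery-block structure evaluated above can be sketched as follows; the generic types and names are illustrative, and a full implementation would restore a checkpoint before each alternate runs.

    import java.util.List;
    import java.util.function.Predicate;
    import java.util.function.Supplier;

    class RecoveryBlock {
        // Try alternates in order (primary first); an acceptance check,
        // ideally failing independently of the alternates, validates each result.
        static <T> T run(List<Supplier<T>> alternates, Predicate<T> acceptanceCheck) {
            for (Supplier<T> alt : alternates) {
                try {
                    T result = alt.get();
                    if (acceptanceCheck.test(result)) return result;
                } catch (RuntimeException e) { /* treat as a failed alternate */ }
            }
            throw new IllegalStateException("all alternates rejected");
        }
    }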
Test-Case Generation using an Explicit State Model Checker Final Report
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Gao, Jimin
2003-01-01
In the project 'Test-Case Generation using an Explicit State Model Checker' we have extended an existing tools infrastructure for formal modeling to export Java code so that we can use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We have completed a translator from our source language RSML-e to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.
A Scenario-Based Protocol Checker for Public-Key Authentication Scheme
NASA Astrophysics Data System (ADS)
Saito, Takamichi
Security protocols provide communication security for the Internet. One of their important features is authentication with key exchange, whose correctness is a prerequisite for the security of the communication as a whole. In this paper, we introduce three attack models, realized as attack scenarios, and provide an authentication-protocol checker that applies the three attack scenarios based on these models. We also use it to check two popular security protocols: Secure SHell (SSH) and Secure Socket Layer/Transport Layer Security (SSL/TLS).
Elderly’s Family Life Supplies - Innovative Chinese Checkers Game Board
NASA Astrophysics Data System (ADS)
CHAO, Fanglin
2017-09-01
A nine-week product design course for industrial design students was implemented at our university. Through creativity rules and field study, students achieved a high standard of problem identification and concept generation. Prototype testing with the elderly in design projects helps students understand user demand more deeply, which in turn further refines the concepts. Traditional Chinese checkers is redesigned using special checkers of different heights or shapes, with specific rules, to increase user interest and game diversity. The game is more challenging because location weighting enters the score calculation, rewarding planned strategies. The redesigned Chinese checkers game includes a reconfigurable board and several checker shapes. Each checker has three parts: a standing ring body and a base body, both supporting a side holding structure. The body is slightly concave to help the fingers hold it, and the upper portion of the body carries extension sections of different shapes that engage with the base body. Players move their checkers to the opposite target area; when one player has moved all checkers into the opposite target area, play shifts to the scoring stage. A participant may develop a specific strategy to gain a higher score by maximizing weighted checkers in the target block regions.
Efficient design of CMOS TSC checkers
NASA Technical Reports Server (NTRS)
Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling
1990-01-01
This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.
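For reference, the code family in question (standard definition, not specific to this design): a k-out-of-2k codeword has exactly k ones among 2k bits,

    C_{k/2k} = \{\, x \in \{0,1\}^{2k} \;:\; w(x) = k \,\}

where w(x) is the Hamming weight. A TSC checker compresses its input to a two-rail output, with 01 and 10 indicating a valid codeword and 00 or 11 signaling a fault in the checked circuit or in the checker itself.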
Model Checker for Java Programs
NASA Technical Reports Server (NTRS)
Visser, Willem
2007-01-01
Java Pathfinder (JPF) is a verification and testing environment for Java that integrates model checking, program analysis, and testing. JPF consists of a custom-made Java Virtual Machine (JVM) that interprets bytecode, combined with a search interface to allow the complete behavior of a Java program to be analyzed, including interleavings of concurrent programs. JPF is implemented in Java, and its architecture is highly modular to support rapid prototyping of new features. JPF is an explicit-state model checker, because it enumerates all visited states and, therefore, suffers from the state-explosion problem inherent in analyzing large programs. It is suited to analyzing programs of less than 10 kLOC, but has been successfully applied to finding errors in concurrent programs of up to 100 kLOC. When an error is found, a trace from the initial state to the error is produced to guide the debugging. JPF works at the bytecode level, meaning that all of Java can be model-checked. By default, the software checks for all runtime errors (uncaught exceptions), assertion violations (supports Java's assert), and deadlocks. JPF uses garbage collection and symmetry reductions of the heap during model checking to reduce state-explosion, as well as dynamic partial order reductions to lower the number of interleavings analyzed. JPF is capable of symbolic execution of Java programs, including symbolic execution of complex data such as linked lists and trees. JPF is extensible as it allows for the creation of listeners that can subscribe to events during searches. The creation of dedicated code to be executed in place of regular classes is supported and allows users to easily handle native calls and to improve the efficiency of the analysis.
Testing Linear Temporal Logic Formulae on Finite Execution Traces
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Norvig, Peter (Technical Monitor)
2001-01-01
We present an algorithm for efficiently testing Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications. Such tests correspond to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property. We then suggest an optimized algorithm based on transforming LTL formulae. The work is done using the Maude rewriting system, which turns out to provide a perfect notation and an efficient rewriting engine for performing these experiments.
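One standard way to state finite-trace satisfaction, consistent with (though simpler than) the paper's definitions; here t = s_1 ... s_n and t_i denotes the suffix s_i ... s_n:

    t \models \mathbf{X}\varphi \;\iff\; n > 1 \text{ and } t_2 \models \varphi
    t \models \varphi\,\mathcal{U}\,\psi \;\iff\; \exists\, i \le n.\; t_i \models \psi \;\text{and}\; \forall\, j < i.\; t_j \models \varphi

In particular, every eventuality must be discharged before the trace ends, which is what makes finite-trace checking decidable by a single pass over the events.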
Formal Analysis of the Remote Agent Before and After Flight
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Lowry, Mike; Park, SeungJoon; Pecheur, Charles; Penix, John; Visser, Willem; White, Jon L.
2000-01-01
This paper describes two separate efforts that used the SPIN model checker to verify deep space autonomy flight software. The first effort occurred at the beginning of a spiral development process and found five concurrency errors early in the design cycle that the developers acknowledge would not have been found through testing. This effort required a substantial manual modeling effort involving both abstraction and translation from the prototype LISP code to the PROMELA language used by SPIN. This experience and others led to research to address the gap between formal method tools and the development cycle used by software developers. The Java PathFinder tool which directly translates from Java to PROMELA was developed as part of this research, as well as automatic abstraction tools. In 1999 the flight software flew on a space mission, and a deadlock occurred in a sibling subsystem to the one which was the focus of the first verification effort. A second quick-response "cleanroom" verification effort found the concurrency error in a short amount of time. The error was isomorphic to one of the concurrency errors found during the first verification effort. The paper demonstrates that formal methods tools can find concurrency errors that indeed lead to loss of spacecraft functions, even for the complex software required for autonomy. Second, it describes progress in automatic translation and abstraction that eventually will enable formal methods tools to be inserted directly into the aerospace software development cycle.
... emergencies, with appropriate advice given in 80% of cases. Some symptom checkers provided more accurate advice than others. Overall, the checkers tended to be cautious, encouraging users to seek health care when self care would do. “These tools may be useful in patients who are trying ...
Analyzing Tabular and State-Transition Requirements Specifications in PVS
NASA Technical Reports Server (NTRS)
Owre, Sam; Rushby, John; Shankar, Natarajan
1997-01-01
We describe PVS's capabilities for representing tabular specifications of the kind advocated by Parnas and others, and show how PVS's Type Correctness Conditions (TCCs) are used to ensure certain well-formedness properties. We then show how these and other capabilities of PVS can be used to represent the AND/OR tables of Leveson and the Decision Tables of Sherry, and we demonstrate how PVS's TCCs can expose and help isolate errors in the latter. We extend this approach to represent the mode transition tables of the Software Cost Reduction (SCR) method in an attractive manner. We show how PVS can check these tables for well-formedness, and how PVS's model checking capabilities can be used to verify invariants and reachability properties of SCR requirements specifications, and inclusion relations between the behaviors of different specifications. These examples demonstrate how several capabilities of the PVS language and verification system can be used in combination to provide customized support for specific methodologies for documenting and analyzing requirements. Because they use only the standard capabilities of PVS, users can adapt and extend these customizations to suit their own needs. Those developing dedicated tools for individual methodologies may find these constructions in PVS helpful for prototyping purposes, or as a useful adjunct to a dedicated tool when the capabilities of a full theorem prover are required. The examples also illustrate the power and utility of an integrated general-purpose system such as PVS. For example, there was no need to adapt or extend the PVS model checker to make it work with SCR specifications described using the PVS TABLE construct: the model checker is applicable to any transition relation, independently of the PVS language constructs used in its definition.
Addressing Dynamic Issues of Program Model Checking
NASA Technical Reports Server (NTRS)
Lerda, Flavio; Visser, Willem
2001-01-01
Model checking real programs has recently become an active research area. Programs however exhibit two characteristics that make model checking difficult: the complexity of their state and the dynamic nature of many programs. Here we address both these issues within the context of the Java PathFinder (JPF) model checker. Firstly, we will show how the state of a Java program can be encoded efficiently and how this encoding can be exploited to improve model checking. Next we show how to use symmetry reductions to alleviate some of the problems introduced by the dynamic nature of Java programs. Lastly, we show how distributed model checking of a dynamic program can be achieved, and furthermore, how dynamic partitions of the state space can improve model checking. We support all our findings with results from applying these techniques within the JPF model checker.
Does It Work? 555-Timer Checker Leaves No Doubt
ERIC Educational Resources Information Center
Harman, Charles
2009-01-01
This article details the construction and use of the 555-timer checker. The 555-timer checker allows the user to dynamically check a 555-timer, an integrated circuit device. Of its many applications, it provides timing applications that unijunction transistors once performed. (Contains 4 figures and 3 photos.)
My New Teaching Partner? Using the Grammar Checker in Writing Instruction
ERIC Educational Resources Information Center
Potter, Reva; Fuller, Dorothy
2008-01-01
Grammar checkers do not claim to teach grammar; they are tools to bring potential problems to the writer's attention. They also offer only formal and Standard English preferences, limiting the freer expression of some literary forms. Without guidance, students may misuse the checker, become frustrated, and feel discouraged. Users must be…
Formal verification of software-based medical devices considering medical guidelines.
Daw, Zamira; Cleaveland, Rance; Vetter, Marcus
2014-01-01
Software-based devices have increasingly become an important part of several clinical scenarios. Due to their critical impact on human life, medical devices have very strict safety requirements. It is therefore necessary to apply verification methods to ensure that the safety requirements are met. Verification of software-based devices is commonly limited to the verification of their internal elements, without considering the interaction that these elements have with other devices or with the application environment in which they are used. Medical guidelines define clinical procedures, which contain the necessary information to completely verify medical devices. The objective of this work was to incorporate medical guidelines into the verification process in order to increase the reliability of software-based medical devices. Medical devices are developed using the model-driven method Deterministic Models for Signal Processing of Embedded Systems (DMOSES). This method uses Unified Modeling Language (UML) models as a basis for the development of medical devices. The UML activity diagram is used to describe medical guidelines as workflows. The functionality of the medical devices is abstracted as a set of actions that is modeled within these workflows. In this paper, the UML models are verified using the UPPAAL model checker. For this purpose, a formalization approach for the UML models using timed automata (TA) is presented. A set of requirements is verified by the proposed approach for a navigation-guided biopsy, demonstrating the capability to identify errors or optimization points both in the workflow and in the system design of the navigation device. In addition, an open-source Eclipse plug-in was developed for the automated transformation of UML models into TA models that are automatically verified using UPPAAL. The proposed method enables developers to model medical devices and their clinical environment, using clinical workflows, as one UML diagram. Additionally, the system design can be formally verified automatically.
Salkovskis, Paul M; Millar, Josie; Gregory, James D; Wahl, Karina
2017-03-01
Repeated checking in OCD can be understood from a cognitive perspective as the motivated need to achieve certainty about the outcome of a potentially risky action, leading to the application of Elevated Evidence Requirements (EER) and overuse of subjective criteria. Twenty-four obsessional checkers, 22 anxious controls, and 26 non-clinical controls were interviewed about and rated recent episodes where they felt (a) they needed to check and (b) checked mainly out of habit (i.e. not obsessionally). Both subjective and objective criteria were rated as significantly more important in obsessional checkers than in controls; obsessional checkers also used more criteria overall for the termination of the check, and rated more criteria as "extremely important" than the control groups. The termination of the check was rated as more effortful for obsessional checkers than for the comparison groups. Analysis of the interview data was consistent with the ratings. Feelings of "rightness" were associated with the termination of a check for obsessional checkers but not for controls. Results were consistent with the proposal that the use of "just right feelings" to terminate checking are related to EER.
Monitoring Programs Using Rewriting
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Lan, Sonie (Technical Monitor)
2001-01-01
We present a rewriting algorithm for efficiently testing future time Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications, corresponding to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property and then suggest an optimized algorithm based on transforming LTL formulae. We use the Maude rewriting logic, which turns out to provide a good notation and is supported by an efficient rewriting engine for performing these experiments. The work constitutes part of the Java PathExplorer (JPAX) project, the purpose of which is to develop a flexible tool for monitoring Java program executions.
Using Runtime Analysis to Guide Model Checking of Java Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Norvig, Peter (Technical Monitor)
2001-01-01
This paper describes how two runtime analysis algorithms, an existing data race detection algorithm and a new deadlock detection algorithm, have been implemented to analyze Java programs. Runtime analysis is based on the idea of executing the program once and observing the generated run to extract various kinds of information. This information can then be used to predict whether other, different runs may violate some properties of interest, in addition of course to demonstrating whether the generated run itself violates such properties. These runtime analyses can be performed stand-alone to generate a set of warnings. It is furthermore demonstrated how these warnings can be used to guide a model checker, thereby reducing the search space. The described techniques have been implemented in the home-grown Java model checker called PathFinder.
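A common realization of such deadlock prediction is a lock-order graph: record an edge l1 -> l2 whenever a thread acquires l2 while holding l1, and warn if the graph has a cycle. The sketch below is a minimal illustration under that assumption, not the paper's algorithm.

    import java.util.*;

    class LockGraph {
        private final Map<Object, Set<Object>> edges = new HashMap<>();

        // Called on each monitor acquisition with the thread's currently held locks.
        void onAcquire(Deque<Object> locksHeld, Object newLock) {
            for (Object held : locksHeld)
                edges.computeIfAbsent(held, k -> new HashSet<>()).add(newLock);
        }

        boolean hasCycle() {                       // DFS over the lock-order graph
            Set<Object> done = new HashSet<>();
            for (Object n : edges.keySet())
                if (dfs(n, new HashSet<>(), done)) return true;
            return false;
        }

        private boolean dfs(Object n, Set<Object> onPath, Set<Object> done) {
            if (onPath.contains(n)) return true;   // back edge: potential deadlock
            if (!done.add(n)) return false;        // already fully explored
            onPath.add(n);
            for (Object m : edges.getOrDefault(n, Set.of()))
                if (dfs(m, onPath, done)) return true;
            onPath.remove(n);
            return false;
        }
    }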
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
USAFA/8086 - A State of the Art Microprocessor System. Volume II. Software Documentation.
1980-06-01
[OCR-garbled excerpt from a scanned listing. Recoverable content: data structures giving the operating system indirect access to memory; data transfer utilities of the DISK86 module, including an IOPB checker and the routines LINK$IN$, DIR$IN$, ABM$IN$ and DATA$IN$ (Figure 8, Data Transfer Utilities); and support routines whose functions are listed in Table 3 (Support Procedures), e.g. ABM$ZERO, "Makes a given sector on a..."]
Model Checking Failed Conjectures in Theorem Proving: A Case Study
NASA Technical Reports Server (NTRS)
Pike, Lee; Miner, Paul; Torres-Pomales, Wilfredo
2004-01-01
Interactive mechanical theorem proving can provide high assurance of correct design, but it can also be a slow iterative process. Much time is spent determining why a proof of a conjecture is not forthcoming. In some cases, the conjecture is false and in others, the attempted proof is insufficient. In this case study, we use the SAL family of model checkers to generate a concrete counterexample to an unproven conjecture specified in the mechanical theorem prover, PVS. The focus of our case study is the ROBUS Interactive Consistency Protocol. We combine the use of a mechanical theorem prover and a model checker to expose a subtle flaw in the protocol that occurs under a particular scenario of faults and processor states. Uncovering the flaw allows us to mend the protocol and complete its general verification in PVS.
SimCheck: An Expressive Type System for Simulink
NASA Technical Reports Server (NTRS)
Roy, Pritam; Shankar, Natarajan
2010-01-01
MATLAB Simulink is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical systems. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We extend the type system of Simulink with annotations and dimensions/units associated with ports and links. These types can capture invariants on signals as well as relations between signals. We define a type-checker that checks the wellformedness of Simulink blocks with respect to these type annotations. The type checker generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect type errors, demonstrate counterexamples, generate test cases, or prove the absence of type errors. Our work is an initial step toward the symbolic analysis of MATLAB Simulink models.
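SimCheck discharges its proof obligations through an SMT solver; as a language-agnostic sketch of the underlying unit-checking idea (not SimCheck's syntax or Yices encoding), the example below represents units as exponent vectors over base dimensions and checks a hypothetical Gain block for consistency.

```python
# Hedged sketch of dimension/unit checking: a unit is a dict of base
# dimension exponents, e.g. m/s**2 -> {'m': 1, 's': -2}; a block imposes
# equality constraints between derived and annotated units.

def mul(u, v):
    out = {d: u.get(d, 0) + v.get(d, 0) for d in set(u) | set(v)}
    return {d: e for d, e in out.items() if e}

def check_gain_block(in_unit, gain_unit, out_unit):
    """Output unit of a Gain block must equal input unit * gain unit."""
    derived = mul(in_unit, gain_unit)
    if derived != out_unit:
        raise TypeError(f"unit mismatch: expected {out_unit}, got {derived}")

# velocity [m/s] scaled by a mass gain [kg] must yield momentum [kg m/s]
check_gain_block({'m': 1, 's': -1}, {'kg': 1}, {'kg': 1, 'm': 1, 's': -1})
# a wrong output annotation is caught as a type error:
try:
    check_gain_block({'m': 1, 's': -1}, {'kg': 1}, {'m': 1, 's': -1})
except TypeError as e:
    print(e)
```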
Model Checking JAVA Programs Using Java Pathfinder
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Pressburger, Thomas
2000-01-01
This paper describes a translator called JAVA PATHFINDER from JAVA to PROMELA, the "programming language" of the SPIN model checker. The purpose is to establish a framework for verification and debugging of JAVA programs based on model checking. This work should be seen as part of a broader attempt to make formal methods applicable "in the loop" of programming within NASA's areas such as space, aviation, and robotics. Our main goal is to create automated formal methods such that programmers themselves can apply these in their daily work (in the loop) without the need for specialists to manually reformulate a program into a different notation in order to analyze the program. This work is a continuation of an effort to formally verify, using SPIN, a multi-threaded operating system programmed in Lisp for the Deep-Space 1 spacecraft, and of previous work in applying existing model checkers and theorem provers to real applications.
Learning Game Evaluation Functions with a Compound Linear Machine.
1980-03-01
[OCR-garbled excerpt. Recoverable fragments: contents entries "Comparison to Non-Learning Shannon Type Programs", "Comparison to Samuel's Shannon Type Checker Program", and "Comparison to an Advice-Taking Shannon…"; body text noting that among examples of game-playing programs the most significant is usually held to be A. Samuel's checker-playing program, with a citation to that program (GRC, 1978:54-72), and that another related study performed for the Air Force recommends researching computerized…]
Software Quality Control at Belle II
NASA Astrophysics Data System (ADS)
Ritter, M.; Kuhr, T.; Hauth, T.; Gebard, T.; Kristof, M.; Pulvermacher, C.;
2017-10-01
Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over one million lines of C++ and Python code, counting only the part included in offline software releases. There are several thousand commits to the central repository by about 100 individual developers per year. Keeping the software stack coherent and of such high quality that it can be sustained and used efficiently for data acquisition, simulation, reconstruction, and analysis over the lifetime of the Belle II experiment is a challenge. A set of tools is employed to monitor the quality of the software and provide fast feedback to the developers. They are integrated in a machinery that is controlled by a buildbot master and automates the quality checks. The tools include different compilers, cppcheck, the clang static analyzer, valgrind memcheck, doxygen, a geometry overlap checker, a check for missing or extra library links, unit tests, steering file level tests, a sophisticated high-level validation suite, and an issue tracker. The technological development infrastructure is complemented by organizational means to coordinate the development.
Crowdsourcing quality control for Dark Energy Survey images
NASA Astrophysics Data System (ADS)
Melchior, P.; Sheldon, E.; Drlica-Wagner, A.; Rykoff, E. S.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Doel, P.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Jarvis, M.; Kuehn, K.; Li, T. S.; Maia, M. A. G.; March, M.; Marshall, J. L.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Wester, W.; Zhang, Y.
2016-07-01
We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net.
Gershuny, B S; Sher, K J; Rossy, L; Bishop, A K
2000-03-01
The current study was conducted to better understand relations between personality and anxiety in general, and personality differences between compulsive checkers and nonchecking anxious individuals in particular. Participants included a nonclinical undergraduate sample of 36 compulsive checkers, 33 nonchecking anxious controls and 33 nonchecking nonanxious controls who were compared on five basic personality dimensions: emotional stability, extraversion, agreeableness, conscientiousness and intellect (Goldberg, 1992). Results indicated that a combined group of all anxious individuals was less extraverted and less emotionally stable than nonchecking/nonanxious controls. Results further indicated that compulsive checkers were less emotionally stable and more conscientious than nonchecking anxious controls. The implications of these findings, as well as the impact of the order of personality item presentation, are considered and discussed.
The neXtProt peptide uniqueness checker: a tool for the proteomics community.
Schaeffer, Mathieu; Gateau, Alain; Teixeira, Daniel; Michel, Pierre-André; Zahn-Zabal, Monique; Lane, Lydie
2017-11-01
The neXtProt peptide uniqueness checker allows scientists to determine which peptides can be used to validate the existence of human proteins, i.e. which map uniquely versus multiply to human protein sequences, taking into account isobaric substitutions, alternative splicing and single amino acid variants. The pepx program is available at https://github.com/calipho-sib/pepx and can be launched from the command line or through a CGI web interface. Indexing requires a sequence file in FASTA format. The peptide uniqueness checker tool is freely available on the web at https://www.nextprot.org/tools/peptide-uniqueness-checker and from the neXtProt API at https://api.nextprot.org/.
Spacecraft command verification: The AI solution
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Smith, Brian K.
1990-01-01
Recently, a knowledge-based approach was used to develop a system called the Command Constraint Checker (CCC) for TRW. CCC was created to automate the process of verifying spacecraft command sequences. To check command files by hand for timing and sequencing errors is a time-consuming and error-prone task. Conventional software solutions were rejected when it was estimated that it would require 36 man-months to build an automated tool to check constraints by conventional methods. Using rule-based representation to model the various timing and sequencing constraints of the spacecraft, CCC was developed and tested in only three months. By applying artificial intelligence techniques, CCC designers were able to demonstrate the viability of AI as a tool to transform difficult problems into easily managed tasks. The design considerations used in developing CCC are discussed and the potential impact of this system on future satellite programs is examined.
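As a hedged, language-agnostic sketch of the rule-based idea (not TRW's CCC implementation), the example below declares timing and sequencing constraints as data and lets a generic engine check a command sequence against them. All command names and rules are hypothetical.

```python
# Constraints as data: a minimum separation between command pairs and
# required-predecessor rules; the engine reports every violation.

MIN_SEPARATION = {('HEATER_ON', 'THRUSTER_FIRE'): 10.0}   # seconds
REQUIRED_BEFORE = {'THRUSTER_FIRE': 'VALVE_OPEN'}

def check_sequence(cmds):
    """cmds: list of (time, name) pairs sorted by time; returns violations."""
    violations, last_seen = [], {}
    for t, name in cmds:
        pre = REQUIRED_BEFORE.get(name)
        if pre is not None and pre not in last_seen:
            violations.append(f"{name} at t={t}: requires prior {pre}")
        for (a, b), gap in MIN_SEPARATION.items():
            if b == name and a in last_seen and t - last_seen[a] < gap:
                violations.append(
                    f"{name} at t={t}: only {t - last_seen[a]:.1f}s after {a}")
        last_seen[name] = t
    return violations

seq = [(0.0, 'HEATER_ON'), (5.0, 'THRUSTER_FIRE')]
for v in check_sequence(seq):
    print(v)
# THRUSTER_FIRE at t=5.0: requires prior VALVE_OPEN
# THRUSTER_FIRE at t=5.0: only 5.0s after HEATER_ON
```

New constraints become new table entries rather than new code, which is the property that made the rule-based approach fast to build and modify.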
NASA Astrophysics Data System (ADS)
Ghosh, Amal K.
2010-09-01
Parity generators and checkers are among the most important circuits in communication systems. With the development of multi-valued logic (MVL), parity generators and checkers for the modified trinary number (MTN) system are increasingly required; the proposed system realizes them using recently developed optoelectronic technology. The system also meets the tremendous need for speed by exploiting savart plates and spatial light modulators (SLMs) in an optical tree architecture (OTA).
HiVy automated translation of stateflow designs for model checking verification
NASA Technical Reports Server (NTRS)
Pingree, Paula
2003-01-01
The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the SPIN model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as the tool set's intermediate format.
Practical Formal Verification of Diagnosability of Large Models via Symbolic Model Checking
NASA Technical Reports Server (NTRS)
Cavada, Roberto; Pecheur, Charles
2003-01-01
This document reports on the activities carried out during a four-week visit of Roberto Cavada at the NASA Ames Research Center. The main goal was to test the practical applicability of the proposed framework, in which a diagnosability problem is reduced to a Symbolic Model Checking problem. Section 2 contains a brief explanation of the major techniques currently used in Symbolic Model Checking, and how these techniques can be tuned in order to obtain good performance when using Model Checking tools. Diagnosability is performed on large and structured models of real plants. Section 3 describes how these plants are modeled, and how models can be simplified to improve the performance of Symbolic Model Checkers. Section 4 reports scalability results. Three test cases are briefly presented, and several parameters and techniques have been applied on those test cases in order to produce comparison tables. Furthermore, a comparison between several Model Checkers is reported. Section 5 summarizes the application of diagnosability verification to a real application. Several properties have been tested, and the results have been highlighted. Finally, section 6 draws some conclusions, and outlines future lines of research.
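As a hedged, explicit-state illustration of the reduction (a symbolic model checker explores the same product space with BDDs or SAT instead), a system is non-diagnosable when a "twin" of the plant admits two runs with identical observations of which exactly one contains the fault. The toy plant below is an assumption for the example, and the sketch pairs unobservable moves synchronously and treats a revisited ambiguous pair as an ambiguous loop, which a full twin-plant check would confirm.

```python
from collections import deque

# toy plant: state -> list of (event, next_state); 'f' is the fault
# event, 'u' another unobservable event, everything else observable
PLANT = {'s0': [('f', 's1'), ('u', 's2')],
         's1': [('a', 's1')],
         's2': [('a', 's2')]}
UNOBSERVABLE = {'f', 'u'}

def diagnosable(plant, init='s0'):
    start = (init, False, init, False)   # (state1, fault1, state2, fault2)
    queue, seen = deque([start]), {start}
    while queue:
        p, fp, q, fq = queue.popleft()
        for ev1, n1 in plant[p]:
            for ev2, n2 in plant[q]:
                obs1 = ev1 if ev1 not in UNOBSERVABLE else None
                obs2 = ev2 if ev2 not in UNOBSERVABLE else None
                if obs1 != obs2:         # the two runs must look identical
                    continue
                nxt = (n1, fp or ev1 == 'f', n2, fq or ev2 == 'f')
                if nxt[1] != nxt[3] and nxt in seen:
                    return False         # ambiguous pair repeats: fault can
                if nxt not in seen:      # stay hidden forever
                    seen.add(nxt)
                    queue.append(nxt)
    return True

print(diagnosable(PLANT))   # False: the fault is invisible in this plant
```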
John Rogers: "Checkers Up at the Farm."
ERIC Educational Resources Information Center
Zeller, Terry
1987-01-01
Based on John Rogers' 1887 painted plaster sculpture called "Checkers Up at the Farm," this lesson seeks to introduce primary-level students to the idea of sculpture in the round and how sculpture can communicate ideas, emotions, and values. (JDH)
Joint and Soft Tissue Injections
Differences in neuropsychological performance between subtypes of obsessive-compulsive disorder.
Nedeljkovic, Maja; Kyrios, Michael; Moulding, Richard; Doron, Guy; Wainwright, Kylie; Pantelis, Chris; Purcell, Rosemary; Maruff, Paul
2009-03-01
Neuropsychological studies have suggested that frontal-striatal dysfunction plays a role in obsessive-compulsive disorder (OCD), although findings have been inconsistent, possibly due to heterogeneity within the disorder and methodological issues. The purpose of the present study was therefore to compare the neuropsychological performance of different subtypes of OCD and matched non-clinical controls (NCs) on the Cambridge Automated Neuropsychological Test Battery (CANTAB). Fifty-nine OCD patients and 59 non-clinical controls completed selected tests from CANTAB examining executive function, visual memory and attentional-set shifting. Depression, anxiety and OCD symptoms were also assessed. From 59 OCD patients, four subtypes were identified: (i) washers; (ii) checkers; (iii) obsessionals; and (iv) mixed symptom profile. Comparisons between washers, checkers, obsessionals and NCs indicated few differences, although checkers were generally found to exhibit poorer performance on spatial working memory, while obsessionals performed poorly on the spatial recognition task. Both checkers and the mixed subgroups showed slowed initial movement on the Stockings of Cambridge planning task and poorer pattern recognition relative to NCs. Overall the results suggested greater impairments in performance on neuropsychological tasks in checkers relative to other subtypes, although the observed effects were small and the conclusions limited by the small subtype samples. Future research will need to account for factors that influence neuropsychological performance in OCD subtypes.
AIDE - Advanced Intrusion Detection Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Cathy L.
2013-04-28
Would you like to know when someone has dropped an undesirable executable binary on your system? What about something less malicious, such as a software installation by a user? What about the user who decides to install a newer version of mod_perl or PHP on your web server without letting you know beforehand? Or even something as simple as an undocumented config file change made by another member of the admin group? Do you even want to know about all the changes that happen on a daily basis on your server? The purpose of an intrusion detection system (IDS) is to detect unauthorized, possibly malicious activity. The purpose of a host-based IDS, or file integrity checker, is to check for unauthorized changes to key system files, binaries, libraries, and directories on the system. AIDE is an open-source file and directory integrity checker. AIDE will let you know when a file or directory has been added, deleted, or modified. It is included with Red Hat Enterprise Linux 6 and is available for other Linux distributions. This is a case study describing the process of configuring AIDE on an out-of-the-box RHEL6 installation. Its goal is to illustrate the thinking and the process by which a useful AIDE configuration is built.
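To make the file-integrity-checker concept concrete, here is a minimal self-contained sketch of what a tool like AIDE does internally: record a baseline of content hashes and metadata, then report anything added, deleted, or modified. It is an illustration of the principle, not AIDE's code or configuration syntax.

```python
# Baseline-and-compare integrity check: sha256 of content plus selected
# stat() metadata per file, stored as JSON.
import hashlib, json, os, sys

def snapshot(root):
    db = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            with open(path, 'rb') as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            db[path] = {'sha256': digest, 'mode': st.st_mode,
                        'uid': st.st_uid, 'size': st.st_size}
    return db

def compare(old, new):
    for path in sorted(set(old) | set(new)):
        if path not in old:
            print('added:   ', path)
        elif path not in new:
            print('deleted: ', path)
        elif old[path] != new[path]:
            print('changed: ', path)

# usage: python integrity.py init /etc   then later: python integrity.py check /etc
if __name__ == '__main__':
    cmd, root = sys.argv[1], sys.argv[2]
    if cmd == 'init':
        with open('baseline.json', 'w') as f:
            json.dump(snapshot(root), f)
    else:
        with open('baseline.json') as f:
            compare(json.load(f), snapshot(root))
```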
Model Checking Real Time Java Using Java PathFinder
NASA Technical Reports Server (NTRS)
Lindstrom, Gary; Mehlitz, Peter C.; Visser, Willem
2005-01-01
The Real Time Specification for Java (RTSJ) is an augmentation of Java for real time applications of various degrees of hardness. The central features of RTSJ are real time threads; user defined schedulers; asynchronous events, handlers, and control transfers; a priority inheritance based default scheduler; non-heap memory areas such as immortal and scoped, and non-heap real time threads whose execution is not impeded by garbage collection. The Robust Software Systems group at NASA Ames Research Center has JAVA PATHFINDER (JPF) under development, a Java model checker. JPF at its core is a state exploring JVM which can examine alternative paths in a Java program (e.g., via backtracking) by trying all nondeterministic choices, including thread scheduling order. This paper describes our implementation of an RTSJ profile (subset) in JPF, including requirements, design decisions, and current implementation status. Two examples are analyzed: jobs on a multiprogramming operating system, and a complex resource contention example involving autonomous vehicles crossing an intersection. The utility of JPF in finding logic and timing errors is illustrated, and the remaining challenges in supporting all of RTSJ are assessed.
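A hedged sketch of the core JPF idea described above: treat scheduling choices as nondeterminism and examine every interleaving. The toy below brute-force enumerates schedules of a two-thread counter increment; JPF explores the same choices far more efficiently via backtracking and state matching inside its state-exploring JVM.

```python
# Every interleaving of two unsynchronized "load; add; store" threads is
# replayed; some orders lose an update, violating count == 2.
import itertools

def run(schedule):
    count, regs, pc = 0, {0: 0, 1: 0}, {0: 0, 1: 0}
    for tid in schedule:
        step = pc[tid]
        if step == 0:
            regs[tid] = count          # load shared counter
        elif step == 1:
            regs[tid] += 1             # add in a private register
        elif step == 2:
            count = regs[tid]          # store back
        pc[tid] += 1
    return count

# all distinct interleavings of two 3-step threads
bad = [s for s in set(itertools.permutations([0, 0, 0, 1, 1, 1]))
       if run(s) != 2]
print(f"{len(bad)} schedules violate the assertion count == 2")
```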
Nixon's Checkers: A Rhetoric-Communication Criticism.
ERIC Educational Resources Information Center
Kneupper, Charles W.; Mabry, Edward A.
Richard Nixon's "Checkers" speech, a response to charges brought against the "Nixon fund," was primarily an effort to explain the behavior of Eisenhower's 1952 presidential-campaign staff. The effectiveness of this speech was largely due to Nixon's self-disclosure within the context of the speech's narrative mode. In…
Food Marketing: Cashier-Checker. Teacher's Guide. Competency Based Curriculum.
ERIC Educational Resources Information Center
Froelich, Larry; And Others
This teacher's guide is designed to accompany the Competency Based Cashier-Checker Curriculum student materials--see note. Contents include a listing of reference materials, tool and equipment lists, copy of the table of contents for student competency sheets, teacher's suggestions, and answer keys for information sheets and exercises.…
The "Checkers" Speech and Televised Political Communication.
ERIC Educational Resources Information Center
Flaningam, Carl
Richard Nixon's 1952 "Checkers" speech was an innovative use of television for political communication. Like television news itself, the campaign fund crisis behind the speech can be thought of in the same terms as other television melodrama, with the speech serving as its climactic episode. The speech adapted well to television because…
Fontanesi, Luca; Vargiolu, Manuela; Scotti, Emilio; Latorre, Rocco; Faussone Pellegrini, Maria Simonetta; Mazzoni, Maurizio; Asti, Martina; Chiocchetti, Roberto; Romeo, Giovanni; Clavenzani, Paolo; De Giorgio, Roberto
2014-01-01
The English spotting coat color locus in rabbits, also known as the Dominant white spotting locus, is determined by an incompletely dominant allele (En). Rabbits homozygous for the recessive wild-type allele (en/en) are self-colored, heterozygous En/en rabbits are normally spotted, and homozygous En/En animals are almost completely white. Compared to vital en/en and En/en rabbits, En/En animals are subvital because of a dilated ("mega") cecum and ascending colon. In this study, we investigated the role of the KIT gene as a candidate for the English spotting locus in Checkered Giant rabbits and characterized the abnormalities affecting enteric neurons and c-kit positive interstitial cells of Cajal (ICC) in the megacolon of En/En rabbits. Twenty-one litters were obtained by crossing three Checkered Giant bucks (En/en) with nine Checkered Giant (En/en) and two en/en does, producing a total of 138 F1 and backcrossed rabbits. Resequencing all coding exons and portions of non-coding regions of the KIT gene in 28 rabbits of different breeds identified 98 polymorphisms. A single nucleotide polymorphism genotyped in all F1 families showed complete cosegregation with the English spotting coat color phenotype (θ = 0.00, LOD = 75.56). KIT gene expression in cecum and colon specimens of En/En (pathological) rabbits was 5-10% of that of en/en (control) rabbits. En/En rabbits showed reduced and altered c-kit immunolabelled ICC compared to en/en controls. Morphometric data on whole mounts of the ascending colon showed a significant decrease of HuC/D (P<0.05) and substance P (P<0.01) immunoreactive neurons in En/En vs. en/en. Electron microscopy analysis showed neuronal and ICC abnormalities in En/En tissues. The En/En rabbit model shows neuro-ICC changes reminiscent of the human non-aganglionic megacolon. This rabbit model may provide a better understanding of the molecular abnormalities underlying conditions associated with non-aganglionic megacolon.
Integrated Formal Analysis of Time-Triggered Ethernet
NASA Technical Reports Server (NTRS)
Dutertre, Bruno; Shankar, Nstarajan; Owre, Sam
2012-01-01
We present new results related to the verification of the Time-Triggered Ethernet (TTE) clock synchronization protocol. This work extends a previous verification of TTE based on model checking. We identify a suboptimal design choice in a compression function used in clock synchronization, and propose an improvement. We compare the original design and the improved definition using the SAL model checker.
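As a hedged illustration of the convergence idea underlying such compression functions (not the exact TTE definition), the classic fault-tolerant midpoint discards the f largest and f smallest clock readings and averages the surviving extremes, bounding the influence of faulty clocks.

```python
# Fault-tolerant midpoint: with at least 2f+1 readings, up to f Byzantine
# clocks cannot drag the fused correction outside the honest readings.

def fault_tolerant_midpoint(offsets, f):
    """offsets: perceived clock differences; f: tolerated faulty clocks."""
    if len(offsets) < 2 * f + 1:
        raise ValueError("need at least 2f+1 readings")
    trimmed = sorted(offsets)[f:len(offsets) - f]
    return (trimmed[0] + trimmed[-1]) / 2

# a faulty clock reporting an absurd offset barely moves the result
print(fault_tolerant_midpoint([-2.0, -1.0, 0.5, 1.0, 1000.0], f=1))  # 0.0
```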
Lammers, Youri; Peelen, Tamara; Vos, Rutger A; Gravendeel, Barbara
2014-02-06
Mixtures of internationally traded organic substances can contain parts of species protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). These mixtures often raise the suspicion of border control and customs offices, which can lead to confiscation, for example in the case of Traditional Chinese medicines (TCMs). High-throughput sequencing of DNA barcoding markers obtained from such samples provides insight into species constituents of mixtures, but manual cross-referencing of results against the CITES appendices is labor intensive. Matching DNA barcodes against NCBI GenBank using BLAST may yield misleading results both as false positives, due to incorrectly annotated sequences, and false negatives, due to spurious taxonomic re-assignment. Incongruence between the taxonomies of CITES and NCBI GenBank can result in erroneous estimates of illegal trade. The HTS barcode checker pipeline is an application for automated processing of sets of 'next generation' barcode sequences to determine whether these contain DNA barcodes obtained from species listed on the CITES appendices. This analytical pipeline builds upon and extends existing open-source applications for BLAST matching against the NCBI GenBank reference database and for taxonomic name reconciliation. In a single operation, reads are converted into taxonomic identifications matched with names on the CITES appendices. By inclusion of a blacklist and additional names databases, the HTS barcode checker pipeline prevents false positives and resolves taxonomic heterogeneity. The HTS barcode checker pipeline can detect and correctly identify DNA barcodes of CITES-protected species from reads obtained from TCM samples in just a few minutes. The pipeline facilitates and improves molecular monitoring of trade in endangered species, and can aid in safeguarding these species from extinction in the wild. The HTS barcode checker pipeline is available at https://github.com/naturalis/HTS-barcode-checker.
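The final cross-referencing step of such a pipeline can be sketched in a few lines: taxa identified from barcode reads are matched against the CITES appendices, a blacklist suppresses known mis-annotated GenBank taxa, and a synonyms table reconciles taxonomies. All names and listings below are illustrative stand-ins, not the pipeline's actual data.

```python
# Hedged sketch of CITES cross-referencing with blacklisting and
# taxonomic name reconciliation.

CITES = {'Panthera tigris': 'Appendix I', 'Aloe ferox': 'Appendix II'}
SYNONYMS = {'Aloe perfoliata var. ferox': 'Aloe ferox'}   # reconciliation
BLACKLIST = {'Homo sapiens'}          # common contaminant/mis-annotation

def flag_reads(identifications):
    """identifications: (read_id, taxon) pairs from BLAST matching."""
    for read_id, taxon in identifications:
        taxon = SYNONYMS.get(taxon, taxon)
        if taxon in BLACKLIST:
            continue
        if taxon in CITES:
            yield read_id, taxon, CITES[taxon]

hits = [('r1', 'Aloe perfoliata var. ferox'), ('r2', 'Homo sapiens'),
        ('r3', 'Zea mays')]
for rec in flag_reads(hits):
    print(rec)                        # ('r1', 'Aloe ferox', 'Appendix II')
```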
Formal Verification of a Power Controller Using the Real-Time Model Checker UPPAAL
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Larsen, Kim Guldstrand; Skou, Arne
1999-01-01
A real-time system for power-down control in audio/video components is modeled and verified using the real-time model checker UPPAAL. The system is supposed to reside in an audio/video component and control (read from and write to) links to neighbor audio/video components such as TV, VCR and remote-control. In particular, the system is responsible for the powering up and down of the component in between the arrival of data, and in order to do so in a safe way without loss of data, it is essential that no link interrupts are lost. Hence, a component system is a multitasking system with hard real-time requirements, and we present techniques for modeling time consumption in such a multitasked, prioritized system. The work has been carried out in a collaboration between Aalborg University and the audio/video company B&O. By modeling the system, 3 design errors were identified and corrected, and the following verification confirmed the validity of the design but also revealed the necessity for an upper limit of the interrupt frequency. The resulting design has been implemented and it is going to be incorporated as part of a new product line.
Collaborative Platform for DFM
2007-12-20
generation litho hotspot checkers have also been implemented in automated hotspot fixers that can automatically fix designs by making small changes… on the processing side (e.g., new CMP models, etch models, litho models) and on the circuit side (e.g., process-aware circuit analysis or yield optimization)… Since final gate CD is a function of not only litho, but also Post Exposure Bake, ashing, and etch, the processing module can be augmented with more…
2015-09-01
[OCR-garbled excerpt. Recoverable fragments: a figure caption ("Figure 20. Excel VBA Codes for Checker"); acronym-list entries (National Vulnerability Database; OS, Operating System; SQL, Structured Query Language; VC, Verification Condition; VBA, Visual Basic for Applications); and body text stating that the checker is an Excel Visual Basic for Applications (VBA) script that checks each of these assertions for detectability by Daikon.]
Apprendre l'orthographe avec un correcteur orthographique? (Learning Spelling with a Spell-Checker?)
ERIC Educational Resources Information Center
Desmarais, Lise
1998-01-01
Reports a study with 27 adults, both native French-speakers and native English-speakers, on the effectiveness of using a spell-checker as the core element to teach French spelling. The method used authentic materials, individualized monitoring, screen and hard-copy text reading, and content sequencing based on errors. The approach generated…
Propel: Tools and Methods for Practical Source Code Model Checking
NASA Technical Reports Server (NTRS)
Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem
2003-01-01
The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. Together, the design concept and the associated model checking toolset are called Propel. Propel builds upon the Java PathFinder (JPF) tool, an explicit state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaptation of existing best design practices that has the desired side-effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel capabilities on customer applications.
Improving Systematic Constraint-driven Analysis Using Incremental and Parallel Techniques
2012-05-01
[OCR-garbled excerpt. Recoverable fragments: acknowledgements mentioning the modeling of latency of a cloud-based subsystem and comments from the author's research group; contents entries "One structurally complex argument", "Multiple independent arguments", and "Subject Tools", the latter listing "JPF - Model Checker" and "Alloy - Using a SAT…".]
The Pattern and Range of Movement of a Checkered Beetle Predator Relative to its Bark Beetle Prey
James T. Cronin; John D. Reeve; Richard Wilkens; Peter Turchin
2000-01-01
Theoretical studies of predator-prey population dynamics have increasingly centered on the role of space and the movement of organisms, yet empirical studies have been slow to follow suit. Herein, we quantified the long-range movement of a checkered beetle, Thanasimus dubius, an important predator of a pernicious forest pest, the southern…
ERIC Educational Resources Information Center
Lin, Po-Han; Liu, Tzu-Chien; Paas, Fred
2017-01-01
Computer-based spell checkers help to correct misspellings instantly. Almost all word processing devices are now equipped with a spell-check function that either automatically corrects errors or provides a list of intended words. However, it is not clear how reliance on this convenient technological solution affects spelling learning.…
A Methodology for Formal Hardware Verification, with Application to Microprocessors.
1993-08-29
concurrent programming languages. Proceedings of the NATO Advanced Study Institute on Logics and Models of Concurrent Systems (Colle-sur-Loup, France, 8-19… restricted class of formulas. Bose and Fisher [26] developed a symbolic model checker based on a Cosmos switch-level model. Their modeling approach… verification using SDVS: the method and a case study. 17th Annual Microprogramming Workshop (New Orleans, LA, 30 October-2 November 1984). Published as
Formal Methods Tool Qualification
NASA Technical Reports Server (NTRS)
Wagner, Lucas G.; Cofer, Darren; Slind, Konrad; Tinelli, Cesare; Mebsout, Alain
2017-01-01
Formal methods tools have been shown to be effective at finding defects in safety-critical digital systems including avionics systems. The publication of DO-178C and the accompanying formal methods supplement DO-333 allows applicants to obtain certification credit for the use of formal methods without providing justification for them as an alternative method. This project conducted an extensive study of existing formal methods tools, identifying obstacles to their qualification and proposing mitigations for those obstacles. Further, it interprets the qualification guidance for existing formal methods tools and provides case study examples for open source tools. This project also investigates the feasibility of verifying formal methods tools by generating proof certificates which capture proof of the formal methods tool's claim, which can be checked by an independent, proof certificate checking tool. Finally, the project investigates the feasibility of qualifying this proof certificate checker, in the DO-330 framework, in lieu of qualifying the model checker itself.
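The proof-certificate idea can be illustrated with the classic propositional instance: rather than qualifying the prover, a far simpler checker replays its certificate. The sketch below (an illustration, not the project's certificate format) checks a resolution refutation in which each step resolves two earlier clauses on a single pivot literal.

```python
# A certificate is a list of resolution steps; the check succeeds only
# if replaying every step derives the empty clause.

def resolve(c1, c2, pivot):
    if pivot not in c1 or -pivot not in c2:
        raise ValueError("pivot does not match the clauses")
    return frozenset((c1 - {pivot}) | (c2 - {-pivot}))

def check_refutation(clauses, steps):
    """clauses: frozensets of int literals; steps: (i, j, pivot) triples."""
    derived = list(clauses)
    for i, j, pivot in steps:
        derived.append(resolve(derived[i], derived[j], pivot))
    return frozenset() in derived

# certificate that {x}, {~x or y}, {~y} is unsatisfiable
cnf = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]
proof = [(0, 1, 1),    # resolve x with (~x or y) -> y
         (3, 2, 2)]    # resolve y with ~y        -> empty clause
print(check_refutation(cnf, proof))    # True
```

The checker is a few lines of straightforward code, which is exactly why qualifying it is expected to be cheaper than qualifying the model checker that produced the proof.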
ERIC Educational Resources Information Center
Figueredo, Lauren; Varnhagen, Connie K.
2005-01-01
We investigated expectations regarding a writer's responsibility to proofread text for spelling errors when using a word processor. Undergraduate students read an essay and completed a questionnaire regarding their perceptions of the author and the quality of the essay. They then manipulated type of spelling error (no error, homophone error,…
Press to Test: Shop-Built BJT Checker Is Easy to Use
ERIC Educational Resources Information Center
Harman, Charles
2008-01-01
Whether a student or an instructor in an electronics lab, having the means to check the operation of a bipolar junction transistor (BJT) at the proto-board stage is a blessing. Most students do not have the experience or knowledge that it takes to recognize whether or not a BJT is operating. With this handy BJT checker, a student or the instructor…
Is This Op-Amp Any Good?: Lab-Built Checker Removes All Doubt!
ERIC Educational Resources Information Center
Harman, Charles
2007-01-01
Electronics instructors and students find it very helpful to be able to check an operational amplifier at the proto-board stage. Most students lack the experience or knowledge that it takes to recognize whether an op-amp is operating normally or not. This article discusses a handy op-amp checker that allows one to check and/or test op-amps at the…
Verifying AI Plan Models: Even the Best Laid Plans Need to be Verified
NASA Technical Reports Server (NTRS)
Smith, Margaret; Cucullu, Gordon; Holzmann, Gerard; Smith, Benjamin
2004-01-01
This viewgraph presentation reviews work on model checking, and specifically the SPIN model checker. The goal of this work is to retire a significant class of risks associated with the use of Artificial Intelligence (AI) planners on missions. This effort must provide tangible testing results to a mission using AI technology. It is hoped that it will be possible to leverage the technique and tools throughout NASA.
Wang, Feifan; Gong, Zibo; Hu, Xiaoyong; Yang, Xiaoyu; Yang, Hong; Gong, Qihuang
2016-04-13
The nanoscale chip-integrated all-optical logic parity checker is an essential core component for optical computing systems and ultrahigh-speed ultrawide-band information processing chips. Unfortunately, little experimental progress has been made in development of these devices to date because of material bottleneck limitations and a lack of effective realization mechanisms. Here, we report a simple and efficient strategy for direct realization of nanoscale chip-integrated all-optical logic parity checkers in integrated plasmonic circuits in the optical communication range. The proposed parity checker consists of two-level cascaded exclusive-OR (XOR) logic gates that are realized based on the linear interference of surface plasmon polaritons propagating in the plasmonic waveguides. The parity of the number of logic 1s in the incident four-bit logic signals is determined, and the output signal is given the logic state 0 for even parity (and 1 for odd parity). Compared with previous reports, the overall device feature size is reduced by more than two orders of magnitude, while ultralow energy consumption is maintained. This work raises the possibility of realization of large-scale integrated information processing chips based on integrated plasmonic circuits, and also provides a way to overcome the intrinsic limitations of serious surface plasmon polariton losses for on-chip integration applications.
Kuchinke, W; Wiegelmann, S; Verplancke, P; Ohmann, C
2006-01-01
Our objectives were to analyze the possibility of an exchange of an entire clinical study between two different and independent study software solutions. The question addressed was whether a software-independent transfer of study metadata can be performed without programming effort and with software routinely used for clinical research. Study metadata was transferred with the ODM standard (CDISC). The study software systems employed were MACRO (InferMed) and XTrial (XClinical). For the proof of concept, a test study was created with MACRO and exported as ODM. For modification and validation of the ODM export file, XML-Spy (Altova) and ODM-Checker (XML4Pharma) were used. Through the exchange of a complete clinical study between two different study software solutions, a proof of concept of the technical feasibility of a system-independent metadata exchange was conducted successfully. The interchange of study metadata between two different systems at different centers was performed with minimal expenditure. A small number of mistakes had to be corrected in order to generate a syntactically correct ODM file, and a "vendor extension" had to be inserted. After these modifications, XTrial displayed the study, including all data fields, correctly. However, the optical appearance of the two CRFs (case report forms) was different. ODM can be used as an exchange format for clinical studies between different study software. Thus, new forms of cooperation through the exchange of metadata seem possible, for example the joint creation of electronic study protocols or CRFs at different research centers. Although the ODM standard represents a clinical study completely, it contains no information about the representation of data fields in CRFs.
Play It Again, Sam! Adapting Common Games into Multimedia Models Used for Student Reviews.
ERIC Educational Resources Information Center
Metcalf, Karen K.; Barlow, Amy; Hudson, Lisa; Jones, Elizabeth; Lyons, Dennis; Piersall, James; Munfus, Laureen
1998-01-01
Provides guidelines on how to adapt common games such as checkers, tic tac toe, obstacle courses, and memory joggers into interactive games in multimedia courseware. Emphasizes creating generic games that can be recycled and used for multiple topics to save development time and keep costs low. Discusses topic themes, game structure, and…
Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes
NASA Astrophysics Data System (ADS)
Octova, A.; Sule, R.
2018-04-01
Travel-time cross-hole seismic tomography is applied to describe the structure of the subsurface. The sources are placed in one borehole and the receivers are placed in the others. The first-arrival travel time recorded by each receiver is used as the input data for the seismic tomography method. This research is divided into three steps. The first step is reconstructing a synthetic model based on the field parameters, in configurations of 24 receivers and 45 receivers. The second step is applying the inversion process to the field data, which consist of five pairs of boreholes. The last step is testing the quality of the tomogram with a resolution test. Data processing using the FAST software produces an explicit shape that resembles the initial reconstruction of the synthetic model with 45 receivers. The tomographic processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5 with elongated and rounded structures. In resolution tests using a checker-board, anomalies can still be identified down to a size of 2 m x 2 m. Travel-time cross-hole seismic tomography analysis shows this method is very well suited to describing subsurface structure and layer boundaries. The size and position of anomalies can be recognized and interpreted easily.
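The checker-board resolution test mentioned above inverts synthetic travel times computed through a known alternating anomaly pattern and judges resolution by how well the pattern is recovered. As a hedged sketch of the first ingredient, the code below builds such a synthetic model grid; the velocity values and grid sizes are assumptions for illustration.

```python
# Build a checker-board velocity model: background v0 with +/- dv
# anomalies in alternating blocks of `cell` x `cell` grid nodes.
import numpy as np

def checkerboard_model(nx, nz, cell, v0=2000.0, dv=200.0):
    ix, iz = np.meshgrid(np.arange(nx), np.arange(nz), indexing='ij')
    sign = np.where(((ix // cell + iz // cell) % 2) == 0, 1.0, -1.0)
    return v0 + dv * sign

model = checkerboard_model(nx=20, nz=30, cell=2)   # 2x2-node anomalies
print(model[:4, :4])   # alternating 2200/1800 m/s blocks
```

The recoverable anomaly size (here 2 m x 2 m in the paper's test) is found by shrinking the cells until the inversion can no longer reproduce the pattern between the boreholes.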
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This eleventh of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Typist I, Grocery Checker, File Clerk, Receptionist; Bank Teller; and Clerk, General Office. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number,…
Chopin, Joshua; Kumar, Pankaj; Miklavcic, Stanley J
2018-01-01
One of the main challenges associated with image-based field phenotyping is the variability of illumination. During a single day's imaging session, or between different sessions on different days, the sun moves in and out of cloud cover and has varying intensity. How is one to know from consecutive images alone if a plant has become darker over time, or if the weather conditions have simply changed from clear to overcast? This is a significant problem to address as colour is an important phenotypic trait that can be measured automatically from images. In this work we use an industry standard colour checker to balance the colour in images within and across every day of a field trial conducted over four months in 2016. By ensuring that the colour checker is present in every image we are afforded a 'ground truth' to correct for varying illumination conditions across images. We employ a least squares approach to fit a quadratic model for correcting RGB values of an image in such a way that the observed values of the colour checker tiles align with their true values after the transformation. The proposed method is successful in reducing the error between observed and reference colour chart values in all images. Furthermore, the standard deviation of mean canopy colour across multiple days is reduced significantly after colour correction is applied. Finally, we use a number of examples to demonstrate the usefulness of accurate colour measurements in recording phenotypic traits and analysing variation among varieties and treatments.
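The least squares fit described above is compact enough to sketch directly. The example below fits a quadratic polynomial model mapping observed RGB to reference RGB using the colour checker tiles as ground truth; the function names and the synthetic tile data are assumptions for illustration, not the paper's code.

```python
# Quadratic colour correction by least squares: 10 polynomial features
# per pixel (bias, linear, squares, cross terms), one coefficient column
# per output channel.
import numpy as np

def quad_features(rgb):
    r, g, b = rgb.T
    return np.stack([np.ones_like(r), r, g, b,
                     r*r, g*g, b*b, r*g, r*b, g*b], axis=1)

def fit_correction(observed, reference):
    """observed, reference: (n_tiles, 3) arrays of tile colours."""
    X = quad_features(observed)
    coeffs, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return coeffs                      # (10, 3) matrix

def correct(image_rgb, coeffs):
    """image_rgb: (n_pixels, 3); apply the fitted transformation."""
    return quad_features(image_rgb) @ coeffs

rng = np.random.default_rng(0)
obs = rng.uniform(0, 1, (24, 3))                 # 24 checker tiles
ref = np.clip(0.9 * obs + 0.05 * obs**2, 0, 1)   # synthetic "truth"
coeffs = fit_correction(obs, ref)
print(np.abs(correct(obs, coeffs) - ref).max())  # near-zero residual
```

Fitting one such transformation per image, anchored to the same physical checker, is what makes colours comparable across images taken under different illumination.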
NASA Astrophysics Data System (ADS)
Kumar, Santosh; Chanderkanta; Amphawan, Angela
2016-04-01
Parity is an extra bit added to digital information to detect errors at the receiver end. Parity can be even or odd: in the case of even parity, the number of ones, including the parity bit, is even, and the reverse holds for odd parity. The circuit used to generate the parity bit at the transmitter side is called the parity generator, and the circuit used to check the parity at the receiver side is called the parity checker. In this paper, even and odd parity generator and checker circuits are designed using the electro-optic effect inside lithium niobate based Mach-Zehnder interferometers (MZIs). The MZI structures collectively show a powerful capability for switching an input optical signal to a desired output port from a collection of output ports. The paper presents a mathematical description of the proposed device and thereafter a simulation using MATLAB. The study is verified using the beam propagation method (BPM).
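In plain logic terms, the scheme the MZI circuits implement optically reduces to cascaded XORs, as the short sketch below shows (software illustration of the logic only, not the optical design).

```python
# Generator: XOR of the data bits chooses the parity bit; checker: XOR of
# data plus parity flags an error when the total parity is unexpected.
# In the device these XORs are realized as cascaded optical XOR gates.
from functools import reduce
from operator import xor

def parity_bit(data_bits, even=True):
    p = reduce(xor, data_bits, 0)
    return p if even else p ^ 1

def parity_check(data_bits, parity, even=True):
    total = reduce(xor, data_bits, parity)
    return total == (0 if even else 1)

word = [1, 0, 1, 1]
p = parity_bit(word)                       # 1 -> even number of ones overall
assert parity_check(word, p)
assert not parity_check([1, 0, 0, 1], p)   # a single-bit flip is detected
```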
Spectral imaging using consumer-level devices and kernel-based regression.
Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko
2016-06-01
Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using a Nikon D80 RGB camera and a Dell Vostro 2520 laptop screen as the light source. Estimations were performed without the spectral characteristics of the devices, emphasizing simplicity of the training sets and of the estimation model optimization. Spectral and color error images are shown for the estimations, using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where in the latter case the median heuristic approach for model optimization and a link function for bounded estimation were evaluated. Results suggest modest requirements for a training set and show that all estimation models have markedly improved accuracy with respect to the DE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) compared with the weak training set (Digital ColorChecker SG) case when small representative training data were used in the estimation.
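As a hedged sketch of the regression setup (data shapes and regularization are assumptions, not the paper's exact configuration), kernel ridge regression with the first-degree inhomogeneous polynomial kernel maps camera RGB responses to reflectance spectra:

```python
# Kernel ridge regression from RGB to spectra with k(x, y) = x.y + 1.
import numpy as np

def poly_kernel(A, B):
    """First-degree inhomogeneous polynomial kernel."""
    return A @ B.T + 1.0

def fit(rgb_train, spectra_train, lam=1e-3):
    K = poly_kernel(rgb_train, rgb_train)
    return np.linalg.solve(K + lam * np.eye(len(K)), spectra_train)

def predict(rgb_test, rgb_train, alpha):
    return poly_kernel(rgb_test, rgb_train) @ alpha

rng = np.random.default_rng(1)
rgb = rng.uniform(0, 1, (24, 3))          # e.g. 24 ColorChecker patches
spectra = rng.uniform(0, 1, (24, 31))     # 400-700 nm at 10 nm steps
alpha = fit(rgb, spectra)
recon = predict(rgb, rgb, alpha)
print(np.abs(recon - spectra).mean())     # mean absolute training residual
```

Swapping in a Matérn kernel changes only `poly_kernel`; the ridge term keeps the solve well-posed even when the kernel matrix is low-rank, as it is here for a linear kernel on 3-channel inputs.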
Model checking for linear temporal logic: An efficient implementation
NASA Technical Reports Server (NTRS)
Sherman, Rivi; Pnueli, Amir
1990-01-01
This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was performed with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
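For a pure safety property the cross-product approach degenerates to a reachability search, which can be sketched compactly. The toy mutual exclusion system below is illustrative, not the report's benchmark, and the property structure is inlined as a violation predicate; full LTL requires composing with a property automaton instead.

```python
# Search the product of the program graph and a safety monitor for a
# reachable violating state: here G !(critical0 & critical1).

# program: two processes, each idle -> trying -> critical -> idle
LOCAL = {'idle': ['trying'], 'trying': ['critical'], 'critical': ['idle']}

def successors(state):
    for i in (0, 1):
        for nxt in LOCAL[state[i]]:
            yield tuple(nxt if j == i else state[j] for j in (0, 1))

def violates(state):
    return state == ('critical', 'critical')

def reachable_violation(init):
    stack, seen = [init], {init}
    while stack:
        s = stack.pop()
        if violates(s):
            return s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return None

print(reachable_violation(('idle', 'idle')))
# ('critical', 'critical'): with no lock, mutual exclusion fails
```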
2009-01-01
Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075
Component-Oriented Behavior Extraction for Autonomic System Design
NASA Technical Reports Server (NTRS)
Bakera, Marco; Wagner, Christian; Margaria,Tiziana; Hinchey, Mike; Vassev, Emil; Steffen, Bernhard
2009-01-01
Rich and multifaceted domain-specific specification languages like the Autonomic System Specification Language (ASSL) help to design reliable systems with self-healing capabilities. The GEAR game-based model checker has been used successfully to investigate properties of the ESA ExoMars Rover in depth. We show here how to enable GEAR's game-based verification techniques for ASSL via systematic model extraction from a behavioral subset of the language, and illustrate it on a description of the Voyager II space mission.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
This paper describes a translator from a new planning language named the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL) model checker. This translator has been developed in support of the Spacecraft Autonomy for Vehicles and Habitats (SAVH) project sponsored by the Exploration Technology Development Program, which is seeking to mature autonomy technology for the vehicles and operations centers of Project Constellation.
ANSYS duplicate finite-element checker routine
NASA Technical Reports Server (NTRS)
Ortega, R.
1995-01-01
An ANSYS finite-element code routine to check for duplicated elements within the volume of a three-dimensional (3D) finite-element mesh was developed. The routine developed is used for checking floating elements within a mesh, identically duplicated elements, and intersecting elements with a common face. A space shuttle main engine alternate turbopump development high pressure oxidizer turbopump finite-element model check using the developed subroutine is discussed. Finally, recommendations are provided for duplicate element checking of 3D finite-element models.
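The core of such a duplicate-element check is simple enough to sketch outside ANSYS: two elements are identical duplicates when they reference the same set of nodes, regardless of node ordering. The mesh data below is a hypothetical illustration.

```python
# Group elements by their (order-insensitive) node sets; any group with
# more than one element is a set of duplicates.
from collections import defaultdict

def find_duplicates(elements):
    """elements: dict elem_id -> tuple of node ids (any order)."""
    by_nodes = defaultdict(list)
    for eid, nodes in elements.items():
        by_nodes[frozenset(nodes)].append(eid)
    return [ids for ids in by_nodes.values() if len(ids) > 1]

mesh = {1: (4, 7, 9, 12), 2: (7, 9, 12, 4), 3: (9, 12, 15, 18)}
print(find_duplicates(mesh))    # [[1, 2]]: same nodes, different order
```

Intersecting elements with a common face can be found the same way by hashing each element's faces instead of its full node set.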
Evolution of Control Programs for a Swarm of Autonomous Unmanned Aerial Vehicles
2004-03-01
A High-Level Language for Modeling Algorithms and Their Properties
NASA Astrophysics Data System (ADS)
Akhtar, Sabina; Merz, Stephan; Quinson, Martin
Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically oriented formalisms such as state transition systems. This conceptual gap contributes to hindering the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code, but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the TLC model checker.
A UMLS-based spell checker for natural language processing in vaccine safety.
Tolentino, Herman D; Matters, Michael D; Walop, Wikke; Law, Barbara; Tong, Wesley; Liu, Fang; Fontelo, Paul; Kohl, Katrin; Payne, Daniel C
2007-02-12
The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on the concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. We developed spell checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74-75), 100% (95% CI: 100-100), and 47% (95% CI: 46-48), respectively. We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary source of dictionary terms and the WordNet lexicon as a secondary source. We used the UMLS as a domain-specific source of dictionary terms against which to compare potentially misspelled words in the corpus. The prototype's sensitivity was comparable to currently available tools, but its specificity was much superior. The slow processing speed may be improved by trimming the tool down to its most useful component algorithms. Other investigators may find the methods we developed useful for cleaning text using lexicons specific to their area of interest. PMID:17295907
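To make the four-step correction pipeline concrete, here is a minimal Python sketch; it is illustrative only, with a tiny in-memory dictionary standing in for the UMLS Specialist Lexicon and difflib's string similarity standing in for the paper's candidate-generation algorithms.

    # Minimal sketch of the four-step pipeline: detection, candidate
    # generation, disambiguation, correction. DICTIONARY is a stand-in
    # for the UMLS Specialist Lexicon.
    import difflib

    DICTIONARY = {"fever", "rash", "injection", "site", "swelling", "vaccine"}

    def detect_errors(tokens):
        # Step 1: flag tokens absent from the dictionary.
        return [t for t in tokens if t.lower() not in DICTIONARY]

    def generate_candidates(word, n=3):
        # Step 2: propose dictionary words with similar spelling.
        return difflib.get_close_matches(word.lower(), DICTIONARY, n=n, cutoff=0.7)

    def disambiguate(word, candidates):
        # Step 3: pick the closest candidate (a real system would use context).
        return candidates[0] if candidates else word

    def correct(text):
        # Step 4: rewrite the text with the chosen corrections.
        tokens = text.split()
        fixes = {w: disambiguate(w, generate_candidates(w)) for w in detect_errors(tokens)}
        return " ".join(fixes.get(t, t) for t in tokens)

    print(correct("swellin and fevr at injection site"))
    # -> "swelling and fever at injection site"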
Model Checking - My 27-Year Quest to Overcome the State Explosion Problem
NASA Technical Reports Server (NTRS)
Clarke, Ed
2009-01-01
Model Checking is an automatic verification technique for state-transition systems that are finite-state or that have finite-state abstractions. In the early 1980s, in a series of joint papers with my graduate students E.A. Emerson and A.P. Sistla, we proposed that Model Checking could be used for verifying concurrent systems and gave algorithms for this purpose. At roughly the same time, Joseph Sifakis and his student J.P. Queille at the University of Grenoble independently developed a similar technique. Model Checking has been used successfully to reason about computer hardware and communication protocols and is beginning to be used for verifying computer software. Specifications are written in temporal logic, which is particularly valuable for expressing concurrency properties. An intelligent, exhaustive search is used to determine if the specification is true or not. If the specification is not true, the Model Checker will produce a counterexample execution trace that shows why the specification does not hold. This feature is extremely useful for finding obscure errors in complex systems. The main disadvantage of Model Checking is the state-explosion problem, which can occur if the system under verification has many processes or complex data structures. Although the state-explosion problem is inevitable in the worst case, over the past 27 years considerable progress has been made on the problem for certain classes of state-transition systems that occur often in practice. In this talk, I will describe what Model Checking is, how it works, and the main techniques that have been developed for combating the state-explosion problem.
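The exhaustive-search-with-counterexample loop described in the talk can be sketched in a few lines of Python; the two-counter system and invariant below are invented for illustration and sidestep every state-explosion issue the talk is actually about.

    # Toy explicit-state model checker: breadth-first search over a finite
    # state-transition system, returning a counterexample trace on failure.
    from collections import deque

    def check_invariant(initial, successors, invariant):
        parent = {initial: None}
        queue = deque([initial])
        while queue:
            state = queue.popleft()
            if not invariant(state):
                trace = []                    # reconstruct the path to the bug
                while state is not None:
                    trace.append(state)
                    state = parent[state]
                return list(reversed(trace))
            for nxt in successors(state):
                if nxt not in parent:         # visited-state check
                    parent[nxt] = state
                    queue.append(nxt)
        return None                           # invariant holds in all reachable states

    # Example: two counters that must never both reach 2.
    succ = lambda s: [(min(s[0] + 1, 2), s[1]), (s[0], min(s[1] + 1, 2))]
    print(check_invariant((0, 0), succ, lambda s: s != (2, 2)))
    # -> [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]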
Locator-Checker-Scaler Object Tracking Using Spatially Ordered and Weighted Patch Descriptor.
Kim, Han-Ul; Kim, Chang-Su
2017-08-01
In this paper, we propose a simple yet effective object descriptor and a novel tracking algorithm to track a target object accurately. For the object description, we divide the bounding box of a target object into multiple patches and describe them with color and gradient histograms. Then, we determine the foreground weight of each patch to alleviate the impacts of background information in the bounding box. To this end, we perform random walk with restart (RWR) simulation. We then concatenate the weighted patch descriptors to yield the spatially ordered and weighted patch (SOWP) descriptor. For the object tracking, we incorporate the proposed SOWP descriptor into a novel tracking algorithm, which has three components: locator, checker, and scaler (LCS). The locator and the scaler estimate the center location and the size of a target, respectively. The checker determines whether it is safe to adjust the target scale in a current frame. These three components cooperate with one another to achieve robust tracking. Experimental results demonstrate that the proposed LCS tracker achieves excellent performance on recent benchmarks.
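The random walk with restart used to weight patches can be sketched as a simple power iteration; the affinity matrix, restart vector and restart probability below are invented for illustration and are not the paper's actual features.

    # RWR sketch: iterate r = (1 - c) * W r + c * e to a fixed point, where W
    # is a column-normalized patch-affinity matrix and e restarts the walk at
    # patches assumed to be foreground.
    import numpy as np

    def rwr_weights(affinity, restart, c=0.15, tol=1e-9):
        W = affinity / affinity.sum(axis=0, keepdims=True)  # column-stochastic
        e = restart / restart.sum()
        r = e.copy()
        while True:
            r_next = (1.0 - c) * W @ r + c * e
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next

    # Four patches; the first two receive restart mass, so the weakly
    # connected background patches end up with low foreground weight.
    affinity = np.array([[1.0, 0.8, 0.1, 0.1],
                         [0.8, 1.0, 0.1, 0.1],
                         [0.1, 0.1, 1.0, 0.9],
                         [0.1, 0.1, 0.9, 1.0]])
    print(rwr_weights(affinity, np.array([1.0, 1.0, 0.0, 0.0])))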
Model Checking the Remote Agent Planner
NASA Technical Reports Server (NTRS)
Khatib, Lina; Muscettola, Nicola; Havelund, Klaus; Norvig, Peter (Technical Monitor)
2001-01-01
This work tackles the problem of using Model Checking for the purpose of verifying the HSTS (Scheduling Testbed System) planning system. HSTS is the planner and scheduler of the remote agent autonomous control system deployed in Deep Space One (DS1). Model Checking allows for the verification of domain models as well as planning entries. We have chosen the real-time model checker UPPAAL for this work. We start by motivating our work in the introduction. Then we give a brief description of HSTS and UPPAAL. After that, we give a sketch for the mapping of HSTS models into UPPAAL and we present samples of plan model properties one may want to verify.
Detecting Payload Attacks on Programmable Logic Controllers (PLCs)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Huan
Programmable logic controllers (PLCs) play critical roles in industrial control systems (ICS). Providing hardware peripherals and firmware support for control programs (i.e., a PLC's "payload") written in languages such as ladder logic, PLCs directly receive sensor readings and control ICS physical processes. An attacker with access to PLC development software (e.g., by compromising an engineering workstation) can modify the payload program and cause severe physical damage to the ICS. To protect critical ICS infrastructure, we propose to model runtime behaviors of legitimate PLC payload programs and use runtime behavior monitoring in PLC firmware to detect payload attacks. By monitoring the I/O access patterns, network access patterns, as well as payload program timing characteristics, our proposed firmware-level detection mechanism can detect abnormal runtime behaviors of malicious PLC payloads. Using our proof-of-concept implementation, we evaluate the memory and execution time overhead of implementing our proposed method and find that it is feasible to incorporate our method into existing PLC firmware. In addition, our evaluation results show that a wide variety of payload attacks can be effectively detected by our proposed approach. The proposed firmware-level payload attack detection scheme complements existing bump-in-the-wire solutions (e.g., external temporal-logic-based model checkers) in that it can detect payload attacks that violate real-time requirements of ICS operations and does not require any additional apparatus.
Sandia Advanced MEMS Design Tools v. 3.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarberry, Victor R.; Allen, James J.; Lantz, Jeffrey W.
This is a major revision to the Sandia Advanced MEMS Design Tools. It replaces all previous versions. New features in this version: revised to support AutoCAD 2014 and 2015. This CD contains an integrated set of electronic files that: a) describe the SUMMiT V fabrication process, b) provide enabling educational information (including pictures, videos, technical information), c) facilitate the process of designing MEMS with the SUMMiT process (prototype file, Design Rule Checker, Standard Parts Library), d) facilitate the process of having MEMS fabricated at Sandia National Laboratories, and e) facilitate the process of having post-fabrication services performed. While there exist some files on the CD that are used in conjunction with the software package AutoCAD, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.
Authoring and verification of clinical guidelines: a model driven approach.
Pérez, Beatriz; Porres, Ivan
2010-08-01
The goal of this research is to provide a framework to enable the authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, (2) combined with Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications together with the temporal-logic statements to be checked against them, making the verification process faster and more cost-effective. In particular, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically process them and generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements, and verifies whether the guideline fulfils such properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, starting from the UML statechart representing a guideline, allows the verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines, resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined based on a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process. Copyright 2010 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Smith, Edwyn D.
1991-01-01
Two silicon CMOS application-specific integrated circuits (ASICs), a data generation chip and a data checker chip, were designed. The conversion of the data generator circuitry into a pair of CMOS ASIC chips using the 1.2 micron standard cell library is documented. The logic design of the data checker is discussed, and the functions of the control circuitry are described. An accurate estimate of timing relationships is essential to make sure that the logic design performs correctly under practical conditions. Timing and delay information are examined.
Handheld detector using NIR for bottled liquid explosives
NASA Astrophysics Data System (ADS)
Itozaki, Hideo; Sato-Akaba, Hideo
2014-10-01
A handheld bottle checker for the detection of liquid explosives has been developed using near-infrared technology. In order to make it compact, an LED was used as the light source and a novel circuit board was developed for device control instead of using a PC. This enables low power consumption, and the handheld detector can be powered by a Li-ion battery without an AC power supply. The checker analyzes liquids well, even with the limited NIR bandwidth of the LED. It is expected that it can be applied not only to airport security but also to wider applications because of its compactness and portability.
Friman, Patrick C
2010-01-01
At last, the field of applied behavior analysis has a beautifully crafted, true textbook that can proudly stand cover to cover and spine to spine beside any of the expensive, imposing, and ornately designed textbooks used by college instructors who teach courses in conventional areas of education or psychology. In this review, I fully laud this development, credit Cooper, Heron, and Heward for making it happen, argue that it signifies a checkered flag for students and professors, and recommend the book for classes in applied behavior analysis everywhere. Subsequently, I review its chapters, each of which could easily stand alone as publications in their own right. Finally, I supply a cautionary note, a yellow flag to accompany the well-earned checkered flag, by pointing out that, as is true with all general textbooks on applied behavior analysis, a major portion of the references involves research on persons who occupy only a tail of the normal distribution. To attain the mainstream role Skinner envisioned and most (if not all) behavior analysts desire, the field will have to increase its focus on persons who reside under the dome of that distribution.
A rule-based approach to model checking of UML state machines
NASA Astrophysics Data System (ADS)
Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz
2016-12-01
In the paper, a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.
UTP and Temporal Logic Model Checking
NASA Astrophysics Data System (ADS)
Anderson, Hugh; Ciobanu, Gabriel; Freitas, Leo
In this paper we give an additional perspective on the formal verification of programs through temporal logic model checking, using Hoare and He's Unifying Theories of Programming (UTP). Our perspective emphasizes the use of UTP designs, an alphabetised relational calculus expressed as a pre/post condition pair of relations, to verify state or temporal assertions about programs. The temporal model checking relation is derived from a satisfaction relation between the model and its properties. The contribution of this paper is that it shows a UTP perspective on temporal logic model checking. The approach includes the notion of efficiency found in traditional model checkers, which reduce the state-explosion problem through the use of efficient data structures.
The PlusCal Algorithm Language
NASA Astrophysics Data System (ADS)
Lamport, Leslie
Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.
Facial skin color measurement based on camera colorimetric characterization
NASA Astrophysics Data System (ADS)
Yang, Boquan; Zhou, Changhe; Wang, Shaoqing; Fan, Xin; Li, Chao
2016-10-01
The objective measurement of facial skin color and its variation is of great significance, as much information can be obtained from it. In this paper, we developed a new skin color measurement procedure that includes the following parts: first, a new skin tone color checker based on the Pantone Skin Tone Color Checker was designed for camera colorimetric characterization; second, the chromaticity of the light source was estimated via a new scene illumination estimation method that builds on several previous algorithms; third, chromatic adaptation was used to convert the input facial image into an output facial image that appears to have been taken under canonical light; finally, the validity and accuracy of our method were verified by comparing the results obtained by our procedure with those obtained by a spectrophotometer.
Model Checking Satellite Operational Procedures
NASA Astrophysics Data System (ADS)
Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri
2011-08-01
We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a system as complex as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach, we present experimental results on a simple but meaningful scenario. Our results show that we can save up to 90% of verification time.
Precise and Efficient Static Array Bound Checking for Large Embedded C Programs
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we describe the design and implementation of a static array-bound checker for a family of embedded programs: the flight control software of recent Mars missions. These codes are large (up to 250 KLOC), pointer intensive, heavily multithreaded and written in an object-oriented style, which makes their analysis very challenging. We designed a tool called C Global Surveyor (CGS) that can analyze the largest code in a couple of hours with a precision of 80%. The scalability and precision of the analyzer are achieved by using an incremental framework in which a pointer analysis and a numerical analysis of array indices mutually refine each other. CGS has been designed so that it can distribute the analysis over several processors in a cluster of machines. To the best of our knowledge this is the first distributed implementation of static analysis algorithms. Throughout the paper we will discuss the scalability setbacks that we encountered during the construction of the tool and their impact on the initial design decisions.
A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover
NASA Technical Reports Server (NTRS)
Richards, Dominic; Lester, David
2010-01-01
Bluespec SystemVerilog (BSV) is a Hardware Description Language based on the guarded action model of concurrency. It has an elegant semantics, which makes it well suited for formal reasoning. To date, a number of BSV designs have been verified with hand proofs, but little work has been conducted on the application of automated reasoning. We present a prototype shallow embedding of BSV in the PVS theorem prover. Our embedding is compatible with the PVS model checker, which can automatically prove an important class of theorems, and can also be used in conjunction with the powerful proof strategies of PVS to verify a broader class of properties than can be achieved with model checking alone.
Model Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of representing in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.
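As a rough intuition for the encoding (a sketch only, much simpler than the library described above): an edge-valued diagram stores an arithmetic function by attaching additive offsets to edges, so the value of f is the sum of edge values along the path selected by a variable assignment.

    # Evaluating a toy edge-valued decision diagram: each node tests one
    # variable, each outgoing edge carries an additive value, and None is the
    # shared terminal. Sharing of the x1 node is what a reduced DD exploits.

    class Node:
        def __init__(self, var, children):
            self.var = var            # index of the variable tested at this node
            self.children = children  # list of (edge_value, child-or-None)

    def evaluate(node, assignment):
        total = 0
        while node is not None:
            value, child = node.children[assignment[node.var]]
            total += value
            node = child
        return total

    # f(x0, x1) = 2*x0 + 3*x1 over {0, 1} variables, with a shared x1 node.
    x1 = Node(1, [(0, None), (3, None)])
    root = Node(0, [(0, x1), (2, x1)])
    print(evaluate(root, [1, 1]))  # -> 5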
NASA Technical Reports Server (NTRS)
Bolton, Matthew L.; Bass, Ellen J.
2009-01-01
Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.
Towards a Certified Lightweight Array Bound Checker for Java Bytecode
NASA Technical Reports Server (NTRS)
Pichardie, David
2009-01-01
Dynamic array bound checks are crucial elements for the security of a Java Virtual Machine. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that 1) penalize bytecode loading or dynamic compilation, and 2) complicate the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustworthy. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables, and instantiate it with the relational abstract domain of polyhedra. The analysis automatically infers loop invariants and method pre-/post-conditions, and its results can be checked efficiently by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique which reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusion. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach to the full sequential JVM.
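The division of labor between an untrusted analysis and a small certified checker can be illustrated with a deliberately simplified sketch, using an interval domain in place of the paper's polyhedra; the function names are hypothetical.

    # The untrusted analyzer claims an interval for a loop index; the small
    # trusted checker only has to validate the claim against the array length.

    def analyze_loop(start, stop):
        # Untrusted analysis: claim the index stays in [start, stop - 1].
        return (start, stop - 1)

    def check_certificate(interval, array_length):
        # Trusted checker: the bounds check is redundant iff 0 <= lo and hi < len.
        lo, hi = interval
        return 0 <= lo and hi < array_length

    cert = analyze_loop(0, 10)            # e.g. for i in range(0, 10): a[i]
    print(check_certificate(cert, 10))    # True  -> dynamic check eliminable
    print(check_certificate(cert, 8))     # False -> keep the dynamic check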
Kate's Model Verification Tools
NASA Technical Reports Server (NTRS)
Morgan, Steve
1991-01-01
Kennedy Space Center's Knowledge-based Autonomous Test Engineer (KATE) is capable of monitoring electromechanical systems, diagnosing their errors, and even repairing them when they crash. A survey of KATE's developer/modelers revealed that they were already using a sophisticated set of productivity enhancing tools. They did request five more, however, and those make up the body of the information presented here: (1) a transfer function code fitter; (2) a FORTRAN-Lisp translator; (3) three existing structural consistency checkers to aid in syntax checking their modeled device frames; (4) an automated procedure for calibrating knowledge base admittances to protect KATE's hardware mockups from inadvertent hand valve twiddling; and (5) three alternatives for the 'pseudo object', a programming patch that currently apprises KATE's modeling devices of their operational environments.
Parallel State Space Construction for a Model Checking Based on Maximality Semantics
NASA Astrophysics Data System (ADS)
El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine
2009-03-01
The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is alpha-equivalence reduction. A distributed-memory execution environment offers yet another choice. The main contribution of this paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTSs as the semantic model, can easily be adapted to a distributed implementation of alpha-equivalence reduction for maximality-based labeled transition systems.
Throat Problems (Symptom Checker)
Efficient model checking of network authentication protocol based on SPIN
NASA Astrophysics Data System (ADS)
Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan
2013-03-01
Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for the protocols. Combined with the model checker SPIN, the method can conveniently verify the properties of a protocol. With some model-simplification strategies, we can model several protocols efficiently and reduce the state space of the model. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the method described in the paper, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and is applicable to other authentication protocols.
Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder
NASA Technical Reports Server (NTRS)
Staats, Matt
2009-01-01
We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
Implementing a Rule-Based Contract Compliance Checker
NASA Astrophysics Data System (ADS)
Strano, Massimo; Molina-Jimenez, Carlos; Shrivastava, Santosh
The paper describes the design and implementation of an independent, third-party contract monitoring service called the Contract Compliance Checker (CCC). The CCC is provided with the specification of the contract in force, and is capable of observing and logging the relevant business-to-business (B2B) interaction events, in order to determine whether the actions of the business partners are consistent with the contract. A contract specification language called EROP (for Events, Rights, Obligations and Prohibitions) has been developed for the CCC based on business rules; it provides constructs to specify which rights, obligations and prohibitions become active and inactive after the occurrence of events related to the execution of business operations. The system has been designed to work with B2B industry standards such as ebXML and RosettaNet.
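A hedged sketch of the EROP idea in Python: occurrences of business events activate and deactivate rights, obligations and prohibitions, and compliance is judged against the currently active sets. The event names and rules below are invented; the real CCC uses a dedicated rule language.

    class ContractState:
        def __init__(self):
            self.active = {"rights": set(), "obligations": set(), "prohibitions": set()}

    # Each event maps to (set, add-or-remove, item) effects; illustrative only.
    RULES = {
        "PurchaseOrderReceived": [("obligations", "+", "buyer:pay"),
                                  ("rights", "+", "seller:ship")],
        "PaymentReceived":       [("obligations", "-", "buyer:pay")],
    }

    def on_event(state, event):
        for kind, op, item in RULES.get(event, []):
            (state.active[kind].add if op == "+" else state.active[kind].discard)(item)

    def is_prohibited(state, action):
        return action in state.active["prohibitions"]

    s = ContractState()
    on_event(s, "PurchaseOrderReceived")
    print(s.active["obligations"])   # {'buyer:pay'}
    on_event(s, "PaymentReceived")
    print(s.active["obligations"])   # set()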
Lower Back Pain Symptom Checker Flowchart
NASA Astrophysics Data System (ADS)
Druken, K. A.; Trenham, C. E.; Wang, J.; Bastrakova, I.; Evans, B. J. K.; Wyborn, L. A.; Ip, A. I.; Poudjom Djomani, Y.
2016-12-01
The National Computational Infrastructure (NCI) hosts one of Australia's largest repositories (10+ PBytes) of research data, colocated with a petascale High Performance Computer and a highly integrated research cloud. Key to maximizing the benefit of NCI's collections and computational capabilities is ensuring seamless interoperable access to these datasets. This presents considerable data management challenges across the diverse range of geoscience data, spanning disciplines where netCDF-CF is commonly utilized (e.g., climate, weather, remote sensing) through to the geophysics and seismology fields that employ more traditional domain- and study-specific data formats. These data are stored in a variety of gridded, irregularly spaced (i.e., trajectories, point clouds, profiles), and raster image structures. They often have diverse coordinate projections and resolutions, thus complicating the task of comparison and inter-discipline analysis. Nevertheless, much can be learned from the netCDF-CF model that has long served the climate community, providing a common data structure for the atmospheric, ocean and cryospheric sciences. We are extending the application of the existing Climate and Forecast (CF) metadata conventions to NCI's broader geoscience data collections. We present simple implementations that can significantly improve the interoperability of the research collections, particularly in the case of line survey data. NCI has developed a compliance checker to assist with data quality across all hosted netCDF-CF collections. The tool is an extension of one of the main existing CF Convention checkers, which we have modified to incorporate the Attribute Convention for Data Discovery (ACDD) and ISO 19115 standards, and to perform parallelised checks over collections of files, ensuring compliance and consistency across the NCI data collections as a whole. It is complemented by a checker that also verifies functionality against a range of scientific analysis, programming, and data visualisation tools. By design, these tests are not necessarily domain-specific, and demonstrate that verified data is accessible to end-users, thus allowing for seamless interoperability with other datasets across a wide range of fields.
Neck Swelling (Symptom Checker)
Eye Problems: Symptom Checker Flowchart
Shoulder Problems: Symptom Checker Flowchart
What Can Union Do with Its Towering, 16-Sided Victorian Masterpiece?
ERIC Educational Resources Information Center
Biemiller, Lawrence
1987-01-01
Union College's Victorian-style Nott Memorial has mysterious and intriguing architectural features, a checkered history, and serious problems of neglect and underutilization. The college must resolve its future soon. (MSE)
Cull, Felicia; O'Connor, Constance M; Suski, Cory D; Shultz, Aaron D; Danylchuk, Andy J; Cooke, Steven J
2015-04-01
Individual variation in the endocrine stress response has been linked to survival and performance in a variety of species. Here, we evaluate the relationship between the endocrine stress response and anti-predator behaviors in wild checkered puffers (Sphoeroides testudineus) captured at Eleuthera Island, Bahamas. The checkered puffer has a unique and easily measurable predator avoidance strategy, which is to inflate or 'puff' to deter potential predators. In this study, we measured baseline and stress-induced circulating glucocorticoid levels, as well as bite force, a performance measure that is relevant to both feeding and predator defence, and 'puff' performance. We found that puff performance and bite force were consistent within individuals, but generally decreased following a standardized stressor. Larger puffers were able to generate a higher bite force, and larger puffers were able to maintain a more robust puff performance following a standardized stressor relative to smaller puffers. In terms of the relationship between the glucocorticoid stress response and performance metrics, we found no relationship between post-stress glucocorticoid levels and either puff performance or bite force. However, we did find that baseline glucocorticoid levels predicted the ability of a puffer to maintain a robust puff response following a repeated stressor, and this relationship was more pronounced in larger individuals. Our work provides a novel example of how baseline glucocorticoids can predict a fitness-related anti-predator behavior. Copyright © 2015 Elsevier Inc. All rights reserved.
Slicing AADL Specifications for Model Checking
NASA Technical Reports Server (NTRS)
Odenbrett, Maximilian; Nguyen, Viet Yen; Noll, Thomas
2010-01-01
To combat the state-space explosion problem in model checking larger systems, abstraction techniques can be employed. Here, methods that operate on the system specification before constructing its state space are preferable to those that try to minimize the resulting transition system as they generally reduce peak memory requirements. We sketch a slicing algorithm for system specifications written in (a variant of) the Architecture Analysis and Design Language (AADL). Given a specification and a property to be verified, it automatically removes those parts of the specification that are irrelevant for model checking the property, thus reducing the size of the corresponding transition system. The applicability and effectiveness of our approach is demonstrated by analyzing the state-space reduction for an example, employing a translator from AADL to Promela, the input language of the SPIN model checker.
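The core of the reduction can be pictured as a backward-reachability computation over a dependency graph: keep everything the property's components transitively depend on, drop the rest. The toy graph below is invented; real AADL slicing naturally handles far richer constructs.

    # Slice a specification: starting from the components named by the
    # property, keep all transitive dependencies and discard the rest.

    def slice_spec(depends_on, property_vars):
        keep, stack = set(), list(property_vars)
        while stack:
            v = stack.pop()
            if v not in keep:
                keep.add(v)
                stack.extend(depends_on.get(v, []))
        return keep

    deps = {"alarm": ["sensor"], "sensor": [], "logger": ["alarm"], "display": []}
    print(slice_spec(deps, ["alarm"]))  # {'alarm', 'sensor'}; logger and display are sliced away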
Test Input Generation for Red-Black Trees using Abstraction
NASA Technical Reports Server (NTRS)
Visser, Willem; Pasareanu, Corina S.; Pelanek, Radek
2005-01-01
We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and we evaluate them using a Java implementation of red-black trees.
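Lossy exploration with abstraction-based state matching can be sketched as follows; a toy set stands in for the red-black tree, and the abstraction function is invented for illustration.

    # Enumerate method-call sequences breadth-first, pruning any sequence
    # whose abstract state has already been seen.
    from collections import deque

    def generate_tests(methods, initial, abstraction, max_len=3):
        seen, tests = {abstraction(initial)}, []
        queue = deque([(initial, [])])
        while queue:
            state, calls = queue.popleft()
            if len(calls) == max_len:
                continue
            for name, fn in methods:
                nxt = fn(state)
                key = abstraction(nxt)
                if key not in seen:          # state matching avoids redundant tests
                    seen.add(key)
                    tests.append(calls + [name])
                    queue.append((nxt, calls + [name]))
        return tests

    methods = [("insert1", lambda s: s | {1}), ("insert2", lambda s: s | {2}),
               ("delete1", lambda s: s - {1})]
    print(generate_tests(methods, frozenset(), lambda s: tuple(sorted(s))))
    # -> [['insert1'], ['insert2'], ['insert1', 'insert2']]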
The ANMLite Language and Logic for Specifying Planning Problems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Siminiceanu, Radu I.; Munoz, Cesar A.
2007-01-01
We present the basic concepts of the ANMLite planning language. We discuss various aspects of specifying a plan in terms of constraints and checking the existence of a solution with the help of a model checker. The constructs of the ANMLite language have been kept as simple as possible in order to reduce complexity and simplify the verification problem. We illustrate the language with a specification of the space shuttle crew activity model that was constructed under the Spacecraft Autonomy for Vehicles and Habitats (SAVH) project. The main purpose of this study was to explore the implications of choosing a robust logic behind the specification of constraints, rather than simply proposing a new planning language.
Expert system validation in prolog
NASA Technical Reports Server (NTRS)
Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline
1988-01-01
An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, dead ends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary to the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for the various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations which match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
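One of the structure checks, cycle detection over the rule connection graph, can be sketched as a depth-first search; note that EVA itself uses a modified A* to enumerate all unique cycles, while this simpler illustration reports just one. The rule graph is invented.

    # Depth-first search for a cycle among rules whose conclusions feed
    # other rules' premises; returns one cycle or None.

    def find_cycle(graph):
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {v: WHITE for v in graph}

        def dfs(v, path):
            color[v] = GRAY
            for w in graph.get(v, []):
                if color[w] == GRAY:              # back edge closes a cycle
                    return path[path.index(w):] + [w]
                if color[w] == WHITE:
                    found = dfs(w, path + [w])
                    if found:
                        return found
            color[v] = BLACK
            return None

        for v in graph:
            if color[v] == WHITE:
                cycle = dfs(v, [v])
                if cycle:
                    return cycle
        return None

    rules = {"r1": ["r2"], "r2": ["r3"], "r3": ["r1"], "r4": []}
    print(find_cycle(rules))  # ['r1', 'r2', 'r3', 'r1']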
Our Commitment to Reliable Health and Medical Information
HONcode Toolbar: a search engine and checker of certification status. The toolbar automatically checks the HONcode status of health web sites while browsing; the associated search engine indexes only HONcode-certified sites.
Grinding patterns in migraine patients with sleep bruxism: a case-controlled study.
Kato, Momoko; Saruta, Juri; Takeuchi, Mifumi; Sugimoto, Masahiro; Kamata, Yohei; Shimizu, Tomoko; To, Masahiro; Fuchida, Shinya; Igarashi, Hisaka; Kawata, Toshitsugu; Tsukinoki, Keiichi
2016-11-01
Details on grinding patterns and types of contact during sleep bruxism in association with migraine headache have not yet been elucidated. This study compared the characteristics of sleep bruxism between patients with migraine and controls. The study included 80 female patients who had been diagnosed with migraine and 52 women with no history of migraine. Grinding patterns were measured using the BruxChecker® (Scheu Dental, Iserlohn, Germany). There was a significant difference between the two groups in the distribution of grinding patterns at the laterotrusive side (p < 0.001). When the anterior teeth and premolar and molar regions in the two groups were compared, the proportion of the grinding area at all sites was significantly larger in the migraine group than in the control group (p < 0.001). The BruxChecker® showed that there was substantial grinding over a large area among migraine patients, particularly in the molar region.
Method and apparatus for checking fire detectors
NASA Technical Reports Server (NTRS)
Clawson, G. T. (Inventor)
1974-01-01
A fire detector checking method and device are disclosed for nondestructively verifying the operation of installed fire detectors of the type which operate on the principle of detecting the rate of temperature rise of the ambient air to sound an alarm and/or which sound an alarm when the temperature of the ambient air reaches a preset level. The fire alarm checker uses the principle of effecting a controlled, simulated alarm condition to ascertain whether or not the detector will respond. The checker comprises a hand-held instrument employing a controlled heat source, e.g., an electric lamp having a variable input, for heating at a controlled rate an enclosed mass of air in a first compartment, which air mass is then disposed about the fire detector to be checked. A second compartment of the device houses an electronic circuit to sense and adjust the temperature level and heating rate of the heat source.
Work-related symptoms and checkstand configuration: an experimental study.
Harber, P; Bloswick, D; Luo, J; Beck, J; Greer, D; Peña, L F
1993-07-01
Supermarket checkers are known to be at risk of upper-extremity cumulative trauma disorders. Forty-two experienced checkers checked a standard "market basket" of items on an experimental checkstand. The counter height could be adjusted (high = 35.5, low = 31.5 inches), and the pre-scan queuing area length (between conveyor belt and laser scanner) could be set to "near" or "far" lengths. Each subject scanned under the high-near, high-far, low-near, and low-far conditions in random order. Seven ordinal symptom scales were used to describe comfort. Analysis showed that both counter height and queuing length had significant effects on symptoms. Furthermore, the height of the subject affected the degree and direction of the impact of the checkstand configuration differences. The study suggests that optimization of design may be experimentally evaluated, that modification of postural as well as frequency loading may be beneficial, and that adjustability for the individual may be advisable.
Conversion of LARSYS III.1 to an IBM 370 computer
NASA Technical Reports Server (NTRS)
Williams, G. N.; Leggett, J.; Hascall, G. A.
1975-01-01
A software system for processing multispectral aircraft or satellite data (LARSYS) was designed and written at the Laboratory for Applications of Remote Sensing at Purdue University. This system, implemented on an IBM 360/67 computer utilizing the Cambridge Monitor System, is of an interactive nature. TAMU LARSYS maintains the essential capabilities of Purdue's LARSYS. The machine configuration for which it has been converted is an IBM-compatible Amdahl 470V/6 computer utilizing the time sharing option (TSO) of the currently implemented OS/VS2 Operating System. Due to TSO limitations, the NASA-JSC deliverable TAMU LARSYS comprises two parts. Part one is a TSO Control Card Checker for LARSYS control cards, and part two is a batch version of LARSYS. Used together, they afford most of the capabilities of the original LARSYS III.1. Additionally, two programs have been written by TAMU to support LARSYS processing. The first is an ERTS-to-MIST conversion program used to convert ERTS data to the LARSYS input form, the MIST tape. The second is a system runtable code which maintains tape/file location information for the MIST data sets.
Design of a fault tolerant airborne digital computer. Volume 1: Architecture
NASA Technical Reports Server (NTRS)
Wensley, J. H.; Levitt, K. N.; Green, M. W.; Goldberg, J.; Neumann, P. G.
1973-01-01
This volume is concerned with the architecture of a fault-tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) the capacity is to be matched to the aircraft environment; (2) the reliability is to be selectively matched to the criticality and deadline requirements of each of the computations; (3) the system is to be readily expandable and contractible; and (4) the design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to the above qualities. In addition, SIFT is particularly simple and believable. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in the use of redundancy, but otherwise are not as attractive.
NASA Technical Reports Server (NTRS)
Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex
2008-01-01
Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial "needle in a haystack" type. Such problems can often be parallelized easily. Alas, none of the usual divide-and-conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks, it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.
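A minimal sketch of the swarm idea, under invented assumptions: many small randomized searches run in parallel with different seeds, and the first counterexample any of them finds wins. The toy state space and "bug" state are purely illustrative.

    import random
    from multiprocessing import Pool

    def random_search(seed, depth=100):
        # One member of the swarm: a short random walk with its own seed.
        rng = random.Random(seed)
        state, trail = (0, 0), [(0, 0)]
        for _ in range(depth):
            dx, dy = rng.choice([(1, 0), (0, 1), (-1, 0), (0, -1)])
            state = (state[0] + dx, state[1] + dy)
            trail.append(state)
            if state == (2, 2):        # the "bug" state we are hunting for
                return seed, trail
        return None

    if __name__ == "__main__":
        with Pool(4) as pool:
            for result in pool.imap_unordered(random_search, range(100)):
                if result:
                    seed, trail = result
                    print(f"seed {seed}: counterexample of length {len(trail)}")
                    break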
Farooqui, Riffat; Hoor, Talea; Karim, Nasim; Muneer, Mehtab
2018-01-01
To identify and evaluate the frequency, severity, mechanism and common pairs of drug-drug interactions (DDIs) in prescriptions by consultants in a medicine outpatient department. This cross-sectional descriptive study was done by the Pharmacology department of Bahria University Medical & Dental College (BUMDC) in the medicine outpatient department (OPD) of a private hospital in Karachi from December 2015 to January 2016. A total of 220 prescriptions written by consultants were collected, and the medications given with each patient's diagnosis were recorded. Drugs were analyzed for interactions by utilizing the Medscape drug interaction checker, the drugs.com interaction checker, and Stockley's Drug Interactions index. Two hundred eleven prescriptions were selected; the remainder were excluded from the study because the prescribed drugs were unavailable in the drug interaction checkers. In the 211 prescriptions, the two most common diagnoses were diabetes mellitus (28.43%) and hypertension (27.96%). A total of 978 medications were given; the mean number of medications per prescription was 4.6. A total of 369 drug-drug interactions were identified in the 211 prescriptions (175%): 4.33% were serious, 66.12% significant, and 29.53% minor. Pharmacokinetic and pharmacodynamic interactions accounted for 37.94% and 51.21% respectively, while 10.84% had an unknown mechanism. By number, the common pairs of DDIs were Omeprazole-Losartan (S), Gabapentin-Acetaminophen (M), and Losartan-Diclofenac (S). The frequency of DDIs was found to be very high in prescriptions of consultants from the medicine OPD of a private hospital in Karachi. Significant drug-drug interactions were the most numerous and were mostly caused by a pharmacodynamic mechanism. PMID:29643896
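The pairwise screening such checkers perform amounts to looking up every pair of prescribed drugs in an interaction table; the sketch below uses invented table entries for illustration and is in no way clinical guidance.

    from itertools import combinations

    # Illustrative interaction table keyed by unordered drug pairs.
    INTERACTIONS = {
        frozenset({"omeprazole", "losartan"}):      "significant",
        frozenset({"losartan", "diclofenac"}):      "significant",
        frozenset({"gabapentin", "acetaminophen"}): "minor",
    }

    def screen(prescription):
        return [(a, b, INTERACTIONS[frozenset({a, b})])
                for a, b in combinations(prescription, 2)
                if frozenset({a, b}) in INTERACTIONS]

    print(screen(["omeprazole", "losartan", "diclofenac"]))
    # -> [('omeprazole', 'losartan', 'significant'),
    #     ('losartan', 'diclofenac', 'significant')]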
Panatto, Donatella; Domnich, Alexander; Gasparini, Roberto; Bonanni, Paolo; Icardi, Giancarlo; Amicizia, Daniela; Arata, Lucia; Bragazzi, Nicola Luigi; Signori, Alessio; Landa, Paolo; Bechini, Angela; Boccalini, Sara
2016-04-02
Given the growing use and great potential of mobile apps, this project aimed to develop and implement a user-friendly app to increase laypeople's knowledge and awareness of invasive pneumococcal disease (IPD). Despite the heavy burden of IPD, the documented low awareness of IPD among both laypeople and healthcare professionals and far from optimal pneumococcal vaccination coverage, no app specifically targeting IPD has been developed so far. The app was designed to be maximally functional and conceived in accordance with user-centered design. Its content, layout and usability were discussed and formally tested during several workshops that involved the principal stakeholders, including experts in IPD and information technology and potential end-users. Following several workshops, it was decided that, in order to make the app more interactive, its core should be a personal "checker" of the risk of contracting IPD and a user-friendly risk-communication strategy. The checker was populated with risk factors identified through both Italian and international official guidelines. Formal evaluation of the app revealed its good readability and usability properties. A sister web site with the same content was created to achieve higher population exposure. Seven months after being launched in a price- and registration-free modality, the app, named "Pneumo Rischio," averaged 20.9 new users/day and 1.3 sessions/user. The first in-field results suggest that "Pneumo Rischio" is a promising tool for increasing the population's awareness of IPD and its prevention through a user-friendly risk checker.
NASA Technical Reports Server (NTRS)
Havelund, Klaus
1999-01-01
The JAVA PATHFINDER, JPF, is a translator from a subset of JAVA 1.0 to PROMELA, the input language of the SPIN model checker. The purpose of JPF is to establish a framework for verification and debugging of JAVA programs based on model checking. The main goal is to automate program verification such that a programmer can apply it in daily work without needing a specialist to manually reformulate a program into a different notation in order to analyze it. The system is especially suited for analyzing multi-threaded JAVA applications, where normal testing usually falls short. The system can find deadlocks and violations of boolean assertions stated by the programmer in a special assertion language. This document explains how to use JPF.
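The kind of property JPF checks can be made concrete with a minimal explicit-state search, here a hand-rolled Python sketch rather than JPF itself: it explores all interleavings of two threads that acquire two locks in opposite orders and reports the classic deadlock.

    from collections import deque

    # Lock acquisition order per thread; thread 1 reverses thread 0's order.
    ACQ = {0: ("A", "B"), 1: ("B", "A")}

    def successors(state):
        pcs, locks = state                      # program counters, lock -> owner
        for t in (0, 1):
            pc = pcs[t]
            if pc < 2:                          # acquire ACQ[t][pc]
                lock = ACQ[t][pc]
                if locks[lock] is not None:
                    continue                    # blocked: lock held by other thread
                new_locks = dict(locks); new_locks[lock] = t
            elif pc < 4:                        # release in the same order
                lock = ACQ[t][pc - 2]
                new_locks = dict(locks); new_locks[lock] = None
            else:
                continue                        # thread finished
            yield (pcs[:t] + (pc + 1,) + pcs[t+1:], new_locks)

    def freeze(state):
        pcs, locks = state
        return (pcs, tuple(sorted(locks.items(), key=lambda kv: kv[0])))

    # Breadth-first search over all interleavings, flagging stuck states.
    init = ((0, 0), {"A": None, "B": None})
    seen, frontier = {freeze(init)}, deque([init])
    while frontier:
        state = frontier.popleft()
        succ = list(successors(state))
        if not succ and state[0] != (4, 4):     # non-final state, no moves
            print("deadlock:", state)           # thread 0 holds A, thread 1 holds B
        for s in succ:
            if freeze(s) not in seen:
                seen.add(freeze(s)); frontier.append(s)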
NASA Astrophysics Data System (ADS)
Haran, T. M.; Brodzik, M. J.; Nordgren, B.; Estilow, T.; Scott, D. J.
2015-12-01
An increasing number of new Earth science datasets are being produced by data providers in self-describing, machine-independent file formats including Hierarchical Data Format version 5 (HDF5) and Network Common Data Form version 4 (netCDF-4). Furthermore, data providers may be producing netCDF-4 files that follow the conventions for Climate and Forecast metadata version 1.6 (CF 1.6) which, for datasets mapped to a projected raster grid covering all or a portion of the earth, include the Coordinate Reference System (CRS) used to define how latitude and longitude are mapped to grid coordinates, i.e. columns and rows, and vice versa. One problem that users may encounter is that their preferred visualization and analysis tool may not yet include support for one of these newer formats. Moreover, data distributors such as NASA's NSIDC DAAC may not yet support on-the-fly conversion of all data sets produced in a new format to a preferred older distributed format. Open source solutions to this dilemma do exist in the form of software packages that can translate files in one of the new formats to one of the preferred formats. However, these software packages require that the file to be translated conform to the specifications of its respective format. Although an online CF-Convention compliance checker is available from cfconventions.org, a recent NSIDC user services incident described here in detail involved an NSIDC-supported data set that passed the (then current) CF Checker Version 2.0.6 but was in fact lacking two variables necessary for conformance. This problem was not detected until GDAL, a software package which relied on the missing variables, was employed by a user in an attempt to translate the data into a different file format, namely GeoTIFF. The incident indicates that testing a candidate data product with one or more software products written to accept the advertised conventions is a practice that improves interoperability: differences between data file contents and software package expectations are exposed, affording an opportunity to improve conformance of software, data, or both. The incident can also serve as a demonstration that data providers, distributors, and users can work together to improve data product quality and interoperability.
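The conformance gap described above (a file that passes a generic CF checker yet lacks the CRS variables a tool like GDAL expects) can be probed with a few lines of Python using the netCDF4 package; this is a sketch, and the file name is hypothetical:

    from netCDF4 import Dataset  # pip install netCDF4

    # Check that every variable advertising a CF grid_mapping actually points
    # at an existing CRS variable, the kind of omission GDAL tripped over.
    ds = Dataset("example_grid.nc")            # hypothetical file name
    for name, var in ds.variables.items():
        if "grid_mapping" in var.ncattrs():
            crs_name = var.getncattr("grid_mapping")
            if crs_name not in ds.variables:
                print(f"{name}: grid_mapping '{crs_name}' is missing from the file")
    ds.close()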
Pinus ponderosa: A checkered past obscured four species
Ann Willyard; David S. Gernandt; Kevin Potter; Valerie Hipkins; Paula E. Marquardt; Mary Frances Mahalovich; Stephen K. Langer; Frank W. Telewski; Blake Cooper; Connor Douglas; Kristen Finch; Hassani H. Karemera; Julia Lefler; Payton Lea; Austin Wofford
2016-01-01
PREMISE OF THE STUDY: Molecular genetic evidence can help delineate taxa in species complexes that lack diagnostic morphological characters. Pinus ponderosa (Pinaceae; subsection Ponderosae ) is recognized as a problematic taxon: plastid phylogenies of exemplars were paraphyletic, and mitochondrial phylogeography suggested at...
Assume-Guarantee Abstraction Refinement Meets Hybrid Systems
NASA Technical Reports Server (NTRS)
Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas
2014-01-01
Compositional verification techniques in the assume- guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.
Algebraic model checking for Boolean gene regulatory networks.
Tran, Quoc-Nam
2011-01-01
We present a computational method in which modular and Groebner bases (GB) computation in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases using our method will never grow resulting in a significant improvement in running time and memory space consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
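For orientation, the objects being analyzed are maps F: {0,1}^n -> {0,1}^n whose cycles are the attractors. The sketch below (with hypothetical 3-gene update rules) finds attractors by exhaustive enumeration, which is feasible only for small n; the paper's point is precisely to replace such brute force with Groebner-basis computation in Boolean rings.

    from itertools import product

    # Toy 3-gene Boolean network (hypothetical rules), F: {0,1}^3 -> {0,1}^3.
    def F(s):
        x, y, z = s
        return (y & z, x | z, 1 - x)   # NOT x == 1 - x for 0/1 values

    # Follow each state until the trajectory repeats, then record the cycle.
    attractors = set()
    for state in product((0, 1), repeat=3):
        trace = []
        while state not in trace:
            trace.append(state)
            state = F(state)
        cycle = trace[trace.index(state):]       # the repeating part
        k = cycle.index(min(cycle))              # canonical rotation, so each
        attractors.add(tuple(cycle[k:] + cycle[:k]))  # attractor is stored once

    for a in sorted(attractors):
        print("attractor of length", len(a), ":", a)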
Model Checking Artificial Intelligence Based Planners: Even the Best Laid Plans Must Be Verified
NASA Technical Reports Server (NTRS)
Smith, Margaret H.; Holzmann, Gerard J.; Cucullu, Gordon C., III; Smith, Benjamin D.
2005-01-01
Automated planning systems (APS) are gaining acceptance for use on NASA missions, as evidenced by APS flown on missions such as Orbiter and Deep Space 1, both of which were commanded by onboard planning systems. The planning system takes high-level goals and expands them onboard into a detailed sequence of actions that the spacecraft executes. The system must be verified to ensure that the automatically generated plans achieve the goals as expected and do not generate actions that would harm the spacecraft or mission. These systems are typically tested using empirical methods. Formal methods, such as model checking, offer exhaustive or measurable test coverage, which leads to much greater confidence in correctness. This paper describes a formal method based on the SPIN model checker. This method guarantees that possible plans meet certain desirable properties. We express the input model in Promela, the language of SPIN, and express the properties of desirable plans formally.
Playing checkers: detection and eye hand coordination in simulated prosthetic vision
NASA Astrophysics Data System (ADS)
Dagnelie, Gislin; Walter, Matthias; Yang, Liancheng
2006-09-01
In order to assess the potential for visual inspection and eye hand coordination without tactile feedback under conditions that may be available to future retinal prosthesis wearers, we studied the ability of sighted individuals to act upon pixelized visual information at very low resolution, equivalent to 20/2400 visual acuity. Live images from a head-mounted camera were low-pass filtered and presented in a raster of 6 × 10 circular Gaussian dots. Subjects could either freely move their gaze across the raster (free-viewing condition) or the raster position was locked to the subject's gaze by means of video-based pupil tracking (gaze-locked condition). Four normally sighted and one severely visually impaired subject with moderate nystagmus participated in a series of four experiments. Subjects' task was to count 1 to 16 white fields randomly distributed across an otherwise black checkerboard (counting task) or to place a black checker on each of the white fields (placing task). We found that all subjects were capable of learning both tasks after varying amounts of practice, both in the free-viewing and in the gaze-locked conditions. Normally sighted subjects all reached very similar performance levels independent of the condition. The practiced performance level of the visually impaired subject in the free-viewing condition was indistinguishable from that of the normally sighted subjects, but required approximately twice the amount of time to place checkers in the gaze-locked condition; this difference is most likely attributable to this subject's nystagmus. Thus, if early retinal prosthesis wearers can achieve crude form vision, then on the basis of these results they too should be able to perform simple eye hand coordination tasks without tactile feedback.
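The stimulus construction described above lends itself to a compact sketch: block-average a camera frame down to a 6 x 10 grid, then render each cell as a circular Gaussian dot scaled by the cell mean. All parameters below are illustrative, not the study's actual filter settings.

    import numpy as np

    def gaussian_dot_raster(frame, rows=6, cols=10, cell=32, sigma=0.3):
        """Render a grayscale frame as a rows x cols raster of Gaussian dots."""
        h, w = frame.shape
        # Block-average the frame down to the raster resolution (crude low-pass).
        means = frame[: h - h % rows, : w - w % cols] \
            .reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
        # One circular Gaussian profile, reused for every dot.
        y, x = np.mgrid[-1:1:cell * 1j, -1:1:cell * 1j]
        dot = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        out = np.zeros((rows * cell, cols * cell))
        for r in range(rows):
            for c in range(cols):
                out[r*cell:(r+1)*cell, c*cell:(c+1)*cell] = means[r, c] * dot
        return out

    frame = np.random.rand(240, 320)   # stand-in for a head-mounted camera image
    raster = gaussian_dot_raster(frame)
    print(raster.shape)                # (192, 320)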
Pech, Daniel; Vidal-Martínez, Víctor M; Aguirre-Macedo, M Leopoldina; Gold-Bouchot, Gerardo; Herrera-Silveira, Jorge; Zapata-Pérez, Omar; Marcogliese, David J
2009-03-15
The suitability of using helminth communities as bioindicators of the environmental quality status of Yucatan coastal lagoons was tested on the checkered puffer (Spheroides testudineus) in four coastal lagoons along the Yucatan coast. The concentration of chemical pollutants in sediments, water quality parameters, helminth infracommunity characteristics, as well as fish physiological biomarkers, including EROD (7-ethoxyresorufin-O-deethylase) and catalase activities, were measured. Results from sediment analyses demonstrated the presence of hydrocarbons, organochlorine pesticides and polychlorinated biphenyls at varying concentrations, some of which exceeded the Probability Effect Level (PEL). Significant negative associations among organochlorine pesticides, infracommunity characteristics and fish physiological responses were observed in most of the lagoons. Results suggest that EROD activity and parasite infracommunity characteristics could be useful tools to evaluate the effects of chemical pollutants on the fish host and in the environment. Importantly, certain parasites appear to influence biomarker measurements, indicating that parasites should be considered in ecotoxicological studies.
22 CFR Appendix E to Part 62 - Unskilled Occupations
Code of Federal Regulations, 2011 CFR
2011-04-01
... Cleaners (10) Chauffeurs and Taxicab Drivers (11) Cleaners, Hotel and Motel (12) Clerks, General (13) Clerks, Hotel (14) Clerks and Checkers, Grocery Stores (15) Clerk Typist (16) Cooks, Short Order (17... Operators (21) Floorworkers (22) Groundskeepers (23) Guards (24) Helpers, any industry (25) Hotel Cleaners...
What Schools Are Doing. A Roundup of New and Unusual School Practices
ERIC Educational Resources Information Center
Nation's Schools, 1972
1972-01-01
Describes a teen-run cafeteria, a program of giving away obsolete texts, a short term investment plan (a programed approach to cash-flow budgeting), an "emergency credit" plan whereby teachers can acquire credit hours outside school, and an automated attendance checker system. (DN)
Student Online Plagiarism: How Do We Respond?
ERIC Educational Resources Information Center
Scanlon, Patrick M.
2003-01-01
The perception that Internet plagiarism by university students is on the rise has alarmed college teachers, leading to the adoption of electronic plagiarism checkers, among other responses. Although some recent studies suggest that estimates of online plagiarism may be exaggerated, cause for concern remains. This article reviews quantitative…
Choosing the Right Database Management Program.
ERIC Educational Resources Information Center
Vockell, Edward L.; Kopenec, Donald
1989-01-01
Provides a comparison of four database management programs commonly used in schools: AppleWorks, the DOS 3.3 and ProDOS versions of PFS, and MECC's Data Handler. Topics discussed include information storage, spelling checkers, editing functions, search strategies, graphs, printout formats, library applications, and HyperCard. (LRW)
Magel, Jennifer M T; Pleizier, Naomi; Wilson, Alexander D M; Shultz, Aaron D; Vera Chang, Marilyn N; Moon, Thomas W; Cooke, Steven J
2017-01-01
As human populations continue to expand, increases in coastal development have led to the alteration of much of the world's mangrove habitat, creating problems for the multitude of species that inhabit these unique ecosystems. Habitat alteration often leads to changes in habitat complexity and predation risk, which may serve as additional stressors for those species that rely on mangroves for protection from predators. However, few studies have been conducted to date to assess the effects of these specific stressors on glucocorticoid (GC) stress hormone levels in wild fish populations. Using the checkered puffer as a model, our study sought to examine the effects of physical habitat complexity and predator environment on baseline and acute stress-induced GC levels. This was accomplished by examining changes in glucose and cortisol concentrations of fish placed in artificial environments for short periods (several hours) where substrate type and the presence of mangrove roots and predator cues were manipulated. Our results suggest that baseline and stress-induced GC levels are not significantly influenced by changes in physical habitat complexity or the predator environment using the experimental protocol that we applied. Although more research is required, the current study suggests that checkered puffers may be capable of withstanding changes in habitat complexity and increases in predation risk without experiencing adverse GC-mediated physiological effects, possibly as a result of the puffers' unique morphological and chemical defenses that help them to avoid predation in the wild. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Powell, John D.; Owens, David; Menzies, Tim
2004-01-01
The difficulty of testing large systems, such as the one on board a NASA robotic remote explorer (RRE) vehicle, is fundamentally a search issue: the problem of exploring the global state space representing all possible behaviors has yet to be solved, even after many decades of work. Randomized algorithms have been known to outperform their deterministic counterparts for search problems representing a wide range of applications. In the case study presented here, the LURCH randomized algorithm proved adequate to the task of testing a NASA RRE vehicle. LURCH found all the errors found by an earlier analysis with a more complete method (SPIN). Our empirical results are that LURCH can scale to much larger models than standard model checkers like SMV and SPIN. Further, the LURCH analysis was simpler than the SPIN analysis. The simplicity and scalability of LURCH are two compelling reasons for experimenting further with this tool.
ERIC Educational Resources Information Center
Szekely, George
2000-01-01
Explores children's fascination with creating their own unique games as an art form. Focuses on different games, such as chess, checkers, pogs, and monopoly. States that observing children playing games offers a firsthand lesson in how children create. Discusses what it means to be an art teacher who promotes creative play with games. (CMK)
Federal Workplace Literacy Project. Internal Evaluation Report.
ERIC Educational Resources Information Center
Matuszak, David J.
This report describes the following components of the Nestle Workplace Literacy Project: six job task analyses, curricula for six workplace basic skills training programs, delivery of courses using these curricula, and evaluation of the process. These six job categories were targeted for training: forklift loader/checker, BB's processing systems…
ERIC Educational Resources Information Center
Straumanis, Joan
A major problem in teaching symbolic logic is that of providing individualized and early feedback to students who are learning to do proofs. To overcome this difficulty, a computer program was developed which functions as a line-by-line proof checker in Sentential Calculus. The program, DEMON, first evaluates any statement supplied by the student…
Device-Enabled Authorization in the Grey System
2005-02-01
proof checker. Journal of Automated Reasoning 31(3-4):231-260, 2003. [7] D. Balfanz, D. Dean, and M. Spreitzer. A security infrastructure for... distributed Java applications. In Proceedings of the 21st IEEE Symposium on Security and Privacy, May 2002. [8] D. Balfanz and E. Felten. Hand-held computers
Hoosier Lawmaker? Vouchers, ALEC Legislative Puppets, and Indiana's Abdication of Democracy
ERIC Educational Resources Information Center
Shaffer, Michael B.; Ellis, John G.; Swensson, Jeff
2018-01-01
"Getting poor kids out of failing schools" sounds like an altruistic cause most Americans support. However, one policy mechanism utilized to achieve that result, parental choice vouchers, has a checkered past. This descriptive analysis explores the policy-bubble created when state legislators eschewed their constitutional responsibility…
Improving ESL Writing Using an Online Formulaic Sequence Word-Combination Checker
ERIC Educational Resources Information Center
Grami, G. M. A.; Alkazemi, B. Y.
2016-01-01
Writing correct English sentences can be challenging. Furthermore, writing correct formulaic sequences can be especially difficult because accepted combinations do not follow clear rules governing which words appear together in a sequence. One solution is to provide examples of correct usage accompanied by statistical feedback from web-based…
Computer Language Settings and Canadian Spellings
ERIC Educational Resources Information Center
Shuttleworth, Roger
2011-01-01
The language settings used on personal computers interact with the spell-checker in Microsoft Word, which directly affects the flagging of spellings that are deemed incorrect. This study examined the language settings of personal computers owned by a group of Canadian university students. Of 21 computers examined, only eight had their Windows…
Troublemaker: The Education of Chester Finn
ERIC Educational Resources Information Center
Finn, Chester E., Jr.
2008-01-01
"Troublemaker," the memoir of "Education Next" senior editor and veteran education reformer Chester E. "Checker" Finn Jr., weaves into the chronicle of Finn's life and career the broader history of education reform, in which he has played a vital and sometimes rambunctious role. Currently president of the Thomas B.…
Writing Conferences Using the Microcomputer.
ERIC Educational Resources Information Center
Pufahl, John
1986-01-01
Describes a teaching strategy using Apple IIe computers in a sequence of individual conferences. Includes asking questions while scrolling through the paper, showing students how to elaborate ideas by entering suggested changes and prompts in capital letters during the conference, and using a spelling checker to prompt revision (e.g., by compiling…
Compositional schedulability analysis of real-time actor-based systems.
Jaghoori, Mohammad Mahdi; de Boer, Frank; Longuet, Delphine; Chothia, Tom; Sirjani, Marjan
2017-01-01
We present an extension of the actor model with real-time, including deadlines associated with messages, and explicit application-level scheduling policies, e.g. "earliest deadline first", which can be associated with individual actors. Schedulability analysis in this setting amounts to checking whether, given a scheduling policy for each actor, every task is processed within its designated deadline. To check schedulability, we introduce a compositional automata-theoretic approach, based on maximal use of model checking combined with testing. Behavioral interfaces define what an actor expects from the environment, and the deadlines for messages given these assumptions. We use model checking to verify that actors match their behavioral interfaces. We extend timed automata refinement with the notion of deadlines and use it to define compatibility of actor environments with the behavioral interfaces. Model checking of compatibility is computationally hard, so we propose a special testing process. We show that the analyses are decidable and automate the process using the Uppaal model checker.
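As a concrete rendering of the scheduling question (not of the paper's automata-theoretic method), the sketch below replays one actor's message queue under a non-preemptive earliest-deadline-first policy and reports any deadline miss; the message parameters are hypothetical.

    import heapq

    # Messages: (arrival_time, processing_time, relative_deadline), hypothetical.
    messages = [(0, 3, 7), (1, 3, 4), (2, 1, 9), (4, 2, 5)]

    # EDF, non-preemptive: always serve the queued message with the smallest
    # absolute deadline; flag any message that finishes past its deadline.
    queue, clock, i = [], 0, 0
    msgs = sorted(messages)
    while queue or i < len(msgs):
        while i < len(msgs) and msgs[i][0] <= clock:   # admit arrivals
            arr, dur, rel = msgs[i]
            heapq.heappush(queue, (arr + rel, dur, arr)); i += 1
        if not queue:                   # idle until the next arrival
            clock = msgs[i][0]
            continue
        deadline, dur, arr = heapq.heappop(queue)
        clock = max(clock, arr) + dur   # process to completion
        status = "OK" if clock <= deadline else "MISS"
        print(f"msg arrived {arr}: finished {clock}, deadline {deadline} -> {status}")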
Specification and Verification of Web Applications in Rewriting Logic
NASA Astrophysics Data System (ADS)
Alpuente, María; Ballis, Demis; Romero, Daniel
This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.
Experimental Evaluation of a Planning Language Suitable for Formal Verification
NASA Technical Reports Server (NTRS)
Butler, Rick W.; Munoz, Cesar A.; Siminiceanu, Radu I.
2008-01-01
The marriage of model checking and planning faces two seemingly diverging alternatives: the need for a planning language expressive enough to capture the complexity of real-life applications, as opposed to a language simple, yet robust enough to be amenable to exhaustive verification and validation techniques. In an attempt to reconcile these differences, we have designed an abstract plan description language, ANMLite, inspired by the Action Notation Modeling Language (ANML) [17]. We present the basic concepts of the ANMLite language as well as an automatic translator from ANMLite to the model checker SAL (Symbolic Analysis Laboratory) [7]. We discuss various aspects of specifying a plan in terms of constraints and explore the implications of choosing a robust logic behind the specification of constraints, rather than simply proposing a new planning language. Additionally, we provide an initial assessment of the efficiency of model checking in searching for solutions to planning problems. To this end, we design a basic test benchmark and study the scalability of the generated SAL models in terms of plan complexity.
Immunotherapy for glioblastoma: playing chess, not checkers.
Jackson, Christopher M; Lim, Michael
2018-04-24
Patients with glioblastoma (GBM) exhibit a complex state of immune dysfunction involving multiple mechanisms of local, regional, and systemic immune suppression and tolerance. These pathways are now being identified and their relative contributions explored. Delineating how these pathways are interrelated is paramount to effectively implementing immunotherapy for GBM. Copyright ©2018, American Association for Cancer Research.
Checking the Grammar Checker: Integrating Grammar Instruction with Writing.
ERIC Educational Resources Information Center
McAlexander, Patricia J.
2000-01-01
Notes Rei Noguchi's recommendation of integrating grammar instruction with writing instruction and teaching only the most vital terms and the most frequently made errors. Presents a project that provides a review of the grammar lessons, applies many grammar rules specifically to the students' writing, and teaches students the effective use of the…
Food Marketing: Cashier-Checker. Student Material. Competency Based Curriculum.
ERIC Educational Resources Information Center
Froelich, Larry; And Others
This curriculum for food marketing (cashier-checking) is designed to provide entry-level employment skills. It is organized into 13 units which contain one to ten competencies. A student competency sheet provided for each competency is organized into this format: unit and competency number and name, learning steps, learning activities, and…
The Use of Board Games in Child Psychotherapy
ERIC Educational Resources Information Center
Oren, Ayala
2008-01-01
Playing checkers, football or more recently, computer games, is an important part of the latency child's culture. The ability to play games demands a level of emotional development similar to that needed to cope with the emotional/developmental demands characteristic of latency. A game shared by the therapist and child provides a picture of the…
NASA Astrophysics Data System (ADS)
MacDonald, J. A.; Shahrestani, S.; Weis, J. S.
2009-09-01
Behaviors, activity budgets, and spatial locations of reef-associated schoolmaster snapper (Lutjanus apodus) and non-reef-associated checkered puffer (Sphoeroides testudineus) were cataloged in mangrove forests in Caribbean Honduras to see how and where they spent their time and whether this changed as they grew. For schoolmasters, swimming was the most common behavior, while checkered puffers spent the majority of their time resting. Both remained completely within (as opposed to outside) the mangrove roots and in the lower half of the water column most of the time. However, as the size of the fish increased there was a clear decrease in the time spent both within the root system and closer to the substrate; the larger fish spent more time higher up in the water column and outside the root system. This was observed in both the schoolmaster and the puffer; the schoolmaster subsequently moves to reefs while the puffer does not. Coupled with limited feeding, the results suggest a primarily protective function for mangroves.
Vilallonga, Gabriel D.; de Almeida, Antônio-Carlos G.; Ribeiro, Kelison T.; Campos, Sergio V. A.
2018-01-01
The sodium–potassium pump (Na+/K+ pump) is crucial for cell physiology. Despite great advances in the understanding of this ionic pumping system, its mechanism is not completely understood. We propose the use of a statistical model checker to investigate palytoxin (PTX)-induced Na+/K+ pump channels. We modelled a system of reactions representing transitions between the conformational substates of the channel with parameters, concentrations of the substates and reaction rates extracted from simulations reported in the literature, based on electrophysiological recordings in a whole-cell configuration. The model was implemented using the UPPAAL-SMC platform. Comparing simulations and probabilistic queries from stochastic system semantics with experimental data, it was possible to propose additional reactions to reproduce the single-channel dynamic. The probabilistic analyses and simulations suggest that the PTX-induced Na+/K+ pump channel functions as a diprotomeric complex in which protein–protein interactions increase the affinity of the Na+/K+ pump for PTX. PMID:29657808
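The modelling style above (reaction rates, stochastic semantics, probabilistic queries) can be illustrated with a Gillespie-type simulation of a toy two-state (closed/open) channel; the rates below are placeholders, not the paper's fitted substate scheme.

    import math, random

    # Toy two-state channel, closed <-> open, with placeholder rates (per second).
    k_open, k_close = 50.0, 20.0

    def simulate(t_end=1.0, seed=1):
        random.seed(seed)
        t, state, open_time = 0.0, "closed", 0.0
        while t < t_end:
            rate = k_open if state == "closed" else k_close
            dwell = -math.log(1.0 - random.random()) / rate  # exponential wait
            if state == "open":
                open_time += min(dwell, t_end - t)
            t += dwell
            state = "open" if state == "closed" else "closed"
        return open_time / t_end

    # Empirical open probability vs. the analytic value k_open / (k_open + k_close).
    print(simulate(), k_open / (k_open + k_close))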
Finding Feasible Abstract Counter-Examples
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Dwyer, Matthew B.; Visser, Willem; Clancy, Daniel (Technical Monitor)
2002-01-01
A strength of model checking is its ability to automate the detection of subtle system errors and produce traces that exhibit those errors. Given the high computational cost of model checking most researchers advocate the use of aggressive property-preserving abstractions. Unfortunately, the more aggressively a system is abstracted the more infeasible behavior it will have. Thus, while abstraction enables efficient model checking it also threatens the usefulness of model checking as a defect detection tool, since it may be difficult to determine whether a counter-example is feasible and hence worth developer time to analyze. We have explored several strategies for addressing this problem by extending an explicit-state model checker, Java PathFinder (JPF), to search for and analyze counter-examples in the presence of abstractions. We demonstrate that these techniques effectively preserve the defect detection ability of model checking in the presence of aggressive abstraction by applying them to check properties of several abstracted multi-threaded Java programs. These new capabilities are not specific to JPF and can be easily adapted to other model checking frameworks; we describe how this was done for the Bandera toolset.
Enabling model checking for collaborative process analysis: from BPMN to `Network of Timed Automata'
NASA Astrophysics Data System (ADS)
Mallek, Sihem; Daclin, Nicolas; Chapurlat, Vincent; Vallespir, Bruno
2015-04-01
Interoperability is a prerequisite for partners involved in performing collaboration. As a consequence, the lack of interoperability is now considered a major obstacle. The research work presented in this paper aims to develop an approach that allows specifying and verifying a set of interoperability requirements to be satisfied by each partner in the collaborative process prior to process implementation. To enable the verification of these interoperability requirements, it is necessary first and foremost to generate a model of the targeted collaborative process; for this research effort, the standardised language BPMN 2.0 is used. Afterwards, a verification technique must be introduced, and model checking is the preferred option herein. This paper focuses on application of the model checker UPPAAL in order to verify interoperability requirements for the given collaborative process model. At first, this step entails translating the collaborative process model from BPMN into a UPPAAL modelling language called 'Network of Timed Automata'. Second, it becomes necessary to formalise interoperability requirements into properties with the dedicated UPPAAL language, i.e. the temporal logic TCTL.
TechWriter: An Evolving System for Writing Assistance for Advanced Learners of English
ERIC Educational Resources Information Center
Napolitano, Diane M.; Stent, Amanda
2009-01-01
Writing assistance systems, from simple spelling checkers to more complex grammar and readability analyzers, can be helpful aids to nonnative writers of English. However, many writing assistance systems have two disadvantages. First, they are not designed to encourage skills learning and independence in their users; instead, users may begin to use…
Checker Takes the Guesswork out of Diode Identification
ERIC Educational Resources Information Center
Harman, Charles
2011-01-01
At technical colleges and secondary-level tech schools, students enrolled in basic electronics labs who have learned about diodes that do rectification are used to seeing power diodes like the 1N4001. When the students are introduced to low-power zener diodes and signal diodes, component identification gets more complex. If the small zeners are…
Don't Lose Your Marbles!: Game Project Teaches Introductory Manufacturing Skills
ERIC Educational Resources Information Center
Kapur, Arjun; Carter, Horlin; Dillon, Dave
2006-01-01
This article describes a lab activity conducted in an introductory manufacturing class. In this good, simple, mass-production project, the students designed and produced a small game composed of a piece of plywood and 14 glass marbles. In appearance, the game is something like Chinese checkers, but it involves jumping over marbles, then removing…
Check that JFET! Easy-to-Build Tester Makes It Simple
ERIC Educational Resources Information Center
Harman, Charles
2008-01-01
This article describes an activity that will allow students to learn how to make a junction field effect transistor (JFET) checker. Most electronics students do not have the experience or knowledge that it takes to recognize whether a JFET is operating normally, and both instructors and students will find having the means to check the operation of…
Dave Moore: Taking Roundabout Path to Perovskite Fast Track | News | NREL
energy of academia is awesome and contagious. It keeps you young to hang around young people and keep learning." Although he'd had a checkered high school academic career prior to stepping on the college," Moore said. "That's where I first learned about the energy crisis." And that's when he
2013-12-01
First, any subproject that involved an implementation shared some implementation infrastructure with other subprojects. For example, the Plaid backend ...very same language. We followed this advice in Plaid, and we therefore implemented the compiler backend in Plaid (code generation, type checker, Æminim...programming language aimed at enforcing security properties in web and mobile applications [Nistor et al., 2013]. Wyvern therefore provides an excellent
Mountain Plains Learning Experience Guide: Marketing. Course: Cash Register Operation.
ERIC Educational Resources Information Center
Egan, B.
One of thirteen individualized courses included in a marketing curriculum, this course is on the fundamentals of operating a cash register. The course is comprised of four units: (1) Face of Cash Register, (2) Operating a Checkout Station, (3) Checker-Cashier Qualities, and (4) NCR 250 Electronic Cash Register. Each unit begins with a Unit…
To Kill a Messenger; Television News and the Real World.
ERIC Educational Resources Information Center
Small, William
From his vantage point as News Director of CBS News in Washington, the author examines the role of television news in our society and gives an insider's view of the day-to-day process of selecting and presenting news. Highlighting the book are in-depth discussions of past and recent news events. The Nixon "Checkers" speech, John…
Studies Relating to Computer Use of Spelling and Grammar Checkers and Educational Achievement
ERIC Educational Resources Information Center
Radi, Odette Bourjaili
2015-01-01
The content of this paper will focus on both language and computer practices and how school age students develop their literacy skills in the two domains of "language" and "computers." The term literacy is a broad concept that has attracted many interpretations over the years. Some of the concepts raised by the literature apply…
A Preliminary Report on a New Grammar Checker to Help Students of English as a Foreign Language
ERIC Educational Resources Information Center
Lawley, Jim
2004-01-01
Whereas many pre-intermediate and intermediate level students of English as a Foreign Language (EFL) might benefit from receiving detailed feedback on mistakes in their written compositions, there are obvious practical limits to the amount of corrective feedback that teachers in schools and universities can provide. This article briefly describes…
Adding Statistical Machine Translation Adaptation to Computer-Assisted Translation
2013-09-01
are automatically searched and used to suggest possible translations; (2) spell-checkers; (3) glossaries; (4) dictionaries; (5) alignment and... matching against TMs to propose translations; spell-checking, glossary, and dictionary look-up; support for multiple file formats; regular expressions... on Telecommunications. Tehran, 2012, 822-826. Bertoldi, N.; Federico, M. Domain Adaptation for Statistical Machine Translation with Monolingual
Comprehension Monitoring by Elementary Students: When Does It Occur?
ERIC Educational Resources Information Center
Pace, Ann Jaffe
The effect of passage topic and task demands on elementary school students' monitoring of their own comprehension was examined. Second, fourth, and sixth grade students read a short passage about a well-known event (playing checkers) or one about which they had little existing information (making lye soap). Half of the students in each grade were…
Cerebral oximetry: a replacement for pulse oximetry?
Frost, Elizabeth A M
2012-10-01
Cerebral oximetry has been around for some 3 decades but has had a somewhat checkered history regarding application and reliability. More recently several monitors have been approved in the United States and elsewhere and the technique is emerging as a useful tool for assessing not only adequate cerebral oxygenation but also tissue oxygenation and perfusion in other organs.
Test SCRs and Triacs with a Lab-Built Checker
ERIC Educational Resources Information Center
Harman, Charles
2010-01-01
Students enrolled in advanced electronics courses and/or industrial electronics classes at the high school level and at technical colleges ultimately learn about solid-state switches such as the SCR (silicon controlled rectifier) and the triac. Both the SCR and the triac are in a family of four-layer devices called thyristors. They are both…
NASA Astrophysics Data System (ADS)
Harkness, E. F.; Lim, Y. Y.; Wilson, M. W.; Haq, R.; Zhou, J.; Tate, C.; Maxwell, A. J.; Astley, S. M.; Gilbert, F. J.
2015-03-01
Digital breast tomosynthesis (DBT) addresses limitations of 2-D projection imaging for detection of masses. Microcalcification clusters may be more difficult to appreciate in DBT as individual calcifications within clusters may appear on different slices. This research aims to evaluate the performance of ImageChecker 3D Calc CAD v1.0. Women were recruited as part of the TOMMY trial. From the trial, 169 were included in this study. The DBT images were processed with the computer aided detection (CAD) algorithm. Three consultant radiologists reviewed the images and recorded whether CAD prompts were on or off target. 79/80 (98.8%) malignant cases had a prompt on the area of microcalcification. In these cases, there were 1-15 marks (median 5) with the majority of false prompts (n=326/431) due to benign (68%) and vascular (24%) calcifications. Of 89 normal/benign cases, there were 1-13 prompts (median 3), 27 (30%) had no prompts and the majority of false prompts (n=238) were benign (77%) calcifications. CAD is effective in prompting malignant microcalcification clusters and may overcome the difficulty of detecting clusters in slice images. Although there was a high rate of false prompts, further advances in the software may improve specificity.
Certified In-lined Reference Monitoring on .NET
2006-06-01
Introduction: Language-based approaches to computer security have employed two major strategies for enforcing security policies over untrusted programs. • Low... automatically verify IRMs using a static type-checker. Mobile (MOnitorable BIL with Effects) is an extension of BIL (Baby Intermediate Language) [15], a...
ERIC Educational Resources Information Center
Thompson, Merlin B.
2015-01-01
The problem with authenticity--the idea of being "true to one's self"--is that its somewhat checkered reputation garners a complete range of favorable and unfavorable reactions. In educational settings, authenticity is lauded as one of the top two traits students desire in their teachers. Yet, authenticity is criticized for its tendency…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-23
... Brentwood School and MacArthur Field as part of the land use agreements for those spaces. A planned future..., the historic Rose Garden, will be completed in fall of 2011. This area, located just across the street from the Domiciliary, will include meditative gardens, tables for chess and checkers, and soothing...
Checker Board Model predicts 5 Generations of Quarks
NASA Astrophysics Data System (ADS)
Lach, Theodore M., II
2002-10-01
The Checker Board Model (CBM) of the nucleus is a 2-dimensional model that relies on the belief that nature is superbly symmetric and the belief that the synchronization of the 2 outer rotating quarks in the nucleons accounts for the magnetic moment of the nucleons, and that the magnetic flux from the nucleons couples (weaves) into the checker board array structures; this, in addition to the electrostatic forces of the rotating and stationary quarks, accounts for the apparent strong nuclear force. The symmetry of the He nucleus, one might call it super symmetric, helps explain the stable structure of the alpha particle. A semi-classical (relativistic) approach was used to explain the mass of the proton and neutron, along with their magnetic moments and their absolute and relative sizes, in terms of the above structure and two newly proposed quarks (1): the "up" and the "dn" quarks, not to be confused with the lighter u and d quarks in the standard model. Using the prescribed 2D checkerboard arrays, where protons go on dark squares and neutrons go on light squares, one is able to recreate all the known nuclei. This exercise came up with structures that could explain the rationale for the halo nuclei and why 9He is so unstable whereas 8He is much more stable and 10He is not a bound structure. Since the heavy masses of the "up" and "dn" quarks (237.31 MeV and 42.392 MeV respectively) did not fit within the standard model as candidates for u and d, a new model (new physics) had to be invented that would explain why "up" and "dn" are so heavy. Trial and error resulted in the empirical fitting of these two new quarks into a scheme that placed them between the mass of the u/d and the c/s quarks. This new particle physics model predicts that nature has 5 generations, not 3. Perhaps it should be called the 5G model. The two new generations come from the "up" and "dn" quarks and a much heavier generation of a 42.5 GeV "massive dn" (Big Bottom, B') and a 65 GeV "massive up" (Big Top, T') quark. This model was uploaded to the lanl web server just before the Aug. 2000 NP meeting at Michigan State. (2) Subsequent versions of that paper explain the rationale for the 27 GeV lepton, which this model dubbed the "gluon"; for that information please refer to pages 608 to 618 of the "Rise of the Standard Model", editors: Hoddeson, Brown, Riordan, and Dresden. NOTE: the 27 GeV lepton was predicted by this model and uploaded before the author read the Rise of the Standard Model, so indeed it was a prediction, not a fitting. Early on it was believed that the 27 GeV resonance was attributable to the finding of the gluon, yet today this is not the case and the standard model requires that the gluon be massless. One key conflict between the 3G and 5G models is that the 3G model has established the mass of the "t" quark as 175 GeV based upon experimental findings, whereas the 5G model anticipates that the "t" quark is hidden among the Upsilon mesons between 10 and 11 GeV. The recent find by CERN of a 113 GeV particle may end up being the meson of the combination of the massive "up" and massive "dn" (T' B'). The final convincing argument in favor of the 5G model is that all the masses of the quarks and leptons in this model are related to one another through a simple geometric mean. The masses of all the "up-like" quarks fall on a straight line on semi-log paper, as do the "dn-like" quarks and the leptons. All three of these (equally spaced) curves come together at about 430 GeV.
One last point: the upper-bound masses of the neutrinos also fall on a straight line in this theory and also appear to join the other curves at about 430 GeV. The Checkerboard Model has a close association with the cluster model, since the structures envisioned by the cluster model are totally compatible with the CBM. Binding energies of the nuclei are also easily explained in this 2D model; see reference #1. (1). T.M. Lach, Checkerboard Structure of the Nucleus, Infinite Energy, Vol. 5, issue 30, (2000). (2). T.M. Lach, Masses of the Sub-Nuclear Particles, nucl-th/0008026, @http://xxx.lanl.gov/
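The "straight line on semi-log paper" statement is equivalent to saying that successive masses form a geometric progression; as a clarifying note (standard algebra, not taken from the paper):

    % Equal spacing on a semi-log plot <=> a geometric progression:
    \[
      \frac{m_{n+1}}{m_n} = r \;\;\text{(constant)}
      \quad\Longrightarrow\quad
      \log m_n = \log m_0 + n\,\log r ,
      \qquad
      m_n = \sqrt{m_{n-1}\, m_{n+1}} ,
    \]
    % i.e. $\log m_n$ is linear in the generation index $n$, and each mass
    % is the geometric mean of its two neighbours.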
Flight Guidance System Requirements Specification
NASA Technical Reports Server (NTRS)
Miller, Steven P.; Tribble, Alan C.; Carlson, Timothy M.; Danielson, Eric J.
2003-01-01
This report describes a requirements specification written in the RSML-e language for the mode logic of a Flight Guidance System of a typical regional jet aircraft. This model was created as one of the first steps in a five-year project sponsored by the NASA Langley Research Center, Rockwell Collins Inc., and the Critical Systems Research Group of the University of Minnesota to develop new methods and tools to improve the safety of avionics designs. This model will be used to demonstrate the application of a variety of methods and techniques, including safety analysis of system and subsystem requirements, verification of key properties using theorem provers and model checkers, identification of potential sources of mode confusion in system designs, partitioning of applications based on the criticality of system hazards, and autogeneration of avionics quality code. While this model is representative of the mode logic of a typical regional jet aircraft, it does not describe an actual or planned product. Several aspects of a full Flight Guidance System, such as recovery from failed sensors, have been omitted, and no claims are made regarding the accuracy or completeness of this specification.
Hiraishi, Kunihiko
2014-01-01
One of the significant topics in systems biology is to develop control theory for gene regulatory networks (GRNs). In typical control of GRNs, expression of some genes is inhibited (activated) by manipulating external stimuli and the expression of other genes. Control theory of GRNs is expected to be applied to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, and gene expression is expressed by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for control theory of GRNs. PMID:24587766
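To make the model class concrete (toy rules, not the WNT5A network): a context-sensitive PBN keeps several candidate rule sets ("contexts") and switches between them at random before each synchronous update, as in this sketch.

    import random

    # Toy context-sensitive probabilistic Boolean network over 3 genes.
    # Each context is one deterministic rule set; with probability q the
    # network switches context before updating. Rules are hypothetical.
    contexts = [
        lambda s: (s[1], s[0] & s[2], 1 - s[0]),
        lambda s: (s[1] | s[2], s[0], s[2]),
    ]
    q = 0.1                       # context-switch probability

    def run(state=(1, 0, 1), steps=10, seed=0):
        random.seed(seed)
        ctx = 0
        for _ in range(steps):
            if random.random() < q:
                ctx = random.randrange(len(contexts))
            state = contexts[ctx](state)
            print(ctx, state)

    run()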
Model-Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds of the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
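The core encoding idea, an arithmetic function stored as additive edge values summed along a single root-to-terminal path, can be shown in a few lines (a schematic data structure, not the library's actual implementation):

    # Minimal edge-valued decision diagram: each node tests one variable and
    # each outgoing edge carries an additive offset; f(x) is the sum of the
    # offsets along the unique path selected by the assignment.
    class Node:
        def __init__(self, var, edges):
            self.var = var        # variable index this node tests
            self.edges = edges    # one (offset, child_or_None) pair per value

    def evaluate(node, assignment, acc=0):
        while node is not None:
            offset, node = node.edges[assignment[node.var]]
            acc += offset
        return acc

    # f(x0, x1) = 3*x0 + x1, encoded with a shared subgraph for the x1 term:
    leaf = Node(1, [(0, None), (1, None)])    # contributes x1
    root = Node(0, [(0, leaf), (3, leaf)])    # contributes 3*x0
    for a in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, evaluate(root, a))           # 0, 1, 3, 4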
Spatiotemporal access model based on reputation for the sensing layer of the IoT.
Guo, Yunchuan; Yin, Lihua; Li, Chao; Qian, Junyan
2014-01-01
Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring the more general models. Unreliable communications and resource constraints make the traditional access control techniques barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes. These new approaches are the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model.
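A schematic of the decision rule (entirely illustrative; the paper's lattice of zones and its AUM/EUM reputation updates are richer than this): grant access only when the request falls inside an authorized space-time zone and the node's reputation clears that zone's threshold.

    # Illustrative spatiotemporal-plus-reputation access check. Zones, hours,
    # and thresholds are hypothetical, not the STRAC policy language.
    policy = {
        "ward_sensors": {"zone": "ward", "hours": range(8, 20), "min_rep": 0.6},
        "lab_sensors":  {"zone": "lab",  "hours": range(0, 24), "min_rep": 0.8},
    }
    reputation = {"node17": 0.72}

    def allowed(node, resource, zone, hour):
        rule = policy[resource]
        return (zone == rule["zone"]
                and hour in rule["hours"]
                and reputation.get(node, 0.0) >= rule["min_rep"])

    print(allowed("node17", "ward_sensors", "ward", 14))  # True
    print(allowed("node17", "lab_sensors",  "lab",  14))  # False: reputation too low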
Land Desertification and its Control in the Gonghe Basin of the Qinghai Plateau, China
NASA Astrophysics Data System (ADS)
Zhang, D.; Gao, S.; Lu, R.
2009-12-01
Land desertification is an important environmental and socio-economic problem that threatens people's living conditions and impacts sustainable social development. The Gonghe basin in the Qinghai Plateau is a fragile, cold alpine area and one of the places in China most seriously threatened by desertification. This paper selected the Gonghe basin as a study area to investigate sandy land desertification and its controlling measures. The engineering measures for sandy desertification control include setting clay sand barriers, Salix cheilophila sand barriers, Tamarix sand barriers, Artemisia sand barriers and straw-checker sand barriers to fix dunes; the biological measures include closure for natural vegetation recovery, direct seeding forestation, transplanting seedlings, and so on. The combination of engineering and biological measures can fix dunes 2~3 years earlier than a single measure, at essentially the same cost. A synthesized evaluation system established from experimental results and experience in recent years indicated that the effectiveness of the four kinds of sand barrier for prevention and control of sand in the study area was: Tamarix sand barrier > Artemisia sand barrier > clay sand barrier > straw-checker sand barrier. In addition, different optimized management models can be selected according to local materials and geographical place. New plants such as Salix cheilophila and Tamarix, which are available in the study area, can turn a dead sand barrier into a live one when set in the proper season, and changing an engineering measure directly into a biological one speeds the progress of forestation and dune fixation. In addition, we developed a new technique of deep-planting Salix cheilophila and Tamarix with their long stems, which can effectively resist drought. We found that it had lower cost and a higher survival rate, and a better sand-prevention effect, than deep planting of poplar. Finally, we chose the optimized management model as follows: Artemisia direct seeding > Caragana direct seeding, Tamarix cutting and seedling > Salix cheilophila deep planting, sea-buckthorn seedling > Tamarix deep planting > Tamarix seedling > poplar deep planting > Salix cheilophila seedling > poplar seedling. This resolves the key problems of controlling sand flow, low efficiency, sand burial and wind erosion, and the low survival rate of forestation in the sandy area.
U.S. Navy Fault-Tolerant Microcomputer.
1982-07-01
... maintainability. Computer errors at any significant level can be disastrous in terms of human injury, aborted missions, loss of critical information and... employed to resolve the question "who checks the checker?" The IOC votes on information received from the bus and outputs the majority decision. Thus no
Making Microscopic Cubes Of Boron
NASA Technical Reports Server (NTRS)
Faulkner, Joseph M.
1993-01-01
Production of finely divided cubes of boron involves vacuum-deposition technology and requires making of a template. The template supports a pattern of checkered squares 25 micrometers on a side, which are etched 25 micrometers into the template material. The template is coated uniformly with parylene or some similar vacuum coating with a low coefficient of adhesion. Intended applications include solid rocket fuels, explosives, and pyrotechnics; the process can be used for other applications, from manufacture of pharmaceuticals to processing of nuclear materials.
A Study on Run Time Assurance for Complex Cyber Physical Systems
2013-04-18
safety verification approach was applied to synchronization of distributed local clocks of the nodes on a CAN bus by Jiang et al. [36]. The class of... mode of interaction between the instrumented system and the checker, we distinguish between synchronous and asynchronous monitoring. In synchronous... occurred. Synchronous monitoring may deliver a higher degree of assurance than the asynchronous one, because it can block a dangerous action. However
ERIC Educational Resources Information Center
Fredholm, Kent
2014-01-01
The use of online translation (OT) is increasing as more pupils receive laptops from their schools. This study investigates OT use in two groups of Swedish pupils (ages 17-18) studying Spanish as an L3: one group (A) having free Internet access and the spelling and grammar checker of Microsoft Word, the other group (B) using printed dictionaries…
NASA Astrophysics Data System (ADS)
Mohamad, M.; Sabbri, A. R. M.; Mat Jafri, M. Z.; Omar, A. F.
2014-11-01
Near infrared (NIR) spectroscopy serves as an important tool for the measurement of the moisture content of skin owing to the advantages it has over other techniques. The purpose of the study is to develop a correlation between an NIR spectrometer and conventional electrical techniques for skin moisture measurement. A non-invasive measurement of the moisture content of skin was performed on different parts of the human face and hand under a controlled environment (temperature 21 ± 1 °C, relative humidity 45 ± 5 %). Ten healthy volunteers aged between 21 and 25 (male and female) participated in this study. The moisture content of skin was measured using the DermaLab® USB Moisture Module, the Scalar Moisture Checker and NIR spectroscopy (NIRQuest). A higher correlation was observed between the NIRQuest and the DermaLab moisture probe, with a coefficient of determination (R2) above 70 % for all the subjects. However, the R2 between the NIRQuest and the Moisture Checker was lower, with values ranging from 51.6 to 94.4 %. A correlation for the NIR spectroscopy technique was thus successfully developed for measuring the moisture content of skin. Analysis of this correlation can help to establish novel optical instruments for clinical use, especially in the dermatology field.
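The agreement statistic reported here is the ordinary coefficient of determination from a least-squares fit between paired readings; with hypothetical paired measurements it is computed as follows:

    import numpy as np

    # Hypothetical paired skin-moisture readings from two instruments.
    nir      = np.array([31.0, 35.2, 40.1, 44.8, 50.3, 55.6, 60.2])
    dermalab = np.array([30.1, 36.0, 39.5, 46.1, 49.0, 56.8, 61.0])

    # Least-squares line and coefficient of determination R^2.
    slope, intercept = np.polyfit(nir, dermalab, 1)
    pred = slope * nir + intercept
    ss_res = np.sum((dermalab - pred) ** 2)
    ss_tot = np.sum((dermalab - dermalab.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    print(f"R^2 = {r2:.3f}")   # values above 0.70 would match the DermaLab result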
Modeling and formal analysis of urban road traffic
NASA Astrophysics Data System (ADS)
Avram, Camelia; Machado, José; Aştilean, Adina
2013-10-01
Modern life in cities leads to complex urban road traffic, and sometimes going from one point to another in a city is a hard and very complex task. The use of assisted systems for helping drivers reach their desired destination is becoming common, mainly systems like GPS location systems and similar. The main shortcoming of those systems is that they are not able to assist drivers when unexpected changes occur, like accidents or other unexpected situations. In this context, it would be desirable to have a dynamic system to inform drivers about everything that is happening online. This work is inserted in this context, and the work presented here is one part of a bigger project whose main goal is a dynamic system for assisting drivers under hard conditions of urban road traffic. In this paper, the intersection of four street segments is modeled and formally analyzed in order to draw some conclusions about this subject. The paper presents the model of the considered system using the timed automata formalism. The validation and verification of the road traffic model are realized using the UPPAAL model checker.
On Secure Implementation of an IHE XUA-Based Protocol for Authenticating Healthcare Professionals
NASA Astrophysics Data System (ADS)
Masi, Massimiliano; Pugliese, Rosario; Tiezzi, Francesco
The importance of the Electronic Health Record (EHR) has been addressed in recent years by governments and institutions. Many large-scale projects have been funded with the aim of allowing healthcare professionals to consult patients' data. Properties such as confidentiality, authentication and authorization are key to the success of these projects. The Integrating the Healthcare Enterprise (IHE) initiative promotes the coordinated use of established standards for authenticated and secure EHR exchanges among clinics and hospitals. In particular, the IHE integration profile named XUA permits attesting user identities by relying on SAML assertions, i.e. XML documents containing authentication statements. In this paper, we provide a formal model for the secure issuance of such an assertion. We first specify the scenario using the process calculus COWS and then analyse it using the model checker CMC. Our analysis reveals a potential flaw in the XUA profile when a SAML assertion is used in an unprotected network. We then suggest a solution for this flaw, and model check and implement this solution to show that it is secure and feasible.
Tester Detects Steady-Short Or Intermittent-Open Circuits
NASA Technical Reports Server (NTRS)
Anderson, Bobby L.
1990-01-01
Momentary open circuits or steady short circuits trigger buzzer. Simple, portable, lightweight testing circuit sounds long-duration alarm when it detects steady short circuit or momentary open circuit in coaxial cable or other two-conductor transmission line. Tester sensitive to discontinuities lasting 10 microseconds or longer. Used extensively for detecting intermittent opens and shorts in accelerometer and extensometer cables. Also used as ordinary buzzer-type continuity checker to detect steady short or open circuits.
New name can sharpen a hospital's image--or diffuse checkered past.
Burns, J
1992-06-22
Hospitals change their names for a variety of reasons. Some seek a new identity that conveys a wider range of services; others need a new moniker as a result of a merger; still others hope a change can help the facility erase past problems. Whatever the case may be, a hospital's decision to change its name has evolved from a mere technical formality to a competitive, and often costly, marketing strategy.
ERIC Educational Resources Information Center
BUCKS COUNTY SCHOOL STUDY COUNCIL
A MATH RESEARCH CENTER MAY BE SET UP IN A CORNER OF A ROOM, PREFERABLY WITH A CHALK BOARD. CHILDREN ARE ABLE TO CREATE MANY OF THE GAMES, CHARTS AND STORIES TO BE USED IN SUCH A CENTER. THE CENTER MAY BE SUPPLEMENTED WITH BOOKS, GAMES, BLOCKS, POPPIT BEADS, COUNTERS, CHECKERS, AND NUMBER LINE CHARTS. ATTENTION IS GIVEN TO A WIDE VARIETY OF…
2017-03-01
Implementation of a Loosely-Coupled Lockstep Approach in the Xilinx Zynq-7000 All Programmable SoC™ for High Consequence Applications
Ryan D... (sandia.gov)
For high consequence applications requiring information assurance, the architecture of the Xilinx Zynq-7000 All Programmable ... transaction checker residing in the Programmable Logic portion of the Zynq device will be discussed along with implementation results and latency
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition checkered flag for the PhoEnix aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition checkered flag for the EcoEagle aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition checkered flag for the e-Genius aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Spatiotemporal Access Model Based on Reputation for the Sensing Layer of the IoT
Guo, Yunchuan; Yin, Lihua; Li, Chao
2014-01-01
Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring more general models. Unreliable communications and resource constraints mean that traditional access control techniques can barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes: the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model. PMID:25177731
The Checkerboard Model of the Nucleus
NASA Astrophysics Data System (ADS)
Lach, Theodore
2015-04-01
The Checker Board Model (CBM) of the nucleus and the associated extended standard model predict that nature has 5 generations of quarks, not 3, and that the nucleus is 2-dimensional. The CBM theory began with an insight into the structure of the He nucleus around the year 1989. Details of how this theory evolved over many years can be found on my web site (http://checkerboard.dnsalias.net) or in the following references. One independent check of this model is that the wavelength of the ``up'' quark orbiting inside the proton at 84.8123% of the speed of light (around the ``dn'' quark in the center of the proton) turns out to be exactly one de Broglie wavelength, something determined after the mass and speed of the up quark were established by other means. This theory explains the masses of the proton and neutron and their magnetic moments, and this, along with the beautiful symmetric 2D structure of the He nucleus, led to the evolution of this theory. When this theory was first presented at Argonne in 1996, it was the first time anyone had predicted that quarks orbit inside the proton at relativistic speeds, and it was met with skepticism.
Special Relativity at the Quantum Scale
Lam, Pui K.
2014-01-01
It has been suggested that the space-time structure as described by the theory of special relativity is a macroscopic manifestation of a more fundamental quantum structure (pre-geometry). Efforts to quantify this idea have come mainly from the area of abstract quantum logic theory. Here we present a preliminary attempt to develop a quantum formulation of special relativity based on a model that retains some geometric attributes. Our model is Feynman's “checker-board” trajectory for a 1-D relativistic free particle. We use this model to guide us in identifying (1) the quantum version of the postulates of special relativity and (2) the appropriate quantum “coordinates”. This model possesses the useful feature that it admits an interpretation both in terms of paths in space-time and in terms of quantum states. Based on the quantum version of the postulates, we derive a transformation rule for velocity. This rule reduces to Einstein's velocity-addition formula in the macroscopic limit and reveals an interesting aspect of time. The 3-D case, time-dilation effect, and invariant interval are also discussed in terms of this new formulation. This is a preliminary investigation; some results are derived, while others are interesting observations at this point. PMID:25531675
Elements of a next generation time-series ASCII data file format for Earth Sciences
NASA Astrophysics Data System (ADS)
Webster, C. J.
2015-12-01
Data in ASCII comma separated value (CSV) format are recognized as the most simple, straightforward and readable type of data present in the geosciences. Many scientific workflows developed over the years rely on data using this simple format. However, there is a need for a lightweight ASCII header format standard that is easy to create and easy to work with. Current OGC-grade XML standards are complex and difficult to implement for researchers with few resources. Ideally, such a format should provide the data in CSV for easy consumption by generic applications such as spreadsheets. The format should use an existing time standard. The header should be easily human readable as well as machine parsable. The metadata format should be extendable to allow vocabularies to be adopted as they are created by external standards bodies. The creation of such a format would increase the productivity of software engineers and scientists because fewer translators and checkers would be required.
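One possible shape of such a lightweight, human- and machine-readable header is sketched below (a minimal Python illustration; the "# key: value" convention and the field names are assumptions for illustration, not a published standard):

    import csv, io

    document = """\
    # title: Hourly stream temperature
    # time_standard: ISO 8601 (UTC)
    # variable: temperature_degC
    time,temperature_degC
    2015-07-01T00:00:00Z,14.2
    2015-07-01T01:00:00Z,13.9
    """

    header, rows = {}, []
    for line in io.StringIO(document):
        line = line.strip()
        if line.startswith("#"):                  # machine-parsable metadata
            key, _, value = line[1:].partition(":")
            header[key.strip()] = value.strip()
        elif line:
            rows.append(line)                     # plain CSV for spreadsheets

    data = list(csv.DictReader(rows))
    print(header["time_standard"], data[0]["temperature_degC"])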
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, J; Shi, F; Hrycushko, B
2015-06-15
Purpose: For tandem and ovoid (T&O) HDR brachytherapy in our clinic, it is required that the planning physicist manually capture ∼10 images during planning, perform a secondary dose calculation and generate a report, combine them into a single PDF document, and upload it to a record-and-verify system to prove to an independent plan checker that the case was planned correctly. Not only does this slow down the already time-consuming clinical workflow, the PDF document also limits the number of parameters that can be checked. To solve these problems, we have developed a web-based automatic quality assurance (QA) program. Methods: We set up a QA server accessible through a web interface. A T&O plan and CT images are exported as DICOM-RT files and uploaded to the server. The software checks 13 geometric features, e.g. whether the dwell positions are reasonable, and 10 dosimetric features, e.g. secondary dose calculations via the TG-43 formalism and D2cc to critical structures. A PDF report is automatically generated with errors and potential issues highlighted. It also contains images showing important geometric and dosimetric aspects to prove the plan was created following standard guidelines. Results: The program has been clinically implemented in our clinic. In each of the 58 T&O plans we tested, a 14-page QA report was automatically generated. It took ∼45 sec to export the plan and CT images and ∼30 sec to perform the QA tests and generate the report. In contrast, our manual QA document preparation took on average ∼7 minutes under optimal conditions and up to 20 minutes when mistakes were made during the document assembly. Conclusion: We have tested the efficiency and effectiveness of an automated process for treatment plan QA of HDR T&O cases. This software was shown to improve the workflow compared to our conventional manual approach.
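The overall architecture, a battery of independent checks run against an uploaded plan and collected into a report, can be sketched as follows (a minimal Python illustration; the check names, plan fields and dose limit are hypothetical, not the clinic's actual 23 checks):

    def check_dwell_positions(plan):
        ok = all(0.0 <= p <= plan["applicator_length_mm"]
                 for p in plan["dwell_positions_mm"])
        return "dwell positions within applicator", ok

    def check_bladder_d2cc(plan):
        ok = plan["bladder_d2cc_gy"] <= 6.5   # hypothetical per-fraction limit
        return "bladder D2cc below limit", ok

    def run_qa(plan, checks):
        """Run every check and collect pass/fail verdicts for the report."""
        return [(name, "PASS" if ok else "FAIL")
                for name, ok in (check(plan) for check in checks)]

    plan = {"applicator_length_mm": 60.0,
            "dwell_positions_mm": [5.0, 10.0, 15.0],
            "bladder_d2cc_gy": 5.9}
    for name, verdict in run_qa(plan, [check_dwell_positions, check_bladder_d2cc]):
        print(f"{verdict}: {name}")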
Structural Embeddings: Mechanization with Method
NASA Technical Reports Server (NTRS)
Munoz, Cesar; Rushby, John
1999-01-01
The most powerful tools for analysis of formal specifications are general-purpose theorem provers and model checkers, but these tools provide scant methodological support. Conversely, those approaches that do provide a well-developed method generally have less powerful automation. It is natural, therefore, to try to combine the better-developed methods with the more powerful general-purpose tools. An obstacle is that the methods and the tools often employ very different logics. We argue that methods are separable from their logics and are largely concerned with the structure and organization of specifications. We propose a technique called structural embedding that allows the structural elements of a method to be supported by a general-purpose tool, while substituting the logic of the tool for that of the method. We have found this technique quite effective and we provide some examples of its application. We also suggest how general-purpose systems could be restructured to support this activity better.
Secure Transaction Protocol for CEPS Compliant EPS in Limited Connectivity Environment
NASA Astrophysics Data System (ADS)
Devane, Satish; Phatak, Deepak
Common Electronic Purse Specification (CEPS), used by European countries, elaborately defines the transaction between a customer's CEP card and a merchant's point of sale (POS) terminal. However, it merely defines the specification for transferring the transactions between the Merchant and the Merchant Acquirer (MA). This paper proposes a novel approach by introducing an entity, the mobile merchant acquirer (MMA), which is a trusted agent of the MA and principally works on the man-in-the-middle concept, but facilitates remote twofold mutual authentication and secure transaction transfer between the Merchant and the MA through the MMA. This approach removes the bottleneck of connectivity issues between the Merchant and the MA in a limited connectivity environment. The proposed protocol ensures the confidentiality, integrity and money atomicity of the transaction batch. The proposed protocol has been verified for correctness by Spin, a model checker, and its security properties have been verified by AVISPA.
The Stylist: A Pascal Program for Analyzing Prose Style
1987-06-01
words from various periods of English literature, using a primitive tabulating device that spit out reels of paper. His results, however, proved little ... LITERATURE REVIEW When I first conceived of The Stylist, I believed that a "style checker" was a completely original idea. Little did I know that major... PC-Style, however, had little to recommend itself besides this feature. It relies upon a readability formula. It also attempts some
2011-09-28
CAFE Foundation Hangar Boss Mike Fenn waves the speed competition checkered flag for the Taurus G4 aircraft during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Thursday, Sept. 29, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
2011-09-27
The checkered flag is waved as the PhoEnix aircraft crosses the finish line of the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Aspect: A Formal Specification Language for Detecting Bugs
1992-06-01
the Aspect state from Chapter 6 and, below it, the definition of the approximating state used by the checker. The additional component Multilocs marks... stages. First, each collection object in Multilocs is expanded into a set of objects whose dependency and value sets are subsets of those of the... Multilocs × Prelocs; Env = Var → PLoc × PSource; Store = Loc × Aspect → Val × PSource; Val = Unknown + PLoc; Aspect = PlainAspect + Pointer + Collection
NASA Astrophysics Data System (ADS)
Beck, Faith R.; Lind, R. Paul; Smith, James A.
2018-04-01
Novel fuels are part of the nationwide effort to reduce the enrichment of uranium for energy production. Performance of such fuels is determined by irradiating their surfaces. To test irradiated samples, the instrumentation must operate remotely. The plate checker used in this experiment at Idaho National Laboratory (INL) performs non-destructive testing on fuel rod and plate geometries with two different types of sensors: eddy current probes and digital thickness gauges. The sensors measure oxide growth and total sample thickness on research fuels, respectively. Sensor measurement accuracy is crucial because even 10 microns of error is significant when determining the viability of an experimental fuel. One parameter known to affect the eddy current and thickness gauge sensors is temperature. Since both sensor accuracies depend on the ambient temperature of the system, the plate checker has been characterized for these sensitivities. The manufacturer of the digital gauge probes has noted a rather large coefficient of thermal expansion for their linear scale. It should also be noted that the accuracy of the digital gauge probes is specified at 20°C, which is approximately 7°C cooler than the average hot-cell temperature. In this work, the effect of temperature on the eddy current and digital gauge probes is studied, and thickness measurements are given as empirical functions of temperature.
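An empirical temperature characterization of the kind described above might, in its simplest form, be a linear fit of gauge drift against ambient temperature (a minimal Python sketch; the readings and the resulting coefficients are hypothetical, not INL's calibration data):

    import numpy as np

    temps_c = np.array([20.0, 22.0, 24.0, 26.0, 28.0])        # ambient temperature
    readings_um = np.array([1500.0, 1500.8, 1501.7, 1502.5, 1503.3])  # same gauge block

    slope, intercept = np.polyfit(temps_c, readings_um, 1)    # drift in um per deg C

    def corrected(reading_um, temp_c, ref_temp_c=20.0):
        """Refer a thickness reading back to the 20 C specification temperature."""
        return reading_um - slope * (temp_c - ref_temp_c)

    print(f"drift = {slope:.3f} um/degC, corrected = {corrected(1502.5, 26.0):.1f} um")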
Formal analysis and evaluation of the back-off procedure in IEEE802.11P VANET
NASA Astrophysics Data System (ADS)
Jin, Li; Zhang, Guoan; Zhu, Xiaojun
2017-07-01
The back-off procedure is one of the media access control technologies in the 802.11P communication protocol. It plays an important role in avoiding message collisions and allocating channel resources. Formal methods are effective approaches for studying the performance of communication systems. In this paper, we establish a discrete time model for the back-off procedure. We use Markov Decision Processes (MDPs) to model the non-deterministic and probabilistic behaviors of the procedure, and use the probabilistic computation tree logic (PCTL) language to express different properties, which ensure that the discrete time model performs its basic functionality. Based on the model and the PCTL specifications, we study the effect of the contention window length on the number of senders in the neighborhood of given receivers, and on the station's expected cost required by the back-off procedure to successfully send packets. The variation of the window length may increase or decrease the maximum probability of correct transmissions within a time contention unit. We propose to use the PRISM model checker to describe our proposed back-off procedure for the IEEE 802.11P protocol in vehicle networks, and define different probabilistic property formulas to automatically verify the model and derive numerical results. The obtained results are helpful for justifying the values of the time contention unit.
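The kind of property studied above, how the contention-window length affects the chance that a transmission succeeds within a contention unit, can be approximated by simulation (a minimal Python Monte Carlo sketch; the paper instead analyses an MDP exactly with PRISM, and the slot model and sender count here are assumptions):

    import random

    def success_probability(n_senders, cw, trials=100_000):
        """Estimate P(exactly one station picks the earliest back-off slot)."""
        wins = 0
        for _ in range(trials):
            slots = [random.randrange(cw) for _ in range(n_senders)]
            if slots.count(min(slots)) == 1:   # unique winner => no collision
                wins += 1
        return wins / trials

    for cw in (8, 16, 32, 64):                 # candidate contention windows
        print(cw, round(success_probability(n_senders=10, cw=cw), 3))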
Characterization of a digital camera as an absolute tristimulus colorimeter
NASA Astrophysics Data System (ADS)
Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual
2003-01-01
An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSC) which allows them to be used as tele-colorimeters with CIE-XYZ color output, in cd/m2. The spectral characterization consists of the calculation of the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute tristimulus values CIE-XYZ (in cd/m2) under variable and unknown spectroradiometric conditions. Thus, at the first stage, a gray balance is applied over the RGB digital data to convert them into RGB relative colorimetric values. At the second stage, an algorithm of luminance adaptation vs. lens aperture is inserted in the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the DSC color analysis accuracy indexes, both in a raw state and with the corrections from a linear model of color correction, were evaluated using the Pointer'86 color reproduction index with the unrelated Hunt'91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
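A linear color-correction stage of the kind mentioned above is commonly fit by least squares over chart patches (a minimal Python sketch; the patch values below are placeholders, not measured ColorChecker data):

    import numpy as np

    rgb = np.array([[0.32, 0.21, 0.16],   # gray-balanced camera RGB, one row per patch
                    [0.57, 0.48, 0.41],
                    [0.21, 0.30, 0.47],
                    [0.90, 0.89, 0.85]])
    xyz = np.array([[11.0,  9.8,  6.5],   # measured tristimulus values (cd/m2)
                    [38.1, 40.2, 36.9],
                    [17.9, 19.1, 43.2],
                    [87.0, 91.3, 95.5]])

    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)  # 3x3 correction matrix
    print(rgb @ M)                                 # predicted XYZ per patch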
Design and analysis of DNA strand displacement devices using probabilistic model checking
Lakin, Matthew R.; Parker, David; Cardelli, Luca; Kwiatkowska, Marta; Phillips, Andrew
2012-01-01
Designing correct, robust DNA devices is difficult because of the many possibilities for unwanted interference between molecules in the system. DNA strand displacement has been proposed as a design paradigm for DNA devices, and the DNA strand displacement (DSD) programming language has been developed as a means of formally programming and analysing these devices to check for unwanted interference. We demonstrate, for the first time, the use of probabilistic verification techniques to analyse the correctness, reliability and performance of DNA devices during the design phase. We use the probabilistic model checker prism, in combination with the DSD language, to design and debug DNA strand displacement components and to investigate their kinetics. We show how our techniques can be used to identify design flaws and to evaluate the merits of contrasting design decisions, even on devices comprising relatively few inputs. We then demonstrate the use of these components to construct a DNA strand displacement device for approximate majority voting. Finally, we discuss some of the challenges and possible directions for applying these methods to more complex designs. PMID:22219398
Implementation and Performance Analysis of Parallel Assignment Algorithms on a Hypercube Computer.
1987-12-01
coupled processors because of the degree of interaction between processors imposed by the global memory [HwB84]. Another sub-class of MIMD... interaction between the individual processors [MuA87]. Many of the commercial MIMD computers available today are loosely coupled [HwB84]. 2.1.3 The Hypercube... Alpha-beta is a method usually employed in the solution of two-person zero-sum games like chess and checkers [Qui87]. The basic approach of the alpha
2011-09-27
CAFE Foundation Hangar Boss Mike Fenn waves the checkered flag as aircraft pass the finish line of the miles per gallon (MPG) flight during the 2011 Green Flight Challenge, sponsored by Google, at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. on Tuesday, Sept. 27, 2011. NASA and the Comparative Aircraft Flight Efficiency (CAFE) Foundation are having the challenge with the goal to advance technologies in fuel efficiency and reduced emissions with cleaner renewable fuels and electric aircraft. Photo Credit: (NASA/Bill Ingalls)
Learning a Health Knowledge Graph from Electronic Medical Records.
Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David
2017-07-20
Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
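The noisy-OR model named above combines per-disease activation probabilities with a leak term, P(symptom | diseases) = 1 - (1 - leak) * prod over present diseases of (1 - p_i). A minimal Python sketch (the probabilities are hypothetical, not the learned parameters):

    def noisy_or(present_diseases, activation, leak=0.01):
        """P(symptom present | set of present diseases) under noisy-OR."""
        prob_absent = 1.0 - leak
        for disease in present_diseases:
            prob_absent *= 1.0 - activation.get(disease, 0.0)
        return 1.0 - prob_absent

    # Hypothetical activation probabilities for the symptom "cough".
    activation = {"pneumonia": 0.70, "asthma": 0.40, "gerd": 0.10}
    print(noisy_or({"pneumonia"}, activation))            # one cause
    print(noisy_or({"pneumonia", "asthma"}, activation))  # causes combine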
Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol
NASA Technical Reports Server (NTRS)
Huang, Xiaowan; Singh, Anu; Smolka, Scott A.
2010-01-01
We use the UPPAAL model checker for Timed Automata to verify the Timing-Sync time-synchronization protocol for sensor networks (TPSN). The TPSN protocol seeks to provide network-wide synchronization of the distributed clocks in a sensor network. Clock-synchronization algorithms for sensor networks such as TPSN must be able to perform arithmetic on clock values to calculate clock drift and network propagation delays. They must be able to read the value of a local clock and assign it to another local clock. Such operations are not directly supported by the theory of Timed Automata. To overcome this formal-modeling obstacle, we augment the UPPAAL specification language with the integer clock derived type. Integer clocks, which are essentially integer variables that are periodically incremented by a global pulse generator, greatly facilitate the encoding of the operations required to synchronize clocks as in the TPSN protocol. With this integer-clock-based model of TPSN in hand, we use UPPAAL to verify that the protocol achieves network-wide time synchronization and is devoid of deadlock. We also use the UPPAAL Tracer tool to illustrate how integer clocks can be used to capture clock drift and resynchronization during protocol execution.
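The integer-clock idea described above can be sketched outside UPPAAL as follows (a minimal Python illustration; the drift value and pulse loop are assumptions intended only to show why readable, assignable clock values help with drift and resynchronization):

    class IntegerClock:
        def __init__(self, drift_per_tick=0):
            self.value = 0
            self.drift = drift_per_tick       # extra ticks gained per pulse

        def pulse(self):                      # driven by a global pulse generator
            self.value += 1 + self.drift

    parent, child = IntegerClock(), IntegerClock(drift_per_tick=1)
    for _ in range(100):
        parent.pulse()
        child.pulse()

    offset = child.value - parent.value       # arithmetic on clock values
    child.value = parent.value                # assignment: resynchronize
    print(offset, child.value == parent.value)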
Modeling biological pathway dynamics with timed automata.
Schivo, Stefano; Scholma, Jetse; Wanders, Brend; Urquidi Camacho, Ricardo A; van der Vet, Paul E; Karperien, Marcel; Langerak, Rom; van de Pol, Jaco; Post, Janine N
2014-05-01
Living cells are constantly subjected to a plethora of environmental stimuli that require integration into an appropriate cellular response. This integration takes place through signal transduction events that form tightly interconnected networks. The understanding of these networks requires capturing their dynamics through computational support and models. ANIMO (Analysis of Networks with Interactive Modeling) is a tool that enables the construction and exploration of executable models of biological networks, helping to derive hypotheses and to plan wet-lab experiments. The tool is based on the formalism of Timed Automata, which can be analyzed via the UPPAAL model checker. Thanks to Timed Automata, we can provide a formal semantics for the domain-specific language used to represent signaling networks. This enforces precision and uniformity in the definition of signaling pathways, contributing to the integration of isolated signaling events into complex network models. We propose an approach to discretization of reaction kinetics that allows us to efficiently use UPPAAL as the computational engine to explore the dynamic behavior of the network of interest. A user-friendly interface hides the use of Timed Automata from the user, while keeping the expressive power intact. Abstraction to single-parameter kinetics speeds up construction of models that remain faithful enough to provide meaningful insight. The resulting dynamic behavior of the network components is displayed graphically, allowing for an intuitive and interactive modeling experience.
Testing First-Order Logic Axioms in AutoCert
NASA Technical Reports Server (NTRS)
Ahn, Ki Yung; Denney, Ewen
2009-01-01
AutoCert [2] is a formal verification tool for machine generated code in safety critical domains, such as aerospace control code generated from MathWorks Real-Time Workshop. AutoCert uses Automated Theorem Provers (ATPs) [5] based on First-Order Logic (FOL) to formally verify safety and functional correctness properties of the code. These ATPs try to build proofs based on user-provided domain-specific axioms, which can be arbitrary First-Order Formulas (FOFs). These axioms are the most crucial part of the trusted base, since proofs can be submitted to a proof checker, removing the need to trust the prover, and AutoCert itself plays the part of checking the code generator. However, formulating axioms correctly (i.e. precisely as the user had really intended) is non-trivial in practice. The challenge of axiomatization arises along several dimensions. First, the domain knowledge has its own complexity. AutoCert has been used to verify mathematical requirements on navigation software that carries out various geometric coordinate transformations involving matrices and quaternions. Axiomatic theories for such constructs are complex enough that mistakes are not uncommon. Second, adjusting axioms for ATPs can add even more complexity. The axioms frequently need to be modified in order to have them in a form suitable for use with ATPs. Such modifications tend to obscure the axioms further. Third, assessing the validity of the axioms from the output of existing ATPs is very hard since theorem provers typically do not give any examples or counterexamples.
Performance of rapid test kits to assess household coverage of iodized salt.
Gorstein, Jonathan; van der Haar, Frits; Codling, Karen; Houston, Robin; Knowles, Jacky; Timmer, Arnold
2016-10-01
The main indicator adopted to track universal salt iodization has been the coverage of adequately iodized salt in households. Rapid test kits (RTK) have been included in household surveys to test the iodine content in salt. However, laboratory studies of their performance have concluded that RTK are reliable only to distinguish between the presence and absence of iodine in salt, but not to determine whether salt is adequately iodized. The aim of the current paper was to examine the performance of RTK under field conditions and to recommend their most appropriate use in household surveys. Standard performance characteristics of the ability of RTK to detect the iodine content in salt at 0 mg/kg (salt with no iodine), 5 mg/kg (salt with any added iodine) and 15 mg/kg ('adequately' iodized salt) were calculated. Our analysis employed the agreement rate (AR) as a preferred metric of RTK performance. Setting/Subjects: Twenty-five data sets from eighteen population surveys which assessed household iodized salt by both the RTK and a quantitative method (i.e. titration or WYD Checker) were obtained from Asian (nineteen data sets), African (five) and European (one) countries. In detecting iodine in salt at 0 mg/kg, the RTK had an AR >90 % in eight of twenty-three surveys, while eight surveys had an AR <90 %. The RTK is not suited for assessment of adequately iodized salt coverage. Quantitative assessment, such as by titration or WYD Checker, is necessary for estimates of adequately iodized salt coverage.
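The agreement rate used above is simply the fraction of samples on which the kit's yes/no reading matches the quantitative classification at a given cut-off (a minimal Python sketch; the sample data are hypothetical):

    def agreement_rate(rtk_positive, titration_mg_kg, cutoff_mg_kg):
        """Fraction of samples where the RTK reading agrees with titration."""
        agree = sum(1 for rtk, t in zip(rtk_positive, titration_mg_kg)
                    if rtk == (t >= cutoff_mg_kg))
        return agree / len(rtk_positive)

    rtk_positive    = [True, True, False, True, False, True]
    titration_mg_kg = [22.0, 14.0,  0.0,  4.0,  2.0, 31.0]

    print(agreement_rate(rtk_positive, titration_mg_kg, cutoff_mg_kg=5.0))   # any iodine
    print(agreement_rate(rtk_positive, titration_mg_kg, cutoff_mg_kg=15.0))  # adequately iodized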
Real reproduction and evaluation of color based on BRDF method
NASA Astrophysics Data System (ADS)
Qin, Feng; Yang, Weiping; Yang, Jia; Li, Hongning; Luo, Yanlin; Long, Hongli
2013-12-01
It is difficult to truly reproduce the original color of targets in different illuminating environments using traditional methods. A function that can reconstruct the characteristics of reflection at every point on the surface of a target is therefore urgently required to improve the authenticity of color reproduction; this function is known as the Bidirectional Reflectance Distribution Function (BRDF). A method of color reproduction based on BRDF measurement is introduced in this paper. Radiometry is combined with colorimetric theories to measure the irradiance and radiance of the GretagMacbeth 24-patch ColorChecker using a PR-715 Radiation Spectrophotometer from PHOTO RESEARCH, Inc., USA. The BRDF and BRF (Bidirectional Reflectance Factor) values of every color patch corresponding to the reference area are calculated from the irradiance and radiance, and the color tristimulus values of the 24 ColorChecker patches are thereby reconstructed. The results reconstructed by the BRDF method are compared with values calculated from the reflectance using the PR-715; finally, the chromaticity coordinates in color space and the color difference between the two are analyzed. The experimental results show that the average color difference and sample standard deviation between the method proposed in this paper and the traditional reflectance-based reconstruction method are 2.567 and 1.3049, respectively. Theoretical and experimental analysis indicates that color reproduction based on the BRDF describes the color information of an object in hemisphere space more completely than the reflectance alone. The method proposed in this paper is effective and feasible for research on chromaticity reproduction.
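The per-wavelength quantities involved can be sketched as follows, using the standard relations BRDF ≈ L/E for a single measurement geometry and BRF = π·BRDF relative to an ideal Lambertian reference (a minimal Python sketch; all spectra are placeholders and the x-bar values are coarse approximations, not the paper's measurements):

    import numpy as np

    wavelengths = np.arange(400, 701, 50)                               # nm
    radiance   = np.array([0.8, 1.1, 1.6, 1.9, 1.7, 1.2, 0.9])          # W/(m2 sr nm)
    irradiance = np.array([3.0, 4.2, 5.5, 6.0, 5.6, 4.4, 3.1])          # W/(m2 nm)
    x_bar      = np.array([0.02, 0.05, 0.10, 0.45, 1.05, 0.60, 0.10])   # CIE x-bar, coarse

    brdf = radiance / irradiance          # 1/sr, per wavelength
    brf  = np.pi * brdf                   # relative to ideal Lambertian reflector

    X = np.trapz(radiance * x_bar, wavelengths)   # un-normalized tristimulus X
    print(brf.round(2), round(X, 1))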
Physics beyond the Standard Model
NASA Astrophysics Data System (ADS)
Lach, Theodore
2011-04-01
Recent discoveries of the excited states of the Bs** meson, along with the discovery of the omega-b-minus, have brought into popular acceptance the concept of orbiting quarks predicted by the Checker Board Model (CBM) 14 years ago. Back then the concept of orbiting quarks was not fashionable. Recent estimates of the velocities of these quarks inside the proton and neutron are in excess of 90% of the speed of light, also in agreement with the CBM. Still, a 2D structure of the nucleus has not been accepted, nor has it been proven wrong. The CBM predicts that the masses of the up and dn quarks are 237.31 MeV and 42.392 MeV respectively, and suggests that a lighter generation of quarks, u and d, make up a different generation that forms the light mesons. The CBM also predicts that the T' and B' quarks exist and are not as massive as might be expected (this would make it a five-generation world, in conflict with the Standard Model). The details of the CB model and the prediction of quark masses can be found at: http://checkerboard.dnsalias.net/ (1). T.M. Lach, Checkerboard Structure of the Nucleus, Infinite Energy, Vol. 5, issue 30, (2000). (2). T.M. Lach, Masses of the Sub-Nuclear Particles, nucl-th/0008026, @http://xxx.lanl.gov/.
The 5th Generation model of Particle Physics
NASA Astrophysics Data System (ADS)
Lach, Theodore
2009-05-01
The Standard Model of particle physics is able to account for all known HEP phenomena, yet it cannot predict the masses of the quarks or leptons, nor can it explain why they have their respective values. The Checker Board Model (CBM) predicts that there are 5 generations of quarks and leptons and shows a pattern in those masses: each set of three quarks or leptons (within adjacent generations or within a generation) is related by a geometric-mean relationship. A 2D structure of the nucleus can be imagined as a 2D plate spinning on its axis; for all practical purposes it would appear to be a 3D object. The masses of the hypothesized ``up'' and ``dn'' quarks determined by the CBM are 237.31 MeV and 42.392 MeV respectively. These new quarks, in addition to a lepton of 7.4 MeV, make up one of the missing generations. The details of this new particle physics model can be found at the web site: checkerboard.dnsalias.net. The only area where this theory conflicts with existing dogma is in the value of the mass of the Top quark. The particle found at Fermi Lab must be some sort of composite particle containing Top quarks.
Timsit, E; Dendukuri, N; Schiller, I; Buczinski, S
2016-12-01
Diagnosis of bovine respiratory disease (BRD) in beef cattle placed in feedlots is typically based on clinical illness (CI) detected by pen-checkers. Unfortunately, the accuracy of this diagnostic approach (namely, sensitivity [Se] and specificity [Sp]) remains poorly understood, in part due to the absence of a reference test for ante-mortem diagnosis of BRD. Our objective was to pool available estimates of CI's diagnostic accuracy for BRD diagnosis in feedlot beef cattle while adjusting for the inaccuracy in the reference test. The presence of lung lesions (LU) at slaughter was used as the reference test. A systematic review of the literature was conducted to identify research articles comparing CI detected by pen-checkers during the feeding period to LU at slaughter. A hierarchical Bayesian latent-class meta-analysis was used to model test accuracy. This approach accounted for imperfections of both tests as well as the within- and between-study variability in the accuracy of CI. Furthermore, it also predicted the Se(CI) and Sp(CI) for future studies. Conditional independence between CI and LU was assumed, as these two tests are not based on similar biological principles. Seven studies were included in the meta-analysis. Estimated pooled Se(CI) and Sp(CI) were 0.27 (95% Bayesian credible interval: 0.12-0.65) and 0.92 (0.72-0.98), respectively, whereas estimated pooled Se(LU) and Sp(LU) were 0.91 (0.82-0.99) and 0.67 (0.64-0.79). Predicted Se(CI) and Sp(CI) for future studies were 0.27 (0.01-0.96) and 0.92 (0.14-1.00), respectively. The wide credible intervals around the predicted Se(CI) and Sp(CI) estimates indicated considerable heterogeneity among studies, which suggests that the pooled Se(CI) and Sp(CI) are not generalizable to individual studies. In conclusion, CI appeared to have poor Se but high Sp for BRD diagnosis in feedlots. Furthermore, considerable heterogeneity among studies highlighted an urgent need to standardize BRD diagnosis in feedlots. Copyright © 2016 Elsevier B.V. All rights reserved.
Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei
This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA, to address the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation experiment results showed that the proposed algorithm realized real-time display of 1280 x 720@60Hz HD video and, using the X-Rite color checker as the standard colors, reduced the average color difference by about 30% compared with that before color correction.
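The bilinear interpolation named above weighs the four nearest pixels by fractional distance (a minimal Python sketch of the arithmetic only; the paper's version runs in FPGA logic):

    def bilinear(img, x, y):
        """Sample a grayscale image (list of rows) at fractional coordinates."""
        x0, y0 = int(x), int(y)
        x1 = min(x0 + 1, len(img[0]) - 1)
        y1 = min(y0 + 1, len(img) - 1)
        dx, dy = x - x0, y - y0
        top    = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
        bottom = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
        return top * (1 - dy) + bottom * dy

    img = [[10, 20],
           [30, 40]]
    print(bilinear(img, 0.5, 0.5))  # 25.0: the average of the four neighbours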
2001-10-22
This 39 by 47 km ASTER sub-scene was acquired on May 20, 2000 and shows an area along the west side of the Cascade Range in west central Oregon. Bands 4, 3, and 2 were combined as red, green, and blue. In this composite, snow appears blue, forests are green, and clear-cut areas are orange-pink. The magnitude of logging operations is quite obvious, appearing as a checker board pattern. The image is centered at 44.6 degrees north latitude, 122.2 degrees west longitude. http://photojournal.jpl.nasa.gov/catalog/PIA11165
Dr. William O. Coffee and his absorption cure for cataract.
Ferry, A P
1989-08-01
Dr. William O. Coffee was an ophthalmologist who conducted an office and mail-order practice in the Midwest from the 1880s until 1927. His main stock in trade was a self-discovered absorption cure for a variety of ocular diseases, with particular emphasis on the medical cure of cataracts. Dr. Coffee's career was a checkered one, marked by dubious credentials, exuberant self-promotion, unlikely and exaggerated claims of medical successes, plagiarism, and rejection by the medical "establishment." Certain parallels may be drawn between his activities and some currently observed practices in ophthalmology.
High temperature solar thermal technology
NASA Technical Reports Server (NTRS)
Leibowitz, L. P.; Hanseth, E. J.; Peelgren, M. L.
1980-01-01
Some advanced technology concepts under development for high-temperature solar thermal energy systems to achieve significant energy cost reductions and performance gains and thus promote the application of solar thermal power technology are presented. Consideration is given to the objectives, current efforts and recent test and analysis results in the development of high-temperature (950-1650 C) ceramic receivers, thermal storage module checker stoves, and the use of reversible chemical reactions to transport collected solar energy. It is pointed out that the analysis and testing of such components will accelerate the commercial deployment of solar energy.
Simultaneous adaptation to size, distance, and curvature underwater.
Vernoy, M W
1989-02-01
Perceptual adaptation to underwater size, distance, and curvature distortion was measured for four different adaptation conditions. These conditions consisted of (a) playing Chinese checkers underwater, (b) swimming with eyes open underwater, (c) viewing a square underwater, and (d) an air control. Significant adaptation to underwater distortions was recorded in all except the air control condition. In the viewing square condition a positive correlation between size and distance adaptation was noted. It was suggested that adaptation to curvature may have mediated the positive correlation. Possible applications for the training of divers are discussed.
Launching "the evolution of cooperation".
Axelrod, Robert
2012-04-21
This article describes three aspects of the author's early work on the evolution of cooperation. First, it explains how the idea for a computer tournament for the iterated Prisoner's Dilemma was inspired by artificial intelligence research on computer checkers and computer chess. Second, it shows how the vulnerability of simple reciprocity to misunderstanding or misimplementation can be eliminated with the addition of some degree of generosity or contrition. Third, it recounts the unusual collaboration between the author, a political scientist, and William D. Hamilton, an evolutionary biologist. Copyright © 2011 Elsevier Ltd. All rights reserved.
A Model for Assessing the Liability of Seemingly Correct Software
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.
1991-01-01
Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.
Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs
Bass, Ellen J.
2011-01-01
Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930
Zhang, Yongquan; Tang, Huiming; Li, Changdong; Lu, Guiying; Cai, Yi; Zhang, Junrong; Tan, Fulin
2018-01-01
The physical model test of landslides is important for studying landslide structural damage, and parameter measurement is key in this process. To meet the measurement requirements for deep displacement in landslide physical models, an automatic flexible inclinometer probe with good coupling and large deformation capacity was designed. The flexible inclinometer probe consists of several gravity acceleration sensing units that are protected and positioned by silicon encapsulation, and all the units are connected to an RS-485 communication bus. By sensing the two-axis tilt angle, the direction and magnitude of the displacement for a measurement unit can be calculated; the overall displacement is then accumulated over all units, integrated from bottom to top in turn. In the conversion from angle to displacement, two spline interpolation methods are introduced to correct and resample the data: one interpolates the displacement after conversion, and the other interpolates the angle before conversion. Compared with the result read from checkered paper, the latter proves to have a better effect, with the additional condition that the displacement curve be shifted up by half the length of a unit. The flexible inclinometer is verified with respect to its principle and arrangement by a laboratory physical model test, and the test results are highly consistent with the actual deformation of the landslide model. PMID:29342902
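The angle-then-displacement reconstruction described above can be sketched as follows (a minimal Python illustration; the unit length, tilt profile and spline grid are hypothetical, and the paper's correction details may differ):

    import numpy as np
    from scipy.interpolate import CubicSpline

    depths_mm = np.array([0, 500, 1000, 1500, 2000])   # sensing-unit positions
    tilt_deg  = np.array([0.2, 0.5, 1.4, 2.1, 2.6])    # one axis of sensed tilt

    # Interpolate the *angle* profile before conversion (the better of the two
    # spline strategies reported above), then resample on a finer grid.
    spline = CubicSpline(depths_mm, tilt_deg)
    fine_depths = np.arange(0, 2000 + 1, 100.0)
    fine_tilt = np.radians(spline(fine_depths))

    segment = np.diff(fine_depths, prepend=0.0)              # segment lengths
    displacement = np.cumsum(segment * np.sin(fine_tilt))    # bottom-to-top sum
    print(displacement[-1])                                  # offset at the top, mm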
Universal Verification Methodology Based Register Test Automation Flow.
Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu
2016-05-01
In today's SoC designs, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate models into a test-bench environment because it requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe a register specification in the IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time in verifying the functionality of registers.
Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2016-01-01
Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used, as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has the same characteristics (mean, variance and power spectral density) as the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.
Statistical modelling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1991-01-01
During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.
NASA Astrophysics Data System (ADS)
Hardman, M.; Brodzik, M. J.; Long, D. G.
2017-12-01
Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Historical versions of the gridded passive microwave data sets were produced as flat binary files described in human-readable documentation. This format is error-prone and makes it difficult to reliably include all processing and provenance. Funded by NASA MEaSUREs, we have completely reprocessed the gridded data record that includes SMMR, SSM/I-SSMIS and AMSR-E. The new Calibrated Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) files are self-describing. Our approach to the new data set was to create netCDF4 files that use standard metadata conventions and best practices to incorporate file-level, machine- and human-readable contents, geolocation, processing and provenance metadata. We followed the flexible and adaptable Climate and Forecast (CF-1.6) Conventions with respect to their coordinate conventions and map projection parameters. Additionally, we made use of Attribute Conventions for Dataset Discovery (ACDD-1.3) that provided file-level conventions with spatio-temporal bounds that enable indexing software to search for coverage. Our CETB files also include temporal coverage and spatial resolution in the file-level metadata for human-readability. We made use of the JPL CF/ACDD Compliance Checker to guide this work. We tested our file format with real software, for example, netCDF Command-line Operators (NCO) power tools for unlimited control on spatio-temporal subsetting and concatenation of files. The GDAL tools understand the CF metadata and produce fully-compliant geotiff files from our data. ArcMap can then reproject the geotiff files on-the-fly and work with other geolocated data such as coastlines, with no special work required. We expect this combination of standards and well-tested interoperability to significantly improve the usability of this important ESDR for the Earth Science community.
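A self-describing file of the kind described above can be written with the netCDF4 Python library (a minimal illustration with made-up variable, grid and attribute values, not the CETB product definition):

    import numpy as np
    from netCDF4 import Dataset

    with Dataset("tb_example.nc", "w", format="NETCDF4") as nc:
        # ACDD-style file-level (global) metadata for discovery
        nc.Conventions = "CF-1.6, ACDD-1.3"
        nc.title = "Example gridded brightness temperature"
        nc.time_coverage_start = "2003-01-01T00:00:00Z"
        nc.time_coverage_end = "2003-01-01T12:00:00Z"

        nc.createDimension("y", 2)
        nc.createDimension("x", 2)
        tb = nc.createVariable("TB", "f4", ("y", "x"))
        # CF-style variable metadata
        tb.long_name = "brightness temperature"
        tb.units = "K"
        tb[:] = np.array([[250.1, 251.3], [249.8, 252.0]], dtype="f4")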
Pinus ponderosa: A checkered past obscured four species.
Willyard, Ann; Gernandt, David S; Potter, Kevin; Hipkins, Valerie; Marquardt, Paula; Mahalovich, Mary Frances; Langer, Stephen K; Telewski, Frank W; Cooper, Blake; Douglas, Connor; Finch, Kristen; Karemera, Hassani H; Lefler, Julia; Lea, Payton; Wofford, Austin
2017-01-01
Molecular genetic evidence can help delineate taxa in species complexes that lack diagnostic morphological characters. Pinus ponderosa (Pinaceae; subsection Ponderosae) is recognized as a problematic taxon: plastid phylogenies of exemplars were paraphyletic, and mitochondrial phylogeography suggested at least four subdivisions of P. ponderosa. These patterns have not been examined in the context of other Ponderosae species. We hypothesized that putative intraspecific subdivisions might each represent a separate taxon. We genotyped six highly variable plastid simple sequence repeats in 1903 individuals from 88 populations of P. ponderosa and related Ponderosae (P. arizonica, P. engelmannii, and P. jeffreyi). We used multilocus haplotype networks and discriminant analysis of principal components to test clustering of individuals into genetically and geographically meaningful taxonomic units. There are at least four distinct plastid clusters within P. ponderosa that roughly correspond to the geographic distribution of mitochondrial haplotypes. Some geographic regions have intermixed plastid lineages, and some mitochondrial and plastid boundaries do not coincide. Based on relative distances to other species of Ponderosae, these clusters diagnose four distinct taxa. Newly revealed geographic boundaries of four distinct taxa (P. benthamiana, P. brachyptera, P. scopulorum, and a narrowed concept of P. ponderosa) do not correspond completely with taxonomies. Further research is needed to understand their morphological and nuclear genetic makeup, but we suggest that resurrecting originally published species names would more appropriately reflect the taxonomy of this checkered classification than their current treatment as varieties of P. ponderosa. © 2017 Willyard et al. Published by the Botanical Society of America. This work is licensed under a Creative Commons public domain license (CC0 1.0).
Sala, Giovanni; Gobet, Fernand
2017-12-01
It has been proposed that playing chess enables children to improve their ability in mathematics. These claims have recently been evaluated in a meta-analysis (Sala & Gobet, 2016, Educational Research Review, 18, 46-57), which indicated a significant effect in favor of the groups playing chess. However, the meta-analysis also showed that most of the reviewed studies used a poor experimental design (in particular, they lacked an active control group). We ran two experiments that used a three-group design including both an active and a passive control group, with a focus on mathematical ability. In the first experiment (N = 233), a group of third and fourth graders was taught chess for 25 hours and tested on mathematical problem-solving tasks. Participants also filled in a questionnaire assessing their metacognitive ability for mathematics problems. The group playing chess was compared to an active control group (playing checkers) and a passive control group. The three groups showed no statistically significant differences in mathematical problem-solving or metacognitive abilities in the posttest. The second experiment (N = 52) used broadly the same design, but the Oriental game of Go replaced checkers in the active control group. While the chess-treated group and the passive control group slightly outperformed the active control group in mathematical problem solving, the differences were not statistically significant. No differences were found with respect to metacognitive ability. These results suggest that the effects (if any) of chess instruction, when rigorously tested, are modest and that such interventions should not replace the traditional curriculum in mathematics.
Model Driven Engineering with Ontology Technologies
NASA Astrophysics Data System (ADS)
Staab, Steffen; Walter, Tobias; Gröner, Gerd; Parreiras, Fernando Silva
Ontologies constitute formal models of some aspect of the world that may be used for drawing interesting logical conclusions even for large models. Software models capture relevant characteristics of a software artifact to be developed, yet most often these software models have limited formal semantics, or the underlying (often graphical) modeling language varies from case to case in a way that makes it hard, if not impossible, to fix its semantics. In this contribution, we survey the use of ontology technologies for software modeling in order to carry over advantages from ontology technologies to the software modeling domain. It turns out that ontology-based metamodels constitute a core means for exploiting expressive ontology reasoning in the software modeling domain while remaining flexible enough to accommodate the varying needs of software modelers.
Generic domain models in software engineering
NASA Technical Reports Server (NTRS)
Maiden, Neil
1992-01-01
This paper outlines three research directions related to domain-specific software development: (1) reuse of generic models for domain-specific software development; (2) empirical evidence to determine these generic models, namely elicitation of mental knowledge schema possessed by expert software developers; and (3) exploitation of generic domain models to assist modelling of specific applications. It focuses on knowledge acquisition for domain-specific software development, with emphasis on tool support for the most important phases of software development.
Proving the correctness of the flight director program EADIFD, volume 1
NASA Technical Reports Server (NTRS)
Lee, F. J.; Maurer, W. D.
1977-01-01
EADIFD is written in symbolic assembly language for execution on the C4000 airborne computer. It is a subprogram of an aircraft navigation and guidance program and is used to generate pitch and roll command signals for use in terminal airspace. The proof of EADIFD was carried out with an inductive assertion method supported by two tools: a verification condition generator and a source-language-independent proof checker. With the specifications provided by NASA, EADIFD was proved correct. The program is guaranteed to terminate and contains no instructions that can modify it under any conditions.
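The inductive assertion method can be illustrated on a toy loop: state a loop invariant, then show it holds on entry and is preserved by the loop body, so that on exit it implies the postcondition. The sketch below merely samples random states to exercise those proof obligations; it is a testing-style illustration of what a verification condition generator emits, not a proof, and it bears no relation to the actual EADIFD code.

```python
# Illustrative sketch of the inductive assertion method on a toy loop:
#   {i == 0 and total == 0}  while i < n: total += xs[i]; i += 1  {total == sum(xs)}
# Invariant I: 0 <= i <= len(xs) and total == sum(xs[:i])
import random

def invariant(xs, i, total):
    return 0 <= i <= len(xs) and total == sum(xs[:i])

def check_preservation(trials=1000):
    for _ in range(trials):
        xs = [random.randint(-5, 5) for _ in range(random.randint(0, 8))]
        i = random.randint(0, len(xs))
        total = sum(xs[:i])                      # pick a state satisfying I
        if i < len(xs):                          # loop guard holds
            total2, i2 = total + xs[i], i + 1    # execute the loop body once
            assert invariant(xs, i2, total2)     # I must be preserved
    # On exit the guard is false (i == len(xs)), so I implies the postcondition.
    print("invariant preserved in", trials, "sampled states")

check_preservation()
```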
Research in mathematical theory of computation. [computer programming applications
NASA Technical Reports Server (NTRS)
Mccarthy, J.
1973-01-01
Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (logic for computable functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special purpose and domain independent proving procedures with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of these ideas in the first order proof checker.
Earth observation photo taken by JPL with the Shuttle Imaging Radar-A
NASA Technical Reports Server (NTRS)
1981-01-01
Earth observation photo taken by the Jet Propulsion Laboratory (JPL) with the Shuttle Imaging Radar-A (SIR-A). This image shows a 50 by 100 kilometer (30 by 60 mile) area of the Imperial Valley in Southern California and neighboring Mexico. The checkered patterns represent agricultural fields where different types of crops in different stages of growth are cultivated. The very bright areas are (top left to lower right) the U.S. towns of Brawley, Imperial, El Centro, Calexico and the Mexican city of Mexicali. The bright L-shaped line (upper right) is the All-American Canal.
Earth observations taken from shuttle orbiter Columbia
1995-11-02
STS073-745-055 (2 November 1995) --- This color infrared photograph highlights the different vegetation zones on the island of Maui, Hawaii. The dark red tropical forests grow on the steep volcanic slopes on the north side of the island, and the fertile lowlands support large sugar cane plantations, which appear as a red and black checkered pattern. The Wailuku and Kahului area near the center on the north shore of the island was formerly a whaling center. Much of the eastern part of the island is Haleakala National Park, including the spectacular Haleakala Crater (under clouds).
Is your ED a medical department or a business? Survey says...both.
2009-07-01
Taking a solid business-like approach to the management of your ED involves--but is certainly not limited to--getting a handle on revenues and expenses. Here are a few strategies some ED managers say help them run a tighter, and better, ship: Have a clinical audit specialist review charts, and have a clerical person "check the checker." Use a "cultural fit" interview with prospective staff members to ensure you're on the same page when it comes to service. Develop a charge structure with numerical values for clinical activities and services, to help ensure optimal reimbursement.
DEVELOPMENT OF A PORTABLE SOFTWARE LANGUAGE FOR PHYSIOLOGICALLY-BASED PHARMACOKINETIC (PBPK) MODELS
The PBPK modeling community has had a long-standing problem with modeling software compatibility. The numerous software packages used for PBPK models are, at best, minimally compatible. This creates problems ranging from model obsolescence due to software support discontinuation...
Cost Estimation of Software Development and the Implications for the Program Manager
1992-06-01
...Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive... function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and... Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome...
Software reliability models for critical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, H.; Pham, M.
This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.
Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru
2011-05-01
Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of the simulation models built. In order to break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model checking technique (which saves resources compared to the offline approach), with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by the analysis of the underlying model for the neuronal cell fate decision model (ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our efficient online model checker, MIRACH 1.0, together with parameter estimation, we ran 20 million simulation runs and located 57 parameter sets for the 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles, without much manual intervention. To evaluate the robustness of these 57 parameter sets, we ran another 20 million simulation runs using different magnitudes of noise. Among these models, one emerged as the most reasonable and robust owing to its high stability against these stochastic noises. Our simulation results provide interesting biological findings that could guide future wet-lab experiments.
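The contrast between online and offline checking can be sketched simply: an online checker evaluates the property while the trace is being generated and abandons a run as soon as the property is violated, rather than storing the full trace for later analysis. The rule and simulator below are hypothetical stand-ins, not MIRACH or the HFPNe model.

```python
# Minimal sketch of on-the-fly (online) checking of a bound property during
# simulation; violating runs are abandoned early, saving time and memory.
import random

def simulate_steps(n_steps):
    """Hypothetical stand-in for an HFPNe-style simulator: yields one state per step."""
    level = 1.0
    for t in range(n_steps):
        level += random.uniform(-0.2, 0.3)
        yield t, level

def run_with_online_check(n_steps=1000, upper_bound=50.0):
    # Property (stand-in for a biological rule): level stays below upper_bound.
    for t, level in simulate_steps(n_steps):
        if level >= upper_bound:
            return False, t      # violation: stop immediately, keep no full trace
    return True, n_steps

ok, steps = run_with_online_check()
print("property satisfied" if ok else f"violated at step {steps}")
```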
Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model
NASA Astrophysics Data System (ADS)
Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna
2017-06-01
Evaluation of software quality is an important aspect of controlling and managing software. By such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also provides a detailed comparison of the proposed model with existing usability models.
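A hierarchical usability model of this kind can be approximated, in crisp (non-fuzzy) form, as factor scores rolled up through a weighted hierarchy; the sketch below is that simplification, and the factor names, weights, and scores are invented for illustration, not the paper's taxonomy.

```python
# Toy sketch: roll up usability factor scores (0..1) through a two-level
# weighted hierarchy. A fuzzy variant would replace the crisp scores with
# membership functions; names and weights here are illustrative only.
hierarchy = {
    "effectiveness": ({"task_completion": 0.6, "error_rate": 0.4}, 0.4),
    "efficiency":    ({"time_on_task": 1.0}, 0.3),
    "satisfaction":  ({"survey_score": 1.0}, 0.3),
}
scores = {"task_completion": 0.9, "error_rate": 0.7,
          "time_on_task": 0.6, "survey_score": 0.8}

usability = 0.0
for factor, (subfactors, weight) in hierarchy.items():
    factor_score = sum(scores[s] * w for s, w in subfactors.items())
    usability += weight * factor_score
print(f"predicted usability: {usability:.2f}")   # higher values rank better
```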
Evaluating the accuracy of technicians and pharmacists in checking unit dose medication cassettes.
Ambrose, Peter J; Saya, Frank G; Lovett, Larry T; Tan, Sandy; Adams, Dale W; Shane, Rita
2002-06-15
The accuracy rates of board-registered pharmacy technicians and pharmacists in checking unit dose medication cassettes in the inpatient setting at two separate institutions were examined. Cedars-Sinai Medical Center and Long Beach Memorial Medical Center, both in Los Angeles County, petitioned the California State Board of Pharmacy to approve a waiver of the California Code of Regulations to conduct an experimental program to compare the accuracy of unit dose medication cassettes checked by pharmacists with that of cassettes checked by trained, certified pharmacy technicians. The study consisted of three parts: assessing pharmacist baseline checking accuracy (Phase I), developing a technician-training program and certifying technicians who completed the didactic and practical training (Phase II), and evaluating the accuracy of certified technicians checking unit dose medication cassettes as a daily function (Phase III). Twenty-nine pharmacists and 41 technicians (3 of whom were pharmacy interns) participated in the study. Of the technicians, all 41 successfully completed the didactic and practical training, 39 successfully completed the audits and became certified checkers, and 2 (including 1 of the interns) did not complete the certification audits because they were reassigned to another work area or had resigned. In Phase II, the observed accuracy rate and its lower confidence limit exceeded the predetermined minimum requirement of 99.8% for a certified checker. The mean accuracy rates for technicians were identical at the two institutions (p = 1.0). The difference in mean accuracy rates between pharmacists (99.52%; 95% confidence interval [CI] 99.44-99.58%) and technicians (99.89%; 95% CI 99.87-99.90%) was significant (p < 0.0001). Inpatient technicians who had been trained and certified in a closely supervised program that incorporated quality assurance mechanisms could safely and accurately check unit dose medication cassettes filled by other technicians.
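The certification criterion here hinges on both an observed accuracy rate and its lower confidence limit exceeding 99.8%. A minimal sketch of that computation follows, using the Wilson score interval; the dose and error counts are invented, not the study's data.

```python
# Sketch: accuracy rate and Wilson score lower confidence bound compared
# against a 99.8% certification threshold. Counts are invented.
from math import sqrt

def wilson_lower(successes, n, z=1.96):      # z = 1.96 for ~95% confidence
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom

checked, errors = 20000, 15                  # doses audited, errors missed
acc = (checked - errors) / checked
lower = wilson_lower(checked - errors, checked)
print(f"accuracy {acc:.4%}, lower 95% bound {lower:.4%}")
print("meets 99.8% requirement:", acc >= 0.998 and lower >= 0.998)
```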
Roodbeen, Ruud T J; Schelleman-Offermans, Karen; Lemmens, Paul H H M
2016-06-01
Age limits are effective in reducing alcohol- and tobacco-related harm, however, their effectiveness depends on the extent to which they are complied with. This study aimed to investigate the effectiveness of different age verification systems (AVSs) implemented by 400 Dutch supermarkets on requesting a valid age verification (ID) and on sellers' compliance. A mixed method design was used. Compliance was measured by 800 alcohol and tobacco purchase attempts by 17-year-old mystery shoppers. To analyze the effectiveness of AVSs, logistic regression analyses were performed. Insight into facilitating and hindering factors in the purchase process was obtained by 13 interviews with supermarket managers. Only a tendency toward a positive effect of the presence of the keying-on-date-of-birth AVS or ID swiper/checker was found on ID request for both alcohol and tobacco purchase attempts. The use of the keying-on-date-of-birth AVS or ID swiper/checker significantly increased the odds for compliance after an ID was requested, for both alcohol and tobacco purchase attempts. Managers indicated that ID requests and compliance could be facilitated by providing cashiers with sufficient managerial support, technical support, and regular training about the purchase process and use of the AVS. The usage of AVSs calculating and confirming whether the customer reached the legal purchase age for cashiers significantly increases the odds for cashiers to comply with age limits of alcohol and tobacco. Future research should gain insight into how usage of effective AVSs can be improved and explore the feasibility of implementation and effectiveness in other outlets. Copyright © 2016 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
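Logistic regression analyses of the kind described regress a binary compliance outcome on the AVS type and report odds ratios. The sketch below fits such a model on synthetic purchase-attempt data with statsmodels; the variable names, effect size, and data are invented, not the study's.

```python
# Sketch: logistic regression of compliance on AVS presence, fitted to
# synthetic data (the real study used 800 mystery-shopper attempts).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800
avs = rng.integers(0, 2, n)                  # 1 = keying/ID-swiper AVS present
logit_p = -1.0 + 1.2 * avs                   # assumed positive AVS effect
complied = rng.random(n) < 1 / (1 + np.exp(-logit_p))
df = pd.DataFrame({"complied": complied.astype(int), "avs": avs})

model = smf.logit("complied ~ avs", data=df).fit(disp=False)
print(model.params)
print("odds ratio for AVS:", np.exp(model.params["avs"]))  # > 1: higher odds of compliance
```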
Study of factors affecting the appearance of colors under microscopes
NASA Astrophysics Data System (ADS)
Zakizadeh, Roshanak; Martinez-Garcia, Juan; Raja, Kiran B.; Siakidis, Christos
2013-11-01
The variation of colors in microscopy systems can be quite critical for some users. To address this problem, a study is conducted to analyze how different factors, such as the size of the sample, the intensity of the microscope's light source, and the characteristics of the material (such as chroma and saturation), can affect the color appearance through the eyepiece of the microscope. To study the changes in colors under these factors, the spectral reflectance of 24 colors of the GretagMacbeth Classic ColorChecker® and Mini ColorChecker®, placed under a Nikon ECLIPSE MA200 microscope® using dark-field and bright-field illumination (which result in different intensity levels), is measured using a spectroradiometer placed in front of the eyepiece of the microscope. The results are compared with the original data from N. Ohta. The evaluation is done by observing the shift in colors in the CIE 1931 Chromaticity Diagram and the CIELAB space, and by applying a wide set of color-difference formulas, namely: CIELAB, CMC, BFD, CIE94, CIEDE2000, DIN99d and DIN99b. Furthermore, to emphasize the color regions in which the highest difference is observed, the authors obtained results from another microscope, an Olympus SZX10®; in this case the measurement is done by mounting the spectroradiometer on the camera port of the microscope. The experiment leads to some interesting results, among which are the consistency of the highest difference observed across the different factors and the way a change in saturation of samples of the same hue can affect the results.
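Several of the color-difference formulas listed are built on CIELAB coordinates; the simplest, CIE76 ΔE*ab, is just the Euclidean distance in CIELAB space. A minimal sketch, with invented patch measurements:

```python
# Sketch: CIE76 color difference (Euclidean distance in CIELAB) between a
# reference ColorChecker patch and its appearance through the microscope.
# The Lab values below are invented for illustration.
from math import sqrt

def delta_e_76(lab1, lab2):
    return sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

reference = (51.0, 49.0, -14.0)   # (L*, a*, b*) of a patch, reference data
measured  = (48.5, 46.0, -10.5)   # same patch measured via the eyepiece
print(f"dE*ab = {delta_e_76(reference, measured):.2f}")
# A dE*ab around 2-3 is often taken as roughly a just-noticeable difference.
```

The later formulas in the list (CIE94, CIEDE2000, DIN99d, etc.) refine this distance with weighting functions, but the input is the same pair of CIELAB coordinates.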
Davis, Mary; Jessee, Renee; Close, Matthew; Fu, Xiangping; Settlage, Robert; Wang, Guoqing; Cline, Mark A; Gilbert, Elizabeth R
2015-12-01
Snakes often undergo periods of prolonged fasting and, under certain conditions, can survive years without food. Despite this unique phenomenon, there are relatively few reports of the physiological adaptations to fasting in snakes. At post-prandial day 1 (fed) or 21 (fasting), brain, liver, and adipose tissues were collected from juvenile checkered garter snakes (Thamnophis marcianus). There was greater glycerol-3-phosphate dehydrogenase (G3PDH)-specific activity in the liver of fasted than fed snakes (P=0.01). The mRNA abundance of various fat metabolism-associated factors was measured in brain, liver, and adipose tissue. Lipoprotein lipase (LPL) mRNA was greater in fasted than fed snakes in the brain (P=0.04). Adipose triglyceride lipase (ATGL; P=0.006) mRNA was greater in the liver of fasted than fed snakes. In adipose tissue, expression of peroxisome proliferator-activated receptor (PPAR)γ (P=0.01), and fatty acid binding protein 4 (P=0.03) was greater in fed than fasted snakes. Analysis of adipocyte morphology revealed that cross-sectional area (P=0.095) and diameter (P=0.27) were not significantly different between fed and fasted snakes. Results suggest that mean adipocyte area can be preserved during fasting by dampening lipid biosynthesis while not changing rates of lipid hydrolysis. In the liver, however, extensive lipid remodeling may provide energy and lipoproteins to maintain lipid structural integrity during energy restriction. Because the duration of fasting was not sufficient to change adipocyte size, results suggest that the liver is important as a short-term provider of energy in the snake. Copyright © 2015 Elsevier Inc. All rights reserved.
Validation and Verification of LADEE Models and Software
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2013-01-01
The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real-time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational science models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.
jFuzz: A Concolic Whitebox Fuzzer for Java
NASA Technical Reports Server (NTRS)
Jayaraman, Karthick; Harvison, David; Ganesh, Vijay; Kiezun, Adam
2009-01-01
We present jFuzz, an automatic testing tool for Java programs. jFuzz is a concolic whitebox fuzzer built on the NASA Java PathFinder, an explicit-state Java model checker and framework for developing reliability and analysis tools for Java. Starting from a seed input, jFuzz automatically and systematically generates inputs that exercise new program paths. jFuzz uses a combination of concrete and symbolic execution, and constraint solving. Time spent on solving constraints can be significant. We implemented several well-known optimizations and name-independent caching, which aggressively normalizes the constraints to reduce the number of calls to the constraint solver. We present preliminary results obtained with these optimizations, and demonstrate the effectiveness of jFuzz in creating good test inputs. jFuzz is intended to be a research testbed for investigating new testing and analysis techniques based on concrete and symbolic execution. The source code of jFuzz is available as part of the NASA Java PathFinder.
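Name-independent caching of the kind described can be sketched as canonicalizing variable names in a constraint before using it as a cache key, so that structurally identical queries hit the same entry. The string representation below is a toy, not JPF's or jFuzz's actual constraint format.

```python
# Toy sketch of name-independent constraint caching: variables are renamed
# to v0, v1, ... in order of first appearance, so structurally identical
# constraints share one cache entry and one solver call.
import re

solver_calls = 0
cache = {}

def normalize(constraint: str) -> str:
    mapping = {}
    def rename(match):
        name = match.group(0)
        mapping.setdefault(name, f"v{len(mapping)}")
        return mapping[name]
    return re.sub(r"[a-zA-Z_]\w*", rename, constraint)

def solve(constraint: str) -> bool:
    """Stand-in for a real constraint-solver call."""
    global solver_calls
    key = normalize(constraint)
    if key not in cache:
        solver_calls += 1
        cache[key] = True        # pretend the solver reported SAT
    return cache[key]

solve("x > 0 & y < x")
solve("a > 0 & b < a")           # structurally identical: cache hit
print("solver calls:", solver_calls)   # -> 1
```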
Gallium arsenide pilot line for high performance components
NASA Astrophysics Data System (ADS)
1990-01-01
The Gallium Arsenide Pilot Line for High Performance Components (Pilot Line III) is to develop a facility for the fabrication of GaAs logic and memory chips. The first thirty months of this contract are now complete, and this report covers the period from March 27 through September 24, 1989. Similar to the PT-2M SRAM function for memories, the six logic circuits of PT-2L and PT-2M have served their functions as stepping stones toward the custom, standard cell, and cell array logic circuits. All but one of these circuits was right first time; the remaining circuit had a layout error due to a bug in the design rule checker that has since been fixed. The working devices all function over the full temperature range from -55 to 125 C. They all comfortably meet the 200 MHz requirement. They do not solidly conform to the required input and output voltage levels, particularly Vih. It is known that these circuits were designed with the older design models and that they came from an era where the DFET thresholds were often not on target.
Trait-based perspectives of leadership.
Zaccaro, Stephen J
2007-01-01
The trait-based perspective of leadership has a long but checkered history. Trait approaches dominated the initial decades of scientific leadership research. Later, they were disdained for their inability to offer clear distinctions between leaders and nonleaders and for their failure to account for situational variance in leadership behavior. Recently, driven by greater conceptual, methodological, and statistical sophistication, such approaches have again risen to prominence. However, their contributions are likely to remain limited unless leadership researchers who adopt this perspective address several fundamental issues. The author argues that combinations of traits and attributes, integrated in conceptually meaningful ways, are more likely to predict leadership than additive or independent contributions of several single traits. Furthermore, a defining core of these dominant leader trait patterns reflects a stable tendency to lead in different ways across disparate organizational domains. Finally, the author summarizes a multistage model that specifies some leader traits as having more distal influences on leadership processes and performance, whereas others have more proximal effects that are integrated with, and influenced by, situational parameters. ((c) 2007 APA, all rights reserved)
NASA Astrophysics Data System (ADS)
Lach, Theodore
2017-01-01
The Checkerboard model (CBM) of the nucleus has been in the public domain for over 20 years. Over those years it has been described by nuclear and particle physicists as 'cute', 'the Bohr model of the nucleus', and 'reminiscent of the Eightfold Way'. It has also been ridiculed as numerology, laughed at, and even worse. In 2000 the theory was taken to the next level by attempting to explain why the masses of its 'up' and 'dn' quarks were significantly heavier than the SM 'u' and 'd' quarks. This resulted in a paper published on arXiv.nucl-th/0008026 in 2000, predicting 5 generations of quarks, with each quark and negative lepton related to the others by a simple geometric mean. The CBM predicts that the radii of the elementary particles are proportional to the cube roots of their masses. These ratios were realized to correspond to Pythagorean musical intervals (octave, perfect 5th, perfect 4th, plus two others). Therefore each generation can be explained by a simple right triangle and the height to its hypotenuse. Notice that the height of a right triangle breaks the hypotenuse into two line segments; the geometric mean of those two segments equals the length of the height of this characteristic triangle. The CBM theory therefore now predicts that the masses of all the elementary particles are proportional to the cubes of their radii, so that the mass density of all elementary particles (and perhaps black holes too) is a constant of nature.
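The right-triangle relation invoked here is the standard geometric mean (altitude) theorem: the altitude to the hypotenuse equals the geometric mean of the two segments it creates. A quick numeric check of that purely geometric fact, independent of any of the physics claims:

```python
# Geometric mean (altitude) theorem: if the altitude of a right triangle
# divides the hypotenuse into segments p and q, then h = sqrt(p * q).
from math import sqrt, isclose

p, q = 4.0, 9.0                  # hypotenuse segments (arbitrary example)
h = sqrt(p * q)                  # altitude: geometric mean of p and q
print(h)                         # -> 6.0
# Consistency check: the legs are sqrt(p*(p+q)) and sqrt(q*(p+q)), and
# Pythagoras on the whole triangle must recover the hypotenuse p + q.
legs = sqrt(p * (p + q)), sqrt(q * (p + q))
assert isclose(legs[0] ** 2 + legs[1] ** 2, (p + q) ** 2)
```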
State-of-the-Art Resources (SOAR) for Software Vulnerability Detection, Test, and Evaluation
2014-07-01
...preclude in-depth analysis, and widespread use of a Software-as-a-Service (SaaS) model that limits data availability and application to DoD systems... provide mobile application analysis using a Software-as-a-Service (SaaS) model. In this case, any software to be analyzed must be sent to the... tools are only available through a SaaS model. The widespread use of a Software-as-a-Service (SaaS) model as a sole evaluation model limits data...
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.
2013-12-01
The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), further individual data uploading from the community contributes a wealth of easily accessible, rich datasets. Previously, uploading data to the MagIC database required the use of an Excel spreadsheet on either a Mac or PC. The new method of uploading data utilizes an HTML5 web interface where the only computer requirement is a modern browser. This web interface highlights all errors discovered in the dataset at once, rather than the iterative error-checking process found in the previous Excel spreadsheet data checker. As a web service, the community will always have easy access to the most up-to-date and bug-free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system where the data from each contribution is displayed in tables similar to how the data is uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired; the saved search URL could then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, ARAI, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.
NASA Technical Reports Server (NTRS)
Kershaw, John
1990-01-01
The VIPER project has so far produced a formal specification of a 32 bit RISC microprocessor, an implementation of that chip in radiation-hard SOS technology, a partial proof of correctness of the implementation which is still being extended, and a large body of supporting software. The time has now come to consider what has been achieved and what directions should be pursued in the future. The most obvious lesson from the VIPER project was the time and effort needed to use formal methods properly. Most of the problems arose in the interfaces between different formalisms, e.g., between the (informal) English description and the HOL spec, between the block-level spec in HOL and the equivalent in ELLA needed by the low-level CAD tools. These interfaces need to be made rigorous or (better) eliminated. VIPER 1A (the latest chip) is designed to operate in pairs, to give protection against breakdowns in service as well as design faults. We have come to regard redundancy and formal design methods as complementary, the one to guard against normal component failures and the other to provide insurance against the risk of the common-cause failures which bedevil reliability predictions. Any future VIPER chips will certainly need improved performance to keep up with increasingly demanding applications. We have a prototype design (not yet specified formally) which includes 32 and 64 bit multiply, instruction pre-fetch, more efficient interface timing, and a new instruction to allow a quick response to peripheral requests. Work is under way to specify this device in MIRANDA, and then to refine the spec into a block-level design by top-down transformations. When the refinement is complete, a relatively simple proof checker should be able to demonstrate its correctness. This paper is presented in viewgraph form.
A UML-based metamodel for software evolution process
NASA Astrophysics Data System (ADS)
Jiang, Zuo; Zhou, Wei-Hong; Fu, Zhi-Tao; Xiong, Shun-Qing
2014-04-01
A software evolution process is a set of interrelated software processes under which the corresponding software evolves. This paper presents an object-oriented software evolution process metamodel (OO-EPMM), its abstract syntax, and formal OCL constraints on the metamodel. OO-EPMM can represent not only the software development process but also software evolution.
Tornado occurrences related to overshooting cloud-top heights as determined from ATS pictures
NASA Technical Reports Server (NTRS)
Fujita, T. T.
1972-01-01
A sequence of ATS 3 pictures covering the development history of large anvil clouds near Salina, Kansas was enlarged by NASA into 8X negatives, which were used to obtain the best quality prints by mixing scan lines in 8 steps to minimize checker-board patterns. These images provided the best possible resolution, permitting us to compute the heights of overshooting tops above environmental anvil levels from cloud shadow relationships, using the techniques of lunar topographic mapping. Of 39 heights computed, 6 were within 15 miles of the reported positions of 3 tornadoes. It was found that the tornado proximity tops were mostly less than 5000 ft, with one exception of 7000 ft, suggesting that tornadoes are most likely to occur when the overshooting height decreases. In order to simulate surface vortices induced by cloud-scale rotation and updraft fields, a laboratory model was constructed. The model experiment showed that the rotation or updraft field alone induces a surface vortex, but their combination prevents the formation of the surface vortex. This research leads to the conclusion that the determination of cloud-top topography and its time variation is of extreme importance in predicting severe local storms for a period of 0 to 6 hours.
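Height estimation from cloud shadows reduces, in its simplest form, to trigonometry: the top's height above the anvil is the shadow length on the anvil times the tangent of the solar elevation angle. The sketch below uses invented numbers and ignores viewing geometry, parallax, and image scale, all of which the full photogrammetric technique must handle.

```python
# Simplified shadow-height relation: h = L * tan(solar elevation).
# Numbers are illustrative, not taken from the ATS 3 measurements.
from math import tan, radians

shadow_length_ft = 12000.0        # shadow cast on the anvil surface
solar_elevation_deg = 22.0        # sun elevation at image time
height_ft = shadow_length_ft * tan(radians(solar_elevation_deg))
print(f"overshooting top ~{height_ft:.0f} ft above the anvil")
```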
Raz, Amir
2011-12-01
An early form of psychotherapy, hypnosis has been tarnished by a checkered history: stage shows, movies and cartoons that perpetuate specious myths; and individuals who unabashedly write 'hypnotist' on their business cards. Hypnosis is in the twilight zone alongside a few other mind-body exemplars. Although scientists are still unraveling how hypnosis works, little is mystical about this powerful top-down process, which is an important tool in the armamentarium of the cognitive scientist seeking to unlock topical conundrums. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
Password selection has long been a difficult issue; traditionally, passwords are either assigned by the computer or chosen by the user. When the computer does the assignment, the passwords are often hard to remember; when the user makes the selection, the passwords are often easy to guess. This paper describes a technique, and a mechanism, to allow users to select passwords which to them are easy to remember but to others would be very difficult to guess. The technique is site, user, and group compatible, and allows rapid changing of constraints imposed upon the password. Although experience with this technique is limited, it appears to have much promise.
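A proactive checker in this spirit tests a candidate password against site-configurable constraints at selection time, so users keep the freedom to choose memorable passwords while guessable ones are rejected. The rules below are invented examples of such constraints, not the paper's actual mechanism.

```python
# Sketch of a proactive password checker with swappable, site-specific rules.
# The rule set is illustrative; real deployments would also test against
# dictionaries and per-user information (login name, real name, and so on).
def too_short(pw, user):      return len(pw) < 10
def contains_login(pw, user): return user.lower() in pw.lower()
def all_one_class(pw, user):  return pw.isalpha() or pw.isdigit()

RULES = [
    (too_short,      "must be at least 10 characters"),
    (contains_login, "must not contain the login name"),
    (all_one_class,  "must mix character classes"),
]

def check(password: str, username: str):
    """Return the list of complaints; an empty list means accepted."""
    return [msg for rule, msg in RULES if rule(password, username)]

print(check("mbishop123", "mbishop"))             # -> one complaint (login name)
print(check("correct horse battery", "mbishop"))  # -> [] (accepted)
```

Because the rule list is ordinary data, the constraints can be changed rapidly per site, user, or group, which is the flexibility the abstract emphasizes.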
Expert opinions on optimal enforcement of minimum purchase age laws for tobacco.
Levy, D T; Chaloupka, F; Slater, S
2000-05-01
A questionnaire on how youth access laws should be enforced was sent to 20 experts who had administered and/or evaluated a youth access enforcement program. Respondents agreed on the need for a high level of retail compliance, checkers representative of the community, checks at least twice per year, a graduated penalty structure with license revocation, and bans on self-service and vending machines. Respondents indicated the need for research on the effects of ID use, frequency of checks, penalty structures, and the effects on smoking rates of youth access policies alone and in conjunction with other tobacco control policies.
NASA Astrophysics Data System (ADS)
Toadere, Florin
2017-12-01
A spectral image processing algorithm is presented that allows the illumination of the scene with different illuminants together with the reconstruction of the scene's reflectance. A ColorChecker spectral image and the CIE A (warm light, 2700 K), D65 (cold light, 6500 K) and Cree TW Series LED T8 (4000 K) illuminants are employed for scene illumination. The illuminants used in the simulations have different spectra and, as a result of their illumination, the colors of the scene change. The influence of the illuminants on the reconstruction of the scene's reflectance is estimated. Demonstrative images and reflectances illustrating the operation of the algorithm are presented.
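Rendering a scene under a chosen illuminant amounts to multiplying each pixel's reflectance spectrum by the illuminant's spectral power distribution and integrating the result against the observer or sensor sensitivities. The sketch below shows that pipeline shape only; all spectra are coarse invented samples, not CIE tables or the paper's data.

```python
# Sketch of spectral rendering: radiance = illuminant SPD x reflectance,
# then integrate against sensitivity curves. Spectra are invented samples
# on a coarse 400-700 nm grid (50 nm steps), purely for illustration.
import numpy as np

wl = np.arange(400, 701, 50)                                   # wavelengths, nm
illuminant = np.array([0.4, 0.7, 0.9, 1.0, 1.0, 0.95, 0.9])    # daylight-ish SPD
reflectance = np.array([0.1, 0.1, 0.2, 0.5, 0.8, 0.9, 0.9])    # reddish patch
observer = np.array([                                          # placeholder curves
    [0.0, 0.0, 0.1, 0.3, 0.8, 1.0, 0.6],                       # R-like
    [0.0, 0.2, 0.7, 1.0, 0.6, 0.2, 0.0],                       # G-like
    [0.9, 1.0, 0.4, 0.1, 0.0, 0.0, 0.0],                       # B-like
])

radiance = illuminant * reflectance      # light leaving the patch
rgb = observer @ radiance                # integrate per channel
rgb /= observer @ illuminant             # normalize to the illuminant's white
print("rendered RGB:", np.round(rgb, 3))
```

Swapping in a different illuminant vector (CIE A versus D65 versus an LED SPD) changes the rendered colors, which is exactly the effect the abstract studies.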
Tree-oriented interactive processing with an application to theorem-proving, appendix E
NASA Technical Reports Server (NTRS)
Hammerslag, David; Kamin, Samuel N.; Campbell, Roy H.
1985-01-01
The concept of unstructured structure editing and ted, an editor for unstructured trees, is described. Ted is used to manipulate hierarchies of information in an unrestricted manner. The tool was implemented and applied to the problem of organizing formal proofs. As a proof management tool, it maintains the validity of a proof and its constituent lemmas independently from the methods used to validate the proof. It includes an adaptable interface which may be used to invoke theorem provers and other aids to proof construction. Using ted, a user may construct, maintain, and verify formal proofs using a variety of theorem provers, proof checkers, and formatters.
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
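As a baseline for the count-data regression problem described here, the sketch below fits an ordinary Poisson GLM to synthetic defect counts with statsmodels; the paper's max-margin Bayesian model is a different and more elaborate estimator, and the metrics and coefficients below are invented.

```python
# Baseline sketch: Poisson regression of defect counts on software metrics,
# fitted to synthetic data (metric names and coefficients are invented).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
loc = rng.uniform(0.1, 10.0, n)            # e.g. KLOC per module
complexity = rng.uniform(1.0, 20.0, n)     # e.g. cyclomatic complexity
lam = np.exp(0.2 + 0.15 * loc + 0.05 * complexity)
defects = rng.poisson(lam)

X = sm.add_constant(np.column_stack([loc, complexity]))
model = sm.GLM(defects, X, family=sm.families.Poisson()).fit()
print(model.params)                        # roughly recovers the coefficients
print("predicted defects for a 5 KLOC, complexity-10 module:",
      model.predict([[1.0, 5.0, 10.0]])[0])
```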
Modeling the Arden Syntax for medical decisions in XML.
Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung
2008-10-01
A new model expressing Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered to be optimized. When the first schema was finished, several MLMs in Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema, and reverse-parsed MLMs in ArdenML were checked using a public domain Arden Syntax checker. Two hundred seventy-seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse-parse yielded the original text version of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax-checking mechanisms: first, an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.
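The two-stage checking mechanism described here (validation against the XML schema, then transformation via an XSL style sheet) can be sketched with lxml; the file names below are hypothetical placeholders, not artifacts shipped with the paper.

```python
# Sketch: validate an ArdenML document against an XML schema, then apply an
# XSL style sheet. "ardenml.xsd", "my_mlm.xml", and "ardenml_to_html.xsl"
# are hypothetical file names.
from lxml import etree

schema = etree.XMLSchema(etree.parse("ardenml.xsd"))
doc = etree.parse("my_mlm.xml")

if schema.validate(doc):
    print("MLM is schema-valid")
    # Second stage: transform to HTML for side-by-side review with the
    # original text MLM.
    transform = etree.XSLT(etree.parse("ardenml_to_html.xsl"))
    print(str(transform(doc))[:200])
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```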
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
Dependability modeling and assessment in UML-based software development.
Bernardi, Simona; Merseguer, José; Petriu, Dorina C
2012-01-01
Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.
Software reliability models for fault-tolerant avionics computers and related topics
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1987-01-01
Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.
Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.
2012-01-01
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
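Zero-order phenotypes of erroneous action can be made concrete as transformations on a normative action sequence. The sketch below generates omission, repetition, and intrusion variants of a toy task; it conveys the flavor of, but is far simpler than, the task-model transformation the paper formalizes, and the action names are invented.

```python
# Toy sketch: generate Hollnagel-style zero-order erroneous-behavior variants
# (omission, repetition, intrusion) from a normative action sequence.
NORMATIVE = ["verify_dose", "position_patient", "arm_beam", "fire_beam"]
INTRUSIONS = ["open_door"]            # invented foreign action

def omissions(seq):
    return [seq[:i] + seq[i + 1:] for i in range(len(seq))]

def repetitions(seq):
    return [seq[:i + 1] + seq[i:] for i in range(len(seq))]

def intrusions(seq, actions):
    return [seq[:i] + [a] + seq[i:]
            for i in range(len(seq) + 1) for a in actions]

variants = (omissions(NORMATIVE) + repetitions(NORMATIVE)
            + intrusions(NORMATIVE, INTRUSIONS))
for v in variants[:3]:
    print(v)
print(len(variants), "erroneous variants generated")
```

Each such variant could then be composed with a formal system model and handed to a model checker to test whether the safety properties still hold.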
Software For Computing Reliability Of Other Software
NASA Technical Reports Server (NTRS)
Nikora, Allen; Antczak, Thomas M.; Lyu, Michael
1995-01-01
Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
Software-Engineering Process Simulation (SEPS) model
NASA Technical Reports Server (NTRS)
Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.
1992-01-01
The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.
Villamor Ordozgoiti, Alberto; Delgado Hito, Pilar; Guix Comellas, Eva María; Fernandez Sanchez, Carlos Manuel; Garcia Hernandez, Milagros; Lluch Canut, Teresa
2016-01-01
Information and Communications Technologies (ICT) in healthcare have increased the need to consider quality criteria through standardised processes. The aim of this study was to analyse the software quality evaluation models applicable to healthcare from the perspective of ICT purchasers. Through a systematic literature review with the keywords software, product, quality, evaluation and health, we selected and analysed 20 original research papers published from 2005 to 2016 in health science and technology databases. The results showed four main topics: non-ISO models, software quality evaluation models based on ISO/IEC standards, studies analysing software quality evaluation models, and studies analysing ISO standards for software quality evaluation. The models provide cost-efficiency criteria for specific software and improve use outcomes. The ISO/IEC 25000 standard is shown to be the most suitable for evaluating the quality of ICTs for healthcare use from the perspective of institutional acquisition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen
2012-01-01
Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
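Parametric models in this family typically compute a nominal effort as a power law in size and then scale it by multipliers derived from questionnaire answers about the environment and technology. The sketch below follows that shape with COCOMO-like, purely illustrative constants; none of the values are the calibrated JPL/DSN parameters.

```python
# Illustrative parametric cost model: effort = a * KLOC^b * product(multipliers).
# Constants and multiplier values are invented, not the calibrated DSN model.
from math import prod

a, b = 2.8, 1.1                        # nominal calibration (illustrative)
kloc = 32.0                            # estimated size in KLOC

# Multipliers keyed to prompted questionnaire responses (invented subset of
# the ~50 questions the abstract mentions):
multipliers = {
    "requirements_volatility_high": 1.25,
    "experienced_team":             0.85,
    "modern_tooling":               0.90,
}

effort_pm = a * kloc ** b * prod(multipliers.values())
print(f"estimated effort: {effort_pm:.1f} person-months")
```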
Quality Assessment of Medical Apps that Target Medication-Related Problems.
Loy, John Shiguang; Ali, Eskinder Eshetu; Yap, Kevin Yi-Lwern
2016-10-01
The advent of smartphones has enabled a plethora of medical apps for disease management. As of 2012, there were 40,000 health care-related mobile apps available in the market. Since most of these medical apps do not go through any stringent quality assessment, there is a risk of consumers being misinformed or misled by unreliable information. In this regard, apps that target medication-related problems (MRPs) are not an exception. There is little information on what constitutes quality in apps that target MRPs and how good the existing apps are. To develop a quality assessment tool for evaluating apps that target MRPs and assess the quality of such apps available in the major mobile app stores (iTunes and Google Play). The top 100 free and paid apps in the medical categories of iTunes and Google Play stores (total of 400 apps) were screened for inclusion in the final analysis. English language apps that targeted MRPs were downloaded on test devices to evaluate their quality. Apps intended for clinicians, patients, or both were eligible for evaluation. The quality assessment tool consisted of 4 sections (appropriateness, reliability, usability, privacy), which determined the overall quality of the apps. Apps that fulfilled the inclusion criteria were classified based on the presence of any 1 or more of the 5 features considered important for apps targeting MRPs (monitoring, interaction checker, dose calculator, medication information, medication record). Descriptive statistics and Mann-Whitney tests were used for analysis. Final analysis was based on 59 apps that fulfilled the study inclusion criteria. Apps with interaction checker (66.9%) and monitoring features (54.8%) had the highest and lowest overall qualities. Paid apps generally scored higher for usability than free apps (P = 0.006) but lower for privacy (P = 0.003). Half of the interaction checker apps were unable to detect interactions with herbal medications. Blood pressure and heart rate monitoring apps had the highest overall quality scores (67.7%), while apps that monitored visual, hearing, and temperature changes scored the lowest (35.5%). A quality assessment tool for evaluating medical apps targeting MRPs has been developed. Clinicians can use this tool to guide their assessments of medical apps that are appropriate for use in the health care setting. Although potentially useful apps were identified, many apps were found to have deficiencies in quality, among which were poor reliability scores for most of the apps. Continued assessments of the quality of apps targeting MRPs are recommended to ensure their usefulness for clinicians and patients. No outside funding supported this study. The authors have no conflicts of interest directly related to this study. Study concept and design were contributed by Loy and Yap. Loy collected the data and took the lead in data interpretation, along with Ali and Yap. The manuscript was primarily written by Loy, along with Yap, and revised primarily by Ali, along with Yap.
Open Source Molecular Modeling
Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan
2016-01-01
The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest-neighbor prediction model performance on the same data set.
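Analogy-based estimation in the k-nearest-neighbor style finds the k completed projects most similar to the new one in a normalized feature space and averages their actual efforts. The sketch below shows that mechanic; the project features and efforts are invented, not NASA data.

```python
# Sketch of analogy-based (k-nearest-neighbor) effort estimation.
# Historical project data are invented: (KLOC, team size) -> person-months.
import numpy as np

history = np.array([[10, 4], [25, 6], [120, 20], [60, 10], [15, 5]], float)
efforts = np.array([30.0, 80.0, 500.0, 210.0, 45.0])    # person-months

def estimate(new_project, k=2):
    # Min-max normalize features so KLOC does not dominate the distance.
    lo, hi = history.min(axis=0), history.max(axis=0)
    H = (history - lo) / (hi - lo)
    x = (np.asarray(new_project, float) - lo) / (hi - lo)
    nearest = np.argsort(np.linalg.norm(H - x, axis=1))[:k]
    return efforts[nearest].mean()

print(f"estimated effort: {estimate([40, 8]):.0f} person-months")
```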
Presenting an evaluation model of the trauma registry software.
Asadi, Farkhondeh; Paydar, Somayeh
2018-04-01
Trauma accounts for about 10% of deaths worldwide and is considered a global concern. This problem has led healthcare policy makers and managers to adopt basic strategies in this context. Trauma registries have an important and basic role in decreasing mortality and the disabilities caused by injuries resulting from trauma. Today, various software systems are designed for trauma registries. Evaluating this software improves management and increases the efficiency and effectiveness of these systems. Therefore, the aim of this study is to present an evaluation model for trauma registry software. The present study is applied research. In this study, general and specific criteria of trauma registry software were identified by reviewing the literature, including books, articles, scientific documents, authoritative websites, and related software in this domain. Based on these general and specific criteria and the related software, a model for evaluating trauma registry software was proposed. Based on the proposed model, a checklist was designed and its validity and reliability were evaluated. Using the Delphi technique, the model was presented to 12 experts and specialists. To analyze the results, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved by the experts and professionals, the final version of the evaluation model for trauma registry software was presented. The evaluation criteria of trauma registry software were presented in two groups: 1- general criteria, 2- specific criteria. General criteria of trauma registry software were classified into four main categories: 1- usability, 2- security, 3- maintainability, and 4- interoperability. Specific criteria were divided into four main categories: 1- data submission and entry, 2- reporting, 3- quality control, and 4- decision and research support. The model presented in this research introduces important general and specific criteria of trauma registry software and the subcriteria related to each main criterion separately. This model was validated by experts in this field. Therefore, it can be used as a comprehensive model and a standard evaluation tool for measuring the efficiency, effectiveness, and performance improvement of trauma registry software. Copyright © 2018 Elsevier B.V. All rights reserved.
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases for estimating software reliability, the number of remaining faults in software, and the software release timing. In this paper, we propose a new modeling approach for NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, we conclude that the proposed NHPP-based SRMs outperform the existing ones on many data sets in terms of goodness-of-fit and prediction performance.
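The substitution the abstract describes can be sketched in a few lines: a plain NHPP SRM has mean value function Lambda(t) = omega * F(t), where omega is the expected total fault content and F is the fault-detection time cdf; the equilibrium variant replaces F with F_e(t) = (1/mu) * integral_0^t (1 - F(x)) dx, where mu is the mean of F. The Python sketch below only illustrates that idea under assumed choices (a gamma F, made-up weekly fault counts, and invented names); it is not the authors' estimation procedure.

```python
# Minimal sketch of an equilibrium-distribution NHPP SRM; data are hypothetical.
import numpy as np
from scipy import integrate, optimize, stats

def equilibrium_cdf(cdf, mean, t):
    """F_e(t) = (1/mean) * integral_0^t (1 - F(x)) dx."""
    val, _ = integrate.quad(lambda x: 1.0 - cdf(x), 0.0, t)
    return val / mean

def mvf(t, omega, shape, scale):
    """Mean value function Lambda_e(t) = omega * F_e(t), with gamma F (assumed)."""
    cdf = lambda x: stats.gamma.cdf(x, shape, scale=scale)
    return omega * equilibrium_cdf(cdf, shape * scale, t)

# Hypothetical cumulative fault counts at the end of each test week.
weeks = np.arange(1.0, 11.0)
faults = np.array([4, 9, 13, 18, 21, 25, 27, 30, 31, 33], dtype=float)

params, _ = optimize.curve_fit(np.vectorize(mvf), weeks, faults,
                               p0=[40.0, 2.0, 3.0],
                               bounds=(1e-6, np.inf), maxfev=10000)
omega = params[0]
print(f"estimated total faults: {omega:.1f}, remaining: {omega - faults[-1]:.1f}")
```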
An information model for use in software management estimation and prediction
NASA Technical Reports Server (NTRS)
Li, Ningda R.; Zelkowitz, Marvin V.
1993-01-01
This paper describes the use of cluster analysis for determining the information model within collected software engineering development data at the NASA/GSFC Software Engineering Laboratory. We describe the Software Management Environment tool that allows managers to predict development attributes during early phases of a software project and the modifications we propose to allow it to develop dynamic models for better predictions of these attributes.
Predicting Software Suitability Using a Bayesian Belief Network
NASA Technical Reports Server (NTRS)
Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.
2005-01-01
The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
Program Instrumentation and Trace Analysis
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)
2002-01-01
Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This can be seen as part of a current trend toward analyzing real software systems instead of just their designs. It includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary, scalable technique to handle such large programs. Our interest is in the observation part of the equation: how much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically and to large full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness, and is directed towards detecting errors in programs, not towards proving correctness. One core element of JPaX is an instrumentation package that allows one to instrument Java bytecode files to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing aspect-oriented programming for Java, in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; rather, one writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting semantics is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in other executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race potential does not need to occur in order for it to be detected with these algorithms. This is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java.
The tool is currently being applied to a code base for controlling a spacecraft by the developers of that software in order to evaluate its applicability.
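As an illustration of the error pattern analysis described above, the following Python sketch applies an Eraser-style lockset refinement to a logged event trace. The event format, names, and data are hypothetical, not JPaX's actual log format, and a full implementation would also track a per-variable state machine as in the original Eraser algorithm.

```python
# Eraser-style lockset analysis over a hypothetical logged event trace.
from collections import defaultdict

# Each event: (thread, op, target); op is "acquire", "release", or "write".
trace = [
    ("T1", "acquire", "L"), ("T1", "write", "x"), ("T1", "release", "L"),
    ("T2", "write", "x"),   # write to x holding no lock -> race potential
]

held = defaultdict(set)   # locks currently held by each thread
candidates = {}           # candidate lockset per shared variable

for thread, op, target in trace:
    if op == "acquire":
        held[thread].add(target)
    elif op == "release":
        held[thread].discard(target)
    elif op == "write":
        # Refine the candidate lockset: keep only locks held on *every* access.
        if target not in candidates:
            candidates[target] = set(held[thread])
        else:
            candidates[target] &= held[thread]
        if not candidates[target]:
            print(f"data race potential on '{target}' (empty lockset)")
```

Note that the race potential is flagged even though no race actually occurred in this interleaving, which is what makes the technique scalable in practice.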
System and Software Reliability (C103)
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that products meet NASA requirements for reliability measurement. For the new models of the software component developed over the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability models. These could then be incorporated in a tool such as SMERFS'3. Such a tool with better models would greatly add value in assessing GSFC projects.
Are Earth System model software engineering practices fit for purpose? A case study.
NASA Astrophysics Data System (ADS)
Easterbrook, S. M.; Johns, T. C.
2009-04-01
We present some analysis and conclusions from a case study of the culture and practices of scientists at the Met Office and Hadley Centre working on the development of software for climate and Earth System models using the MetUM infrastructure. The study examined how scientists think about software correctness, prioritize their requirements in making changes, and develop a shared understanding of the resulting models. We conclude that highly customized techniques, driven strongly by scientific research goals, have evolved for verification and validation of such models. In a formal software engineering context these represent costly, but invaluable, software integration tests with considerable benefits. The software engineering practices seen also exhibit recognisable features of both agile and open source software development projects - self-organisation of teams consistent with a meritocracy rather than top-down organisation, extensive use of informal communication channels, and software developers who are generally also users and science domain experts. We draw some general conclusions on whether these practices work well, and what new software engineering challenges may lie ahead as Earth System models become ever more complex and petascale computing becomes the norm.
Students' Different Understandings of Class Diagrams
ERIC Educational Resources Information Center
Boustedt, Jonas
2012-01-01
The software industry needs well-trained software designers and one important aspect of software design is the ability to model software designs visually and understand what visual models represent. However, previous research indicates that software design is a difficult task for many students. This article reports empirical findings from a…
Model-based software process improvement
NASA Technical Reports Server (NTRS)
Zettervall, Brenda T.
1994-01-01
The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.
Increasing the reliability of ecological models using modern software engineering techniques
Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff
2009-01-01
Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...
Tips on Creating Complex Geometry Using Solid Modeling Software
ERIC Educational Resources Information Center
Gow, George
2008-01-01
Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…
Software engineering and the role of Ada: Executive seminar
NASA Technical Reports Server (NTRS)
Freedman, Glenn B.
1987-01-01
The objective was to introduce the basic terminology and concepts of software engineering and Ada. The life cycle model is reviewed, and the goals and principles of software engineering are applied. An introductory understanding of the features of the Ada language is gained. Topics addressed include: the software crisis; the mandate of the Space Station Program; the software life cycle model; software engineering; and Ada under the software engineering umbrella.
Presenting an Evaluation Model for the Cancer Registry Software.
Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh
2017-12-01
As cancer is increasingly growing, the cancer registry is of great importance as the main core of cancer control programs, and many different software packages have been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential to evaluate and compare a wide range of such software. In this study, the criteria of cancer registry software were determined by studying the documentation and two functional software products in this field. The evaluation tool was a checklist, and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of validation, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved, the final version of the evaluation model for cancer registry software was presented. The evaluation model of this study contains the evaluation tool and method. The evaluation tool is a checklist including the general and specific criteria of cancer registry software along with their sub-criteria. The evaluation method of this study was chosen as a criteria-based evaluation method based on the findings. The model of this study encompasses various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between the general criteria and the specific ones, while trying to fulfill the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate cancer registry software.
Methods for cost estimation in software project management
NASA Astrophysics Data System (ADS)
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed with which the processes used in the software development field have changed makes the task of forecasting the overall costs for a software project very difficult. Many researchers have considered this task unachievable, but there is a group of scientists for whom it can be solved using already known mathematical methods (e.g., multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating overall project costs is presented, together with a description of the existing software development process models. In the last part, a basic mathematical model based on genetic programming is proposed, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking as a basis the software product life cycle and the current challenges and innovations in the software development area. Based on the authors' experience and the analysis of the existing models and product lifecycles, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
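The general approach can be sketched compactly: encode the parameters of a COCOMO-style effort equation as a chromosome and let a genetic algorithm minimize the error against historical project data. The Python below is a minimal sketch under stated assumptions (a two-parameter effort = a * KLOC^b form, invented project data, and a toy GA), not the paper's actual fitness function or chromosome representation.

```python
# Toy GA fitting effort = a * KLOC**b to hypothetical historical projects.
import random

random.seed(1)
projects = [(10, 24.0), (46, 96.0), (83, 130.3), (128, 243.0)]  # (KLOC, person-months)

def fitness(ind):
    a, b = ind
    return -sum((a * kloc**b - effort) ** 2 for kloc, effort in projects)

def mutate(ind, scale=0.1):
    return tuple(max(0.01, g + random.gauss(0, scale)) for g in ind)

pop = [(random.uniform(1, 5), random.uniform(0.8, 1.3)) for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # elitist selection
    children = []
    while len(children) < 40:
        p1, p2 = random.sample(parents, 2)  # uniform crossover per gene
        child = (random.choice([p1[0], p2[0]]), random.choice([p1[1], p2[1]]))
        children.append(mutate(child))
    pop = parents + children

a, b = max(pop, key=fitness)
print(f"effort ≈ {a:.2f} * KLOC^{b:.2f}")
```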
Program Model Checking as a New Trend
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Visser, Willem; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper introduces a special section of STTT (International Journal on Software Tools for Technology Transfer) containing a selection of papers that were presented at the 7th International SPIN workshop, Stanford, August 30 - September 1, 2000. The workshop was named SPIN Model Checking and Software Verification, with an emphasis on model checking of programs. The paper outlines the motivation for stressing software verification, rather than only design and model verification, by presenting the work done in the Automated Software Engineering group at NASA Ames Research Center within the last 5 years. This includes work in software model checking, testing-like technologies, and static analysis.
Knowledge-based approach for generating target system specifications from a domain model
NASA Technical Reports Server (NTRS)
Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan
1992-01-01
Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.
Software cost/resource modeling: Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. J.
1980-01-01
A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Force Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
Software Cost-Estimation Model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1985-01-01
Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.
Software Program: Software Management Guidebook
NASA Technical Reports Server (NTRS)
1996-01-01
The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.
Empirical studies of software design: Implications for SSEs
NASA Technical Reports Server (NTRS)
Krasner, Herb
1988-01-01
Implications for Software Engineering Environments (SEEs) are presented in viewgraph format for characteristics of projects studied; significant problems and crucial problem areas in software design for large systems; layered behavioral model of software processes; implications of field study results; software project as an ecological system; results of the LIFT study; information model of design exploration; software design strategies; results of the team design study; and a list of publications.
1988-12-01
The software development scene is often characterized by: schedule and cost estimates that are grossly inaccurate... c. SPQR Model - Jones: T. Capers Jones has developed a software cost estimation model called the Software Productivity, Quality, and Reliability (SPQR) model. The basic approach is similar to that of Boehm's; time T (in seconds) is simply derived from effort E by dividing by the Stroud number S: T = E/S. d. COPMO - Thebaut.
Studying the Accuracy of Software Process Elicitation: The User Articulated Model
ERIC Educational Resources Information Center
Crabtree, Carlton A.
2010-01-01
Process models are often the basis for demonstrating improvement and compliance in software engineering organizations. A descriptive model is a type of process model describing the human activities in software development that actually occur. The purpose of a descriptive model is to provide a documented baseline for further process improvement…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe A; Heroux, Dr. Michael A; Willenbring, James
2012-01-01
Software lifecycles are becoming an increasingly important issue for computational science & engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process--respecting the competing needs of research vs. production--cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.
Peace umbrella, a vague policy and checkered past. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biszak, G.A.
1997-03-01
With the break-up of the former Soviet Union, the United Nations Security Council enjoyed a greater consensus among its members in confronting aggression and participating in humanitarian and peace operations. Deploying significant military forces under the peace umbrella at the beginning of this decade was highly unlikely. However, since 1990, 25 deployments have been conducted, with the majority falling under the peace umbrella. This paper analyzes current national and military strategy in regard to the peace umbrella, specifically peace enforcement, military doctrine, and the case of Somalia. In addition, this paper looks at doctrine and directives that currently guide deployment of forces and the potential for future peace operations.
A conceptual model for megaprogramming
NASA Technical Reports Server (NTRS)
Tracz, Will
1990-01-01
Megaprogramming is component-based software engineering and life-cycle management. Megaprogramming and its relationship to other research initiatives (common prototyping system/common prototyping language, domain-specific software architectures, and software understanding) are analyzed. The desirable attributes of megaprogramming software components are identified, and a software development model and resulting prototype megaprogramming system (library interconnection language extended by annotated Ada) are described.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
Consistent Evolution of Software Artifacts and Non-Functional Models
2014-11-14
Subject terms: Model-Driven Engineering (MDE), Software Performance Engineering (SPE), Change Propagation, Performance Antipatterns. Vittorio Cortellessa, Università degli Studi dell'Aquila, Via Vetoio, 67100 L'Aquila, Italy. Email: vittorio.cortellessa@univaq.it. Web: http://www.di.univaq.it/cortelle/
THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE
The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...
Development and Application of New Quality Model for Software Projects
Karnavel, K.; Dillibabu, R.
2014-01-01
The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects. PMID:25478594
Development and application of new quality model for software projects.
Karnavel, K; Dillibabu, R
2014-01-01
The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects.
Visualization Skills: A Prerequisite to Advanced Solid Modeling
ERIC Educational Resources Information Center
Gow, George
2007-01-01
Many educators believe that solid modeling software has made teaching two- and three-dimensional visualization skills obsolete. They claim that the visual tools built into the solid modeling software serve as a replacement for the CAD operator's personal visualization skills. They also claim that because solid modeling software can produce…
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
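The key computation the review describes, testing a marker under the model y = X*beta + u + e with var(u) proportional to a kinship matrix K, reduces to generalized least squares once the variance components are known. The Python sketch below illustrates that single-marker test under simplifying assumptions (hypothetical data, an identity kinship, and a fixed variance ratio); real packages such as the ones evaluated estimate the variance components by maximum likelihood.

```python
# Single-marker mixed-model association via GLS; all data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100
K = np.eye(n)                       # kinship matrix (identity = unrelated; assumed)
snp = rng.integers(0, 3, n)         # genotypes coded 0/1/2
y = 0.5 * snp + rng.normal(size=n)  # simulated phenotype

ratio = 1.0                         # assumed genetic/residual variance ratio
V = ratio * K + np.eye(n)           # phenotype covariance up to a scale factor
Vinv = np.linalg.inv(V)

X = np.column_stack([np.ones(n), snp])
# Generalized least squares: beta = (X' V^-1 X)^-1 X' V^-1 y
XtVi = X.T @ Vinv
beta = np.linalg.solve(XtVi @ X, XtVi @ y)
resid = y - X @ beta
sigma2 = (resid @ Vinv @ resid) / (n - X.shape[1])
se_beta = np.sqrt(np.diag(np.linalg.inv(XtVi @ X)) * sigma2)
print(f"SNP effect = {beta[1]:.3f} +/- {se_beta[1]:.3f}")
```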
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1991-01-01
The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. For a first run, modeling the cumulative number of failures versus execution time gave fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If a model agrees with actual historical behavior for a set of data, then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures, and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
Revealing the ISO/IEC 9126-1 Clique Tree for COTS Software Evaluation
NASA Technical Reports Server (NTRS)
Morris, A. Terry
2007-01-01
Previous research has shown that acyclic dependency models, if they exist, can be extracted from software quality standards and that these models can be used to assess software safety and product quality. In the case of commercial off-the-shelf (COTS) software, the extracted dependency model can be used in a probabilistic Bayesian network context for COTS software evaluation. Furthermore, while experts typically employ Bayesian networks to encode domain knowledge, secondary structures (clique trees) from Bayesian network graphs can be used to determine the probabilistic distribution of any software variable (attribute) using any clique that contains that variable. Secondary structures, therefore, provide insight into the fundamental nature of graphical networks. This paper will apply secondary structure calculations to reveal the clique tree of the acyclic dependency model extracted from the ISO/IEC 9126-1 software quality standard. Suggestions will be provided to describe how the clique tree may be exploited to aid efficient transformation of an evaluation model.
Collected software engineering papers, volume 9
NASA Technical Reports Server (NTRS)
1991-01-01
This document is a collection of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) from November 1990 through October 1991. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. This is the ninth such volume of technical papers produced by the SEL. Although these papers cover several topics related to software engineering, they do not encompass the entire scope of SEL activities and interests. For the convenience of this presentation, the eight papers contained here are grouped into three major categories: (1) software models studies; (2) software measurement studies; and (3) Ada technology studies. The first category presents studies on reuse models, including a software reuse model applied to maintenance and a model for an organization to support software reuse. The second category includes experimental research methods and software measurement techniques. The third category presents object-oriented approaches using Ada and object-oriented features proposed for Ada. The SEL is actively working to understand and improve the software development process at GSFC.
The TAME Project: Towards improvement-oriented software environments
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Rombach, H. Dieter
1988-01-01
Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into the current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented including a reassessment of its initial architecture.
Software-defined Quantum Networking Ecosystem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Sadlier, Ronald
The software enables a user to perform modeling and simulation of software-defined quantum networks. The software addresses the problem of how to synchronize transmission of quantum and classical signals through multi-node networks and to demonstrate quantum information protocols such as quantum teleportation. The software approaches this problem by generating a graphical model of the underlying network and attributing properties to each node and link in the graph. The graphical model is then simulated using a combination of discrete-event simulators to calculate the expected state of each node and link in the graph at a future time. A user interacts with the software by providing an initial network model and instantiating methods for the nodes to transmit information with each other. This includes writing application scripts in python that make use of the software library interfaces. A user then initiates the application scripts, which invokes the software simulation. The user then uses the built-in diagnostic tools to query the state of the simulation and to collect statistics on synchronization.
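The general pattern of a graph model stepped by a discrete-event simulator can be shown in a few lines. The Python sketch below is a hypothetical illustration of that pattern (the link structure, delays, and function names are invented, and it omits all quantum-specific state), not the described software's API.

```python
# Discrete-event propagation of a signal over a hypothetical network graph.
import heapq

links = {  # node -> list of (neighbor, link delay) pairs; values are made up
    "A": [("B", 1.0)],
    "B": [("A", 1.0), ("C", 2.5)],
    "C": [("B", 2.5)],
}

def simulate(source):
    """Return the earliest time each node receives the signal from source."""
    arrival, events = {}, [(0.0, source)]
    while events:
        t, node = heapq.heappop(events)
        if node in arrival:
            continue                      # node already reached earlier
        arrival[node] = t
        for nbr, delay in links.get(node, []):
            heapq.heappush(events, (t + delay, nbr))
    return arrival

print(simulate("A"))  # e.g. {'A': 0.0, 'B': 1.0, 'C': 3.5}
```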
A Prototype for the Support of Integrated Software Process Development and Improvement
NASA Astrophysics Data System (ADS)
Porrawatpreyakorn, Nalinpat; Quirchmayr, Gerald; Chutimaskul, Wichian
An efficient software development process is one of the key success factors for quality software. Both the appropriate establishment and the continuous improvement of integrated project management and of the software development process can result in efficiency. This paper hence proposes a software process maintenance framework which consists of two core components: an integrated PMBOK-Scrum model describing how to establish a comprehensive set of project management and software engineering processes, and a software development maturity model advocating software process improvement. Besides, a prototype tool to support the framework is introduced.
Idea Paper: The Lifecycle of Software for Scientific Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubey, Anshu; McInnes, Lois C.
The software lifecycle is a well researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by the teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of this development process and provides needed lifecycle guidance to the scientific software community.
Software development predictors, error analysis, reliability models and software metric analysis
NASA Technical Reports Server (NTRS)
Basili, Victor
1983-01-01
The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.
Resource utilization during software development
NASA Technical Reports Server (NTRS)
Zelkowitz, Marvin V.
1988-01-01
This paper discusses resource utilization over the life cycle of software development and discusses the role that the current 'waterfall' model plays in the actual software life cycle. Software production in the NASA environment was analyzed to measure these differences. The data from 13 different projects were collected by the Software Engineering Laboratory at NASA Goddard Space Flight Center and analyzed for similarities and differences. The results indicate that the waterfall model is not very realistic in practice, and that as technology introduces further perturbations to this model with concepts like executable specifications, rapid prototyping, and wide-spectrum languages, we need to modify our model of this process.
NASA Astrophysics Data System (ADS)
Abdullah, Johari Yap; Omar, Marzuki; Pritam, Helmi Mohd Hadi; Husein, Adam; Rajion, Zainul Ahmad
2016-12-01
3D printing of the mandible is important for pre-operative planning and diagnostic purposes, as well as for education and training. Currently, the processing of CT data is routinely performed with commercial software, which increases the cost of operation and patient management for a small clinical setting. Usage of open-source software as an alternative to commercial software for 3D reconstruction of the mandible from CT data is scarce. The aim of this study is to compare two methods of 3D reconstruction of the mandible using the commercial Materialise Mimics software and the open-source Medical Imaging Interaction Toolkit (MITK) software. Head CT images with a slice thickness of 1 mm and a matrix of 512x512 pixels each were retrieved from the server located at the Radiology Department of Hospital Universiti Sains Malaysia. The CT data were analysed and the 3D models of the mandible were reconstructed using both the commercial Materialise Mimics and the open-source MITK software. Both virtual 3D models were saved in STL format and exported to 3matic and MeshLab software for morphometric and image analyses. The models were compared using the Wilcoxon Signed Rank Test and the Hausdorff Distance. No significant differences were obtained between the 3D models of the mandible produced using the Mimics and MITK software. The 3D model of the mandible produced using the open-source MITK software is comparable to that from the commercial Mimics software. Therefore, open-source software could be used in a clinical setting for pre-operative planning to minimise the operational cost.
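The Hausdorff distance comparison used here is straightforward to reproduce. The Python sketch below computes the symmetric Hausdorff distance between two point sets standing in for the two surface models; the data are hypothetical, and a real comparison would sample points from the exported STL surfaces.

```python
# Symmetric Hausdorff distance between two hypothetical surface point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
mesh_a = rng.normal(size=(1000, 3))                   # stand-in for model A vertices
mesh_b = mesh_a + rng.normal(0, 0.01, mesh_a.shape)   # slightly perturbed copy

# directed_hausdorff is asymmetric; take the max of both directions.
d_ab = directed_hausdorff(mesh_a, mesh_b)[0]
d_ba = directed_hausdorff(mesh_b, mesh_a)[0]
print(f"Hausdorff distance: {max(d_ab, d_ba):.4f}")
```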
3D modeling based on CityEngine
NASA Astrophysics Data System (ADS)
Jia, Guangyin; Liao, Kaiju
2017-03-01
Currently, there are many 3D modeling software packages, like 3DMAX and AUTOCAD, and the more popular BIM software represented by REVIT. The CityEngine modeling software introduced in this paper can fully utilize existing GIS data and combine other built models to perform 3D modeling of the interior and exterior parts of buildings in a rapid and batch manner, so as to improve 3D modeling efficiency.
NASA Astrophysics Data System (ADS)
Daniell, James; Simpson, Alanna; Gunasekara, Rashmin; Baca, Abigail; Schaefer, Andreas; Ishizawa, Oscar; Murnane, Rick; Tijssen, Annegien; Deparday, Vivien; Forni, Marc; Himmelfarb, Anne; Leder, Jan
2015-04-01
Over the past few decades, a plethora of open access software packages for the calculation of earthquake, volcanic, tsunami, storm surge, wind and flood risk have been produced globally. As part of the World Bank GFDRR Review released at the Understanding Risk 2014 Conference, over 80 such open access risk assessment software packages were examined. Commercial software was not considered in the evaluation. A preliminary analysis was used to determine whether the 80 models were currently supported and if they were open access. This process was used to select a subset of 31 models that include 8 earthquake models, 4 cyclone models, 11 flood models, and 8 storm surge/tsunami models for more detailed analysis. By using multi-criteria decision analysis (MCDA) and simple descriptions of the software uses, the review allows users to select a few relevant software packages for their own testing and development. The detailed analysis evaluated the models on the basis of over 100 criteria and provides a synopsis of available open access natural hazard risk modelling tools. In addition, volcano software packages have since been added, bringing the compendium of risk software tools to over 100. There has been a huge increase in the quality and availability of open access/source software over the past few years. For example, private entities such as Deltares now have an open source policy regarding some flood models (NGHS). In addition, leaders in developing risk models in the public sector, such as Geoscience Australia (EQRM, TCRM, TsuDAT, AnuGA) or CAPRA (ERN-Flood, Hurricane, CRISIS2007 etc.), are launching and/or helping many other initiatives. As we achieve greater interoperability between modelling tools, we will also achieve a future wherein different open source and open access modelling tools will be increasingly connected and adapted towards unified multi-risk model platforms and highly customised solutions. It was seen that many software tools could be improved by enabling user-defined exposure and vulnerability. Without this function, many tools can only be used regionally and not at global or continental scale. It is becoming increasingly easy to use multiple packages for a single region and/or hazard to characterize the uncertainty in the risk, or to use them as checks for the sensitivities in the analysis. There is a potential for valuable synergy between existing software. A number of open source software packages could be combined to generate a multi-risk model with multiple views of a hazard. This extensive review has simply attempted to provide a platform for dialogue between all open source and open access software packages and to hopefully inspire collaboration between developers, given the great work done by all open access and open source developers.
Models and metrics for software management and engineering
NASA Technical Reports Server (NTRS)
Basili, V. R.
1988-01-01
This paper attempts to characterize and present a state-of-the-art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resource allocation and estimation, changes and errors, size, complexity, and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.
Open source molecular modeling.
Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan
2016-09-01
The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
A software quality model and metrics for risk assessment
NASA Technical Reports Server (NTRS)
Hyatt, L.; Rosenberg, L.
1996-01-01
A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.
Software Technology for Adaptable, Reliable Systems (STARS)
1994-03-25
Cost estimation models cited (with frequency of use): Timeline (3), SECOMO (3), SEER (3), GSFC Software Engineering Lab Model (1), SLIM (4), SEER-SEM (1), SPQR (2), PRICE-S (2), internally-developed models (3), APMSS (1), SASET (Software Architecture Sizing Estimating Tool) (2), MicroMan II (2), LCM (Logistics Cost Model) (2).
Experiences in Teaching a Graduate Course on Model-Driven Software Development
ERIC Educational Resources Information Center
Tekinerdogan, Bedir
2011-01-01
Model-driven software development (MDSD) aims to support the development and evolution of software intensive systems using the basic concepts of model, metamodel, and model transformation. In parallel with the ongoing academic research, MDSD is more and more applied in industrial practices. After being accepted both by a broad community of…
Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness
NASA Astrophysics Data System (ADS)
Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan
To address the industry software trustworthiness problem, a business-oriented idea for constructing an industry software trustworthiness criterion is proposed. Based on the triangle model of "trustworthy grade definition - trustworthy evidence model - trustworthy evaluation", the idea of business trustworthiness is embodied in different aspects of the triangle model for a specific piece of industry software, a power production management system (PPMS). Business trustworthiness is central to the constructed industry trustworthy software criterion. By fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluation method makes the evaluation results intuitive and comparable.
Automated support for experience-based software management
NASA Technical Reports Server (NTRS)
Valett, Jon D.
1992-01-01
To effectively manage a software development project, the software manager must have access to key information concerning a project's status. This information includes not only data relating to the project of interest, but also, the experience of past development efforts within the environment. This paper describes the concepts and functionality of a software management tool designed to provide this information. This tool, called the Software Management Environment (SME), enables the software manager to compare an ongoing development effort with previous efforts and with models of the 'typical' project within the environment, to predict future project status, to analyze a project's strengths and weaknesses, and to assess the project's quality. In order to provide these functions the tool utilizes a vast corporate memory that includes a data base of software metrics, a set of models and relationships that describe the software development environment, and a set of rules that capture other knowledge and experience of software managers within the environment. Integrating these major concepts into one software management tool, the SME is a model of the type of management tool needed for all software development organizations.
A nondestructive method to estimate the chlorophyll content of Arabidopsis seedlings
Liang, Ying; Urano, Daisuke; Liao, Kang-Ling; ...
2017-04-14
Chlorophyll content decreases in plants under stress conditions; therefore, it is commonly used as an indicator of plant health. Arabidopsis thaliana offers a convenient and fast way to test physiological phenotypes of mutations and treatments. However, chlorophyll measurements with conventional solvent extraction are not applicable to Arabidopsis leaves due to their small size, especially when grown on culture dishes. We provide a nondestructive method for chlorophyll measurement whereby the red, green and blue (RGB) values of a color leaf image are used to estimate the chlorophyll content of Arabidopsis leaves. The method accommodates different profiles of digital cameras by incorporating the ColorChecker chart to make the digital negative profiles, to adjust the white balance, and to calibrate the exposure rate differences caused by the environment, so that this method is applicable in any environment. We chose an exponential function model to estimate chlorophyll content from the RGB values, and fitted the model parameters with physical measurements of chlorophyll contents. As further proof of utility, this method was used to estimate the chlorophyll content of G protein mutants grown on different sugar to nitrogen ratios. Our method is a simple, fast, inexpensive, and nondestructive estimation of the chlorophyll content of Arabidopsis seedlings. This method led to the discovery that G proteins are important in sensing the C/N balance to control chlorophyll content in Arabidopsis.
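Fitting an exponential model of chlorophyll content against a color index is a standard calibration step that can be sketched directly. The Python below is a hypothetical illustration: the exact model form (a normalized green index), the calibration data, and all names are assumptions, not the authors' published parameters.

```python
# Hypothetical calibration of an exponential chlorophyll-vs-color-index model.
import numpy as np
from scipy.optimize import curve_fit

def chlorophyll(rgb_index, a, b):
    # Assumed exponential relation between a color index and chlorophyll.
    return a * np.exp(b * rgb_index)

# Made-up calibration data: normalized green index G/(R+G+B) per leaf and the
# chlorophyll measured by solvent extraction (ug/cm^2) on the same leaves.
green_index = np.array([0.34, 0.36, 0.39, 0.42, 0.45])
measured = np.array([8.1, 10.5, 14.9, 21.2, 30.3])

(a, b), _ = curve_fit(chlorophyll, green_index, measured, p0=[1.0, 10.0])
print(f"chl ≈ {a:.2f} * exp({b:.2f} * G/(R+G+B))")
print("prediction for index 0.40:", round(chlorophyll(0.40, a, b), 1))
```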
Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models
NASA Astrophysics Data System (ADS)
Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto
In this paper, we propose a set of diagrams to visualize software process reference models (PRM). The diagrams, called dimods, are the combination of some visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both, software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.
Capability Maturity Model (CMM) for Software Process Improvements
NASA Technical Reports Server (NTRS)
Ling, Robert Y.
2000-01-01
This slide presentation reviews the Avionic Systems Division's implementation of the Capability Maturity Model (CMM) for improvements in the software development process. The presentation reviews the process involved in implementing the model and the benefits of using CMM to improve the software development process.
A bridge role metric model for nodes in software networks.
Li, Bo; Feng, Yanli; Ge, Shiyu; Li, Dashe
2014-01-01
A bridge role metric model is put forward in this paper. Compared with previous metric models, our treatment of a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model is presented that captures the crucial connectivity of a module or hub, instead of only the centrality used in previous metric models. Two previous metric models are described for comparison. In addition, the fitting curve between the bridge role metric results and node degrees is well fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes additional contributions to an accurate understanding of the module design of software systems and is expected to be beneficial to software engineering practices.
A Bridge Role Metric Model for Nodes in Software Networks
Li, Bo; Feng, Yanli; Ge, Shiyu; Li, Dashe
2014-01-01
A bridge role metric model is put forward in this paper. Compared with previous metric models, our treatment of a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model is presented that captures the crucial connectivity of a module, or hub, rather than only the centrality used in previous metric models. Two previous metric models are described for comparison. In addition, the fitting curve between the metric results and node degrees is well fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper contributes to an accurate understanding of the module design of software systems and is expected to benefit software engineering practice. PMID:25364938
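Since the abstract reports that the metric-versus-degree curve is well fitted by a power law, here is a hedged sketch of that fit; the bridge-role metric values below are invented stand-ins, as the metric's definition is not given in the abstract.

```python
# Illustrative log-log fit of a power law between per-node metric values
# and node degrees, as the abstract reports; `metric` stands in for the
# paper's bridge-role values, which are not specified here.
import numpy as np

degrees = np.array([2, 3, 5, 8, 13, 21, 34], dtype=float)
metric  = np.array([0.9, 1.8, 4.2, 9.5, 22.0, 50.0, 115.0])  # made-up values

# Fit metric ~ c * degree**gamma by linear regression in log space.
gamma, log_c = np.polyfit(np.log(degrees), np.log(metric), 1)
print(f"power-law exponent gamma = {gamma:.2f}, prefactor c = {np.exp(log_c):.2f}")
```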
Development of an Environment for Software Reliability Model Selection
1992-09-01
…now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling… Hardware can be repaired by spare modules, which is not the case for software… Preventive maintenance is very important…
GeoTess: A generalized Earth model software utility
Ballard, Sanford; Hipp, James; Kraus, Brian; ...
2016-03-23
GeoTess is a model parameterization and software support library that manages the construction, population, storage, and interrogation of data stored in 2D and 3D Earth models. The software is available in Java and C++, with a C interface to the C++ library.
Mental Models of Software Forecasting
NASA Technical Reports Server (NTRS)
Hihn, J.; Griesel, A.; Bruno, K.; Fouser, T.; Tausworthe, R.
1993-01-01
The majority of software engineers resist the use of currently available cost models. One problem is that the mathematical and statistical models currently available do not correspond to the mental models of the software engineers. An earlier JPL-funded study (Hihn and Habib-agahi, 1991) found that software engineers prefer analogical or analogy-like techniques to derive size and cost estimates, whereas current CERs hide any analogy in the regression equations. In addition, the currently available models depend on information that is not available during early planning, when the most important forecasts must be made.
NASA Astrophysics Data System (ADS)
Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan
A relevant initiative from the software engineering community, called Model Driven Engineering (MDE), is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
ERIC Educational Resources Information Center
Ferrer, Emilio; Hamagami, Fumiaki; McArdle, John J.
2004-01-01
This article offers different examples of how to fit latent growth curve (LGC) models to longitudinal data using a variety of different software programs (i.e., LISREL, Mx, Mplus, AMOS, SAS). The article shows how the same model can be fitted using both structural equation modeling and multilevel software, with nearly identical results, even in…
Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions
2012-07-01
Capability Maturity Model Integration (CMMI) [Davis 2009]. Team Software Process, TSP, and Capability Maturity Model Integration are service… Acronyms: STP, Software Test Plan; TEP, Test and Evaluation Plan; TSP, Team Software Process; V&V, verification and validation. CMU/SEI-2012-TN-016. Supporting the Use of CERT Secure Coding Standards in DoD Acquisitions. Tim Morrow (Software Engineering Institute), Robert Seacord (Software…
Defect measurement and analysis of JPL ground software: a case study
NASA Technical Reports Server (NTRS)
Powell, John D.; Spagnuolo, John N., Jr.
2004-01-01
Ground software systems at JPL must meet high assurance standards while remaining on schedule, due to the relatively immovable launch dates of the spacecraft that such systems will control. Toward this end, the Software Quality Improvement (SQI) project's Measurement and Benchmarking (M&B) team is collecting and analyzing defect data from JPL ground-system software projects to build software defect prediction models. The aim of these models is to improve predictability with regard to software quality activities. The predictive models will quantitatively define typical trends for JPL ground systems as well as Critical Discriminators (CDs) that explain atypical deviations from the norm at JPL. CDs are software characteristics that can be estimated or foreseen early in a software project's planning. Thus, these CDs will assist in planning for the degree to which a project's software quality activities are likely to deviate from the JPL ground-system norm, based on past experience across the lab.
2017-03-20
Keywords: computation, prime implicates, Boolean abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model… "Types for time-dependent data-flow networks", J.-P. Talpin, P. Jouvelot, S. Shukla, ACM-IEEE Conference on Methods and Models for System Design…
Adaptive Long-Term Monitoring at Environmental Restoration Sites (ER-0629)
2009-05-01
Figures: Figure 2-1, General Flowchart of Software Application; Figure 2-2, Overview of the Genetic Algorithm Approach; Figure 2-3, Example of a… (…and Model Builder) are highlighted on Figure 2-1, which is a general flowchart illustrating the application of the software. The software is applied… monitoring event (e.g., contaminant mass based on interpolation); that modeling is provided by Model Builder.
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation among the mental models studied, it was nevertheless possible to identify a core set of cost forecasting activities, and the mental models were found to cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or making an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include the identification of a core set of well-defined costing activities, a proposed software forecasting lifecycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
A comparative approach to computer aided design model of a dog femur.
Turamanlar, O; Verim, O; Karabulut, A
2016-01-01
Computer-assisted technologies offer new opportunities in medical imaging and rapid prototyping in biomechanical engineering. Three-dimensional (3D) modelling of soft tissues and bones is becoming more important. The accuracy of the analysis in modelling processes depends on the outlines of the tissues derived from medical images. The aim of this study is to evaluate the accuracy of 3D models of a dog femur derived from computed tomography data, using the point cloud method and the boundary line method in several modelling software packages. Solidworks, Rapidform, and 3DSMax were used to create the 3D models, and the outcomes were evaluated statistically. The most accurate 3D prototype of the dog femur was created with the stereolithography method using a rapid prototyping device. Furthermore, the linearity of the model volumes was investigated between the software packages and the constructed models. The difference between the software models and the real models reflects the sensitivity of the software and the devices used in this manner.
Integrated Functional and Executional Modelling of Software Using Web-Based Databases
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Marietta, Roberta
1998-01-01
NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, the use of automatic information extraction tools, web technology, and databases.
Experimental Evaluation of a Serious Game for Teaching Software Process Modeling
ERIC Educational Resources Information Center
Chaves, Rafael Oliveira; von Wangenheim, Christiane Gresse; Furtado, Julio Cezar Costa; Oliveira, Sandro Ronaldo Bezerra; Santos, Alex; Favero, Eloi Luiz
2015-01-01
Software process modeling (SPM) is an important area of software engineering because it provides a basis for managing, automating, and supporting software process improvement (SPI). Teaching SPM is a challenging task, mainly because it lays great emphasis on theory and offers few practical exercises. Furthermore, as yet few teaching approaches…
Estimating Software-Development Costs With Greater Accuracy
NASA Technical Reports Server (NTRS)
Baker, Dan; Hihn, Jairus; Lum, Karen
2008-01-01
COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their project. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development- effort-estimation techniques are used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performances of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.
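As a hedged, much-simplified stand-in for the exhaustive parameter search this abstract describes (COCOMOST's actual '2cee' methodology and data are not reproduced here), the sketch below calibrates a COCOMO-style model, effort = a * size^b, by grid search on invented historical data, scoring candidates by mean magnitude of relative error (MMRE).

```python
# Toy illustration of calibrating a COCOMO-style effort model by
# exhaustive search over (a, b) on historical project data -- a
# simplified stand-in for COCOMOST's parameter-space search.
import numpy as np

sizes   = np.array([12.0, 45.0, 8.0, 110.0, 30.0])    # KSLOC, illustrative
efforts = np.array([40.0, 210.0, 25.0, 650.0, 130.0])  # person-months

best = None
for a in np.arange(1.0, 5.0, 0.05):
    for b in np.arange(0.8, 1.4, 0.01):
        pred = a * sizes**b
        # Mean magnitude of relative error, a standard estimation metric.
        mmre = np.mean(np.abs(pred - efforts) / efforts)
        if best is None or mmre < best[0]:
            best = (mmre, a, b)

print("best MMRE %.3f at a=%.2f, b=%.2f" % best)
```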
SWIFT MODELLER: a Java based GUI for molecular modeling.
Mathur, Abhinav; Shankaracharya; Vidyarthi, Ambarish S
2011-10-01
MODELLER is command-line software that requires tedious formatting of inputs and the writing of Python scripts, which many users are not comfortable with. Visualization of the output is also cumbersome because of the verbose files. This makes the whole software protocol complex and requires extensive study of the MODELLER manuals and tutorials. Here we describe SWIFT MODELLER, a GUI that automates the formatting, scripting, and data extraction processes and presents them interactively, making MODELLER much easier to use than before. The screens in SWIFT MODELLER are designed with homology modeling in mind, and their flow depicts its steps. It eliminates input formatting, scripting, and the analysis of verbose output files through automation, making the pasting of the target sequence the only prerequisite. Jmol (a 3D structure visualization tool) has been integrated into the GUI, which opens and displays the Protein Data Bank files created by MODELLER. All files required and created by the software are saved in a folder named after the work instance's date and time of execution. SWIFT MODELLER lowers the skill level required for the software by automating many of the steps in the original protocol, saving a great deal of time per instance and making MODELLER very easy to work with.
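For context, this is the kind of basic script whose writing SWIFT MODELLER automates, following MODELLER's documented automodel usage; the file names and sequence codes below are placeholders.

```python
# A typical minimal MODELLER homology-modeling script -- the sort of
# boilerplate SWIFT MODELLER generates for the user. File and code
# names are placeholders.
from modeller import environ
from modeller.automodel import automodel

env = environ()
a = automodel(env,
              alnfile='target-template.ali',  # alignment in PIR format
              knowns='template_pdb_code',     # template structure(s)
              sequence='target_seq')          # target sequence code
a.starting_model = 1
a.ending_model = 5                            # build five candidate models
a.make()
```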
Modeling and MBL: Software Tools for Science.
ERIC Educational Resources Information Center
Tinker, Robert F.
Recent technological advances and new software packages put unprecedented power for experimenting and theory-building in the hands of students at all levels. Microcomputer-based laboratory (MBL) and model-solving tools illustrate the educational potential of the technology. These tools include modeling software and three MBL packages (which are…
Modeling software systems by domains
NASA Technical Reports Server (NTRS)
Dippolito, Richard; Lee, Kenneth
1992-01-01
The Software Architectures Engineering (SAE) Project at the Software Engineering Institute (SEI) has developed engineering modeling techniques that both reduce the complexity of software for domain-specific computer systems and result in systems that are easier to build and maintain. These techniques allow maximum freedom for system developers to apply their domain expertise to software. We have applied these techniques to several types of applications, including training simulators operating in real time, engineering simulators operating in non-real time, and real-time embedded computer systems. Our modeling techniques result in software that mirrors both the complexity of the application and the domain knowledge requirements. We submit that the proper measure of software complexity reflects neither the number of software component units nor the code count, but the locus of and amount of domain knowledge. As a result of using these techniques, domain knowledge is isolated by fields of engineering expertise and removed from the concern of the software engineer. In this paper, we will describe kinds of domain expertise, describe engineering by domains, and provide relevant examples of software developed for simulator applications using the techniques.
A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems
NASA Astrophysics Data System (ADS)
Li, Yu; Oberweis, Andreas
Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.
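To ground the Petri-net vocabulary this chapter builds on, here is a minimal place/transition net with the standard firing rule; XML nets extend this picture with structured XML tokens and typed arcs, which the sketch does not model. The places and transitions are invented for illustration.

```python
# Minimal place/transition net with the standard firing rule: a
# transition is enabled when its input places hold enough tokens, and
# firing moves tokens from inputs to outputs.
marking = {"order_received": 1, "order_checked": 0, "order_shipped": 0}

transitions = {   # name -> (pre-places, post-places) with token counts
    "check": ({"order_received": 1}, {"order_checked": 1}),
    "ship":  ({"order_checked": 1},  {"order_shipped": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

fire("check")
fire("ship")
print(marking)  # {'order_received': 0, 'order_checked': 0, 'order_shipped': 1}
```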
Ueno, Yutaka; Ito, Shuntaro; Konagaya, Akihiko
2014-12-01
To better understand the behaviors and structural dynamics of proteins within a cell, novel software tools are being developed to create molecular animations based on the findings of structural biology. This study proposes a method, developed from our prototypes, to detect collisions and examine the soft-body dynamics of molecular models. The code was implemented with a software development toolkit for rigid-body dynamics simulation and a three-dimensional graphics library. The essential functions of the target software system include a basic molecular modeling environment, collision detection in the molecular models, and physical simulation of the movement of the model. Taking advantage of recent software technologies such as physics simulation modules and an interpreted scripting language, the functions required for accurate and meaningful molecular animation were implemented efficiently.
Proposal for constructing an advanced software tool for planetary atmospheric modeling
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.; Thompson, David E.
1990-01-01
Scientific model building can be a time intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing and using models. The proposed tool will include an interactive intelligent graphical interface and a high level, domain specific, modeling language. As a testbed for this research, we propose development of a software prototype in the domain of planetary atmospheric modeling.
A General Water Resources Regulation Software System in China
NASA Astrophysics Data System (ADS)
LEI, X.
2017-12-01
To avoid repeated development of core modules for routine and emergency water resources regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common requirements for water resources regulation and emergency management. It provides a customizable, extensible software framework supporting secondary development for the three-level platform "MWR-Basin-Province". Meanwhile, the general software system enables business collaboration and information sharing of water resources regulation schemes among the three platform levels, so as to improve national water resources regulation decision making. The general software system involves four main modules: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-build water resources regulation decision-making systems; 2) a complete set of model-base and model-computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, together with a model management system to calibrate and configure model parameters; 4) a database that satisfies the business and functional requirements of the general water resources regulation software and provides technical support for building basin or regional water resources regulation models.
The discounting model selector: Statistical software for delay discounting applications.
Gilroy, Shawn P; Franck, Christopher T; Hantula, Donald A
2017-05-01
Original, open-source computer software was developed and validated against established delay discounting methods in the literature. The software executes approximate Bayesian model selection methods on user-supplied temporal discounting data and computes the effective delay 50 (ED50) from the best-performing model. The software was custom-designed to enable behavior analysts to conveniently apply recent statistical methods to temporal discounting data with the aid of a graphical user interface (GUI). Independent validation of the approximate Bayesian model selection methods indicated that the program provides results identical to those of the original source paper and its methods. Monte Carlo simulation (n = 50,000) confirmed that the true model was selected most often in each setting. Simulation code and data for this study were posted to an online repository for use by other researchers. The model selection approach was applied to three existing delay discounting data sets from the literature, in addition to the data from the source paper. Comparisons of model-selected ED50 were consistent with traditional indices of discounting. Conceptual issues related to the development and use of computer software by behavior analysts and the opportunities afforded by free and open-source software are discussed, and possible expansions of this software are reviewed. © 2017 Society for the Experimental Analysis of Behavior.
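To make the model-selection idea concrete, the sketch below fits two classic discounting models and compares them with BIC; the actual software uses approximate Bayesian model selection, and all data here are invented. For the hyperbolic model, ED50 = 1/k, since value falls to half when k·D = 1.

```python
# Sketch of model selection for delay discounting: fit two candidate
# models and compare by BIC (a simple stand-in for the software's
# approximate Bayesian model selection). Data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

delays = np.array([1, 7, 30, 90, 365], dtype=float)  # days
values = np.array([0.95, 0.80, 0.55, 0.35, 0.15])     # subjective value

def hyperbolic(d, k):            # Mazur: V = 1 / (1 + k*d)
    return 1.0 / (1.0 + k * d)

def exponential(d, k):           # Samuelson: V = exp(-k*d)
    return np.exp(-k * d)

def bic(model, popt):
    resid = values - model(delays, *popt)
    n, p = len(values), len(popt)
    return n * np.log(np.mean(resid**2)) + p * np.log(n)

for name, model in [("hyperbolic", hyperbolic), ("exponential", exponential)]:
    popt, _ = curve_fit(model, delays, values, p0=[0.01])
    print(name, "k=%.4f" % popt[0], "BIC=%.1f" % bic(model, popt))

# For the hyperbolic model, ED50 = 1/k (V = 0.5 when k*d = 1).
```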
A Comparison and Evaluation of Real-Time Software Systems Modeling Languages
NASA Technical Reports Server (NTRS)
Evensen, Kenneth D.; Weiss, Kathryn Anne
2010-01-01
A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.
NASA Astrophysics Data System (ADS)
Yetman, G.; Downs, R. R.
2011-12-01
Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and addresses internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.
MATTS- A Step Towards Model Based Testing
NASA Astrophysics Data System (ADS)
Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.
2016-08-01
In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g., state machines), generate abstract test cases, and then convert them to concrete executable test cases (pairs of inputs and expected outputs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
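The generation step can be made concrete with a small, hedged sketch: a toy state machine and a simple transition-coverage criterion (the study's actual tool chain and models are more elaborate). Each abstract test case is a path of (state, event, next-state) steps that would later be mapped to concrete inputs and expected outputs.

```python
# Hedged sketch: derive abstract test cases from a state machine using
# transition coverage. Each case reaches a transition's source state
# and fires it. States and events are invented.
from collections import deque

machine = {                       # (state, event) -> next state
    ("OFF",     "power_on"):  "STANDBY",
    ("STANDBY", "arm"):       "ACTIVE",
    ("ACTIVE",  "disarm"):    "STANDBY",
    ("STANDBY", "power_off"): "OFF",
}

def path_to(start, goal):
    """Shortest event path from start to goal over the full machine."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for (src, event), dst in machine.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [(src, event, dst)]))
    raise ValueError(f"{goal} unreachable from {start}")

def transition_cover(initial="OFF"):
    """One abstract test case per transition: reach its source, fire it."""
    return [path_to(initial, src) + [(src, event, dst)]
            for (src, event), dst in machine.items()]

for i, case in enumerate(transition_cover(), 1):
    print(f"abstract test case {i}:",
          " ".join(f"{s}--{e}-->{t}" for s, e, t in case))
```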
Software dependability in the Tandem GUARDIAN system
NASA Technical Reports Server (NTRS)
Lee, Inhwan; Iyer, Ravishankar K.
1995-01-01
Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. The software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process-pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of the overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) differing from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling based on the data shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.
NASA Astrophysics Data System (ADS)
Wi, S.; Ray, P. A.; Brown, C.
2015-12-01
A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.
RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.
Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z
2017-04-01
We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and single-hit multi-target model, are included in the software. RAD-ADAPT uses maximum likelihood estimation method to obtain parameter estimates with the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
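The stated modeling assumptions can be sketched directly: linear-quadratic survival, S(D) = exp(-(αD + βD²)), fit by maximum likelihood with Poisson-distributed colony counts. The sketch below uses invented data and a hypothetical plating-efficiency parameter; it illustrates the approach, not RAD-ADAPT's actual implementation.

```python
# Sketch: linear-quadratic survival fit by Poisson maximum likelihood.
# Expected colonies = cells_plated * PE * exp(-(alpha*D + beta*D**2)).
# All data values are illustrative.
import numpy as np
from scipy.optimize import minimize

doses  = np.array([0.0, 2.0, 4.0, 6.0, 8.0])        # Gy
plated = np.array([200, 400, 1000, 4000, 20000])    # cells seeded per dish
counts = np.array([120, 110, 90, 70, 55])           # colonies observed

def neg_log_lik(params):
    alpha, beta, pe = params                        # pe = plating efficiency
    mu = plated * pe * np.exp(-(alpha * doses + beta * doses**2))
    return np.sum(mu - counts * np.log(mu))         # Poisson NLL up to a constant

fit = minimize(neg_log_lik, x0=[0.3, 0.03, 0.5],
               bounds=[(1e-6, 5), (0.0, 1), (1e-6, 1)])
print("alpha=%.3f Gy^-1, beta=%.3f Gy^-2, PE=%.3f" % tuple(fit.x))
```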
Using UML Modeling to Facilitate Three-Tier Architecture Projects in Software Engineering Courses
ERIC Educational Resources Information Center
Mitra, Sandeep
2014-01-01
This article presents the use of a model-centric approach to facilitate software development projects conforming to the three-tier architecture in undergraduate software engineering courses. Many instructors intend that such projects create software applications for use by real-world customers. While it is important that the first version of these…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
... Market and Planning Efficiency Through Improved Software; Notice of Agenda and Procedures for Staff... planning models and software. The technical conference will be held from 8 a.m. to 5:30 p.m. (EDT) on June.... Agenda for AD10-12 Staff Technical Conference on Planning Models and Software Federal Energy Regulatory...
The Emergence of Open-Source Software in North America
ERIC Educational Resources Information Center
Pan, Guohua; Bonk, Curtis J.
2007-01-01
Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…
The Evolution of Software Pricing: From Box Licenses to Application Service Provider Models.
ERIC Educational Resources Information Center
Bontis, Nick; Chung, Honsan
2000-01-01
Describes three different pricing models for software. Findings of this case study support the proposition that software pricing is a complex and subjective process. The key determinant of alignment between vendor and user is the nature of value in the software to the buyer. This value proposition may range from increased cost reduction to…
An ontology based trust verification of software license agreement
NASA Astrophysics Data System (ADS)
Lu, Wenhuan; Li, Xiaoqing; Gan, Zengqin; Wei, Jianguo
2017-08-01
When users install or download software, a large document appears stating rights and obligations, which many people lack the patience to read or understand. This can make users distrust the software. In this paper, we propose an ontology-based trust verification approach for software license agreements. First, we propose an ontology model for the domain of software license agreements. The domain ontology is constructed with the proposed methodology according to copyright laws and 30 software license agreements. The license ontology can act as part of a generalized copyright-law knowledge model and can also serve as a visualization of software licenses. Based on the proposed ontology, a software-license-oriented text summarization approach is proposed, whose performance shows that it can improve the accuracy of summarizing software licenses. Based on the summarization, the underlying purpose of a software license can be explicitly explored for trust verification.
Report on the workshop on Ion Implantation and Ion Beam Assisted Deposition
NASA Astrophysics Data System (ADS)
Dearnaley, G.
1992-03-01
This workshop was organized by the Corpus Christi Army Depot (CCAD), the major helicopter repair base within AVSCOM. Previous meetings had revealed a strong interest throughout DoD in ion beam technology as a means of extending the service life of military systems by reducing wear, corrosion, fatigue, etc. The workshop opened with an account by Dr. Bruce Sartwell of the successful application of ion implantation to bearings and gears at NRL, and the checkered history of the MANTECH Project at Spire Corporation. Dr. James Hirvonen (AMTL) continued with a summary of successful applications to reduce wear in biomedical components, and he also described the processes of ion beam-assisted deposition (IBAD) for a variety of protective coatings, including diamond-like carbon (DLC).
Optical programmable Boolean logic unit.
Chattopadhyay, Tanay
2011-11-10
Logic units are the building blocks of many important computational operations, such as arithmetic, multiplexing-demultiplexing, radix conversion, and parity checking and generation. Multifunctional logic operation is essential in this respect. Here a programmable Boolean logic unit is proposed that can perform 16 Boolean logical operations from a single optical input according to the programming input, without changing the circuit design. The circuit has two outputs, one complementary to the other, so no loss of data can occur. The circuit is designed around a 2×2 polarization-independent optical crossbar switch. The performance of the proposed circuit has been evaluated through numerical simulations. The binary logic states (0, 1) are represented by the absence (null) and presence of light, respectively.
Games and machine learning: a powerful combination in an artificial intelligence course
NASA Astrophysics Data System (ADS)
Wallace, Scott A.; McCartney, Robert; Russell, Ingrid
2010-03-01
Project MLeXAI (Machine Learning eXperiences in Artificial Intelligence (AI)) seeks to build a set of reusable course curricula and hands-on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of Project MLeXAI: Robot Defense, a simple real-time strategy game, and Checkers, a classic turn-based board game. From the instructors' perspective, we examine aspects of design and implementation as well as the challenges and rewards of using the curricula. We explore students' responses to the projects via the results of a common survey. Finally, we compare student perceptions of the game-based projects with those of non-game-based projects from the first phase of Project MLeXAI.
Fixing the Sky: Why the History of Climate Engineering Matters (Invited)
NASA Astrophysics Data System (ADS)
Fleming, J. R.
2010-12-01
What shall we do about climate change? Is a planetary-scale technological fix possible or desirable? The joint AMS and AGU “Policy Statement on Geoengineering the Climate System” (2009) recommends “Coordinated study of historical, ethical, legal, and social implications of geoengineering that integrates international, interdisciplinary, and intergenerational issues and perspectives and includes lessons from past efforts to modify weather and climate.” I wrote Fixing the Sky: The Checkered History of Weather and Climate Control (Columbia University Press, 2010) with this recommendation in mind, to be fully accessible to scientists, policymakers, and the general public, while meeting or exceeding the scholarly standards of history. It is my intent, with this book, to bring history to bear on public policy issues.
Pickstone v. Freemans plc, 25 March 1987.
1987-01-01
The applicants, female warehouse operatives for the defendant, challenged the defendant's policy of paying warehouse checkers more than they were being paid. They claimed that both groups were engaged in work of equal value and should be given equal pay. The case was dismissed by a lower court because the defendant employed men in both positions at the same pay as women in those positions and the Equal Pay Act 1970 prohibited suits in such circumstances. The Court of Appeal reversed this decision, holding that Article 119 of the EEC Treaty authorized the suit and overrode conflicting national legislation. It remanded the case for determination whether the pay differential between the two jobs was due to reasons other than sex discrimination.
T1 pseudohyperintensity on fat-suppressed MRI: A potential diagnostic pitfall
Huynh, Tuan N.; Johnson, D. Thor; Poder, Liina; Joe, Bonnie N.; Webb, Emily M.; Coakley, Fergus V.
2011-01-01
MRI findings in two patients with misleading T1 hyperintensity seen only on fat-suppressed images are presented: one with a renal cell carcinoma that was misinterpreted as a hemorrhagic cyst, and the other with an ovarian serous cystadenocarcinoma that was misinterpreted as a complicated endometrioma. The apparent T1 hyperintensity on fat-suppressed images in these cases was likely due to varying perception of image signal dependent on local contrast, an optical effect known as the checker-shadow illusion. T1 pseudohyperintensity should be considered when apparently high T1 signal intensity is seen only on fat-suppressed images; review of non-fat-suppressed images may help prevent an erroneous diagnosis of blood-containing lesions. PMID:21765301
Avoidable Software Procurements
2012-09-01
Keywords: software license, software usage, ELA, Software as a Service, SaaS, Software Asset… Acronyms: PaaS, Platform as a Service; SaaS, Software as a Service; SAM, Software Asset Management; SMS, System Management Server; SEWP, Solutions for Enterprise-Wide… With delivery of full cloud services, we will see the transition of the cloud computing service model from IaaS to SaaS, or Software as a Service.
NASA Astrophysics Data System (ADS)
Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.
2011-12-01
In a distributed computing model such as WLCG, experiment-specific application software has to be distributed efficiently to every site on the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) through some protocol (NFS, AFS, or other). The software is installed at the site by jobs that run on a privileged node of the computing farm where the shared area is mounted in write mode. This model has several drawbacks that cause a non-negligible rate of job failures. An alternative model for software distribution, based on the CERN Virtual Machine File System (CernVM-FS), has been tried at PIC, the Spanish Tier-1 site of WLCG. The test bed used and the results are presented in this paper.
Automating risk analysis of software design models.
Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P
2014-01-01
The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.
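A hypothetical sketch of the identification-tree idea follows: data-flow-diagram (DFD) elements are matched against threat patterns, each carrying an advised mitigation. The rules, element attributes, and names are invented for illustration and do not reproduce AutSEC's actual trees.

```python
# Hypothetical identification rules: walk DFD elements against threat
# patterns and collect candidate mitigations. All names are invented.
dfd = [
    {"type": "data_flow", "name": "login",
     "encrypted": False, "crosses_trust_boundary": True},
    {"type": "data_store", "name": "user_db", "hashed_credentials": False},
]

identification_rules = [
    (lambda e: e["type"] == "data_flow" and e["crosses_trust_boundary"]
               and not e["encrypted"],
     "information disclosure in transit", "use TLS on the channel"),
    (lambda e: e["type"] == "data_store"
               and not e.get("hashed_credentials", True),
     "credential theft from store", "hash and salt stored credentials"),
]

for element in dfd:
    for matches, threat, mitigation in identification_rules:
        if matches(element):
            print(f"{element['name']}: threat '{threat}' "
                  f"-> mitigation: {mitigation}")
```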
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.
1992-01-01
The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike
1991-01-01
The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.
An approach to developing user interfaces for space systems
NASA Astrophysics Data System (ADS)
Shackelford, Keith; McKinney, Karen
1993-08-01
Inherent weaknesses in the traditional waterfall model of software development have led to the definition of the spiral model. The spiral software development lifecycle model, however, had not been applied to NASA projects. This paper describes its use in developing real-time user interface software for an Environmental Control and Life Support System (ECLSS) Process Control Prototype at NASA's Marshall Space Flight Center.
Investing in Software Sustainment
2015-04-30
The colored arrows represent a reinforcing loop called the "Bandwagon Effect": a series of successful missions will… The Software Engineering Institute (SEI) developed a simulation model for analyzing the effects of changes in demand for software sustainment and the corresponding funding decisions. The model…
Hope, Ryan M; Schoelles, Michael J; Gray, Wayne D
2014-12-01
Process models of cognition, written in architectures such as ACT-R and EPIC, should be able to interact with the same software with which human subjects interact. By eliminating the need to simulate the experiment, this approach would simplify the modeler's effort, while ensuring that all steps required of the human are also required by the model. In practice, the difficulties of allowing one software system to interact with another present a significant barrier to any modeler who is not also skilled at this type of programming. The barrier increases if the programming language used by the modeling software differs from that used by the experimental software. The JSON Network Interface simplifies this problem for ACT-R modelers, and potentially, modelers using other systems.
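As a rough illustration of the interaction pattern this abstract describes (not the actual JSON Network Interface message schema, which is defined by that software), the sketch below shows a model process exchanging newline-delimited JSON with experiment software over a socket; the message fields are invented, and a socketpair stands in for the TCP connection so the sketch is self-contained.

```python
# General pattern only -- not the actual JSON Network Interface schema:
# a cognitive model exchanges newline-delimited JSON messages with the
# experiment software over a socket.
import json
import socket
import threading

def send(sock, msg):
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

def recv(f):
    return json.loads(f.readline())

model_end, experiment_end = socket.socketpair()

def experiment():
    # The experiment software's side of the exchange.
    f = experiment_end.makefile("r", encoding="utf-8")
    assert recv(f)["type"] == "model-ready"
    send(experiment_end, {"type": "display", "letter": "J"})
    print("experiment received:", recv(f))

threading.Thread(target=experiment).start()

# The cognitive model's side of the exchange.
f = model_end.makefile("r", encoding="utf-8")
send(model_end, {"type": "model-ready"})
stimulus = recv(f)                    # {"type": "display", "letter": "J"}
send(model_end, {"type": "keypress", "key": stimulus["letter"]})
```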
Testing Software Development Project Productivity Model
NASA Astrophysics Data System (ADS)
Lipkin, Ilya
Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. No accurate model or measure is available to guide an organization in estimating software development, and existing estimation models often underestimate software development efforts by as much as 500 to 600 percent. To address this issue, existing models are usually calibrated using local data with a small sample size, with the resulting estimates offering no improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and a theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. The practical implications of this study allow practitioners to concentrate on the specific constructs of interest that provide the best value for the least amount of time. The study outlines the key contributing constructs that are unique to Software Size (E-SLOC), Man-Hours Spent, and Quality of the Product, these being the constructs with the largest contribution to project productivity. It discusses customer characteristics and provides a framework for a simplified project analysis for source-selection evaluation and audit task reviews for customers and suppliers. The theoretical contributions of this study include an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains, such as IT, command and control, and simulation. This research validates findings from previous work concerning software project productivity and leverages those results. The hypothesized project productivity model provides statistical support and validation of expert opinions used by practitioners in the field of software project estimation.
NASA Technical Reports Server (NTRS)
McNeill, Justin
1995-01-01
The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger-than-anticipated design work on the software to be ported.
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment, and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage, and time-based models are being developed to provide additional theoretical and empirical basis for estimating the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
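As a hedged illustration of condition-based testing in the BRO spirit: for a condition such as `(a < b) and c`, BRO-style requirements exercise the relational operator's three outcomes (<, =, >) and vary the boolean operand. The requirement set and data below are a simplified teaching sketch, not the published constraint-set construction.

```python
# Simplified illustration of the flavor of BRO (Boolean and Relational
# Operator) test requirements for the condition (a < b) and c. A
# teaching sketch only.
requirements = [("<", True), ("=", True), (">", True), ("<", False)]

data_for = {"<": (1, 2), "=": (2, 2), ">": (3, 2)}   # concrete (a, b) pairs

for rel, c in requirements:
    a, b = data_for[rel]
    outcome = (a < b) and c
    print(f"a={a}, b={b} (relation '{rel}'), c={c} -> condition = {outcome}")
```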
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects applicable to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Non-parametric and distribution-free techniques may offer a new and accurate way of modeling failure times and other project data to provide earlier and more accurate estimates of system reliability.
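One concrete example of the distribution-free techniques contemplated here is the Kaplan-Meier estimator, which estimates the survival (non-failure) function directly from failure times without assuming a parametric distribution. The sketch below, with invented data, handles right-censored units (those still running when observation stopped); it is one candidate technique, not the study's chosen method.

```python
# Kaplan-Meier estimate of the survival (non-failure) function from
# failure times, with right-censored observations. Data are invented.
observations = [(10, True), (15, True), (15, False), (22, True), (30, False)]
# (time, observed): observed=False means the unit was still running
# when observation stopped (right-censored).

# Process events before censorings at tied times, per the KM convention.
observations.sort(key=lambda o: (o[0], not o[1]))

at_risk = len(observations)
survival = 1.0
for time, observed in observations:
    if observed:
        survival *= (at_risk - 1) / at_risk
        print(f"t={time}: S(t) = {survival:.3f}")
    at_risk -= 1
```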
2012-02-01
…parameter estimation method, but rather to carefully describe how to use the ERDC software implementation of MLSL that accommodates the PEST model… model-independent, LM-method-based parameter estimation software PEST (Doherty, 2004, 2007a, 2007b), which quantifies model-to-measurement misfit… et al. (2011) focused on one drawback associated with LM-based model-independent parameter estimation as implemented in PEST; viz., that it requires…
Software Assurance Competency Model
2013-03-01
…COTS) software, and software as a service (SaaS). L2: Define and analyze risks in the acquisition of contracted software, COTS software, and SaaS… 2010a]: Application of technologies and processes to achieve a required level of confidence that software systems and services function in the…
A Framework of the Use of Information in Software Testing
ERIC Educational Resources Information Center
Kaveh, Payman
2010-01-01
With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…
A Model Independent S/W Framework for Search-Based Software Testing
Baik, Jongmoon
2014-01-01
In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models are used in MBT, and if the type of model changes from one to another, all functions of a search technique must be reimplemented, because the model types differ even when the same search technique is applied. Implementing the same algorithm over and over requires too much time and effort. We propose a model-independent software framework for SBST that can reduce this redundant work. The framework provides a reusable common software platform that reduces time and effort. It not only provides design patterns for finding test cases for a target model but also reduces development time through the common functions it supplies. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of model. PMID:25302314
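A minimal sketch of the model-independence idea, with invented names rather than the framework's actual API: the search algorithm is written once against an abstract interface, and each model type supplies its own representation, neighborhood, and fitness.

```python
# Sketch: a search algorithm written once against an abstract interface;
# model-specific adapters plug in representation, neighbors, and fitness.
# Names are illustrative, not the framework's actual API.
import random
from abc import ABC, abstractmethod

class TestModel(ABC):
    @abstractmethod
    def random_candidate(self): ...
    @abstractmethod
    def neighbor(self, candidate): ...
    @abstractmethod
    def fitness(self, candidate): ...   # lower is better

def hill_climb(model: TestModel, steps: int = 1000):
    best = model.random_candidate()
    for _ in range(steps):
        cand = model.neighbor(best)
        if model.fitness(cand) < model.fitness(best):
            best = cand
    return best

class BranchDistanceModel(TestModel):
    """Toy adapter: find x reaching the branch `if x == 42:`."""
    def random_candidate(self):
        return random.randint(-1000, 1000)
    def neighbor(self, c):
        return c + random.choice([-10, -1, 1, 10])
    def fitness(self, c):
        return abs(c - 42)              # classic branch-distance fitness

print("input covering the branch:", hill_climb(BranchDistanceModel()))
```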
NASA Technical Reports Server (NTRS)
Hops, J. M.; Sherif, J. S.
1994-01-01
A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It has been estimated that, on average, the effort spent on software maintenance is as costly as the effort spent on all other software activities combined. Software design methods should be the starting point for alleviating the complexity and high costs of software maintenance. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for measuring software complexity, which were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
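As an illustration of how a relative complexity metric can rank modules (a sketch with invented metrics and weights, not the article's actual models), each raw measure is standardized across the system and the standardized values are combined into one score:

    from statistics import mean, stdev

    def relative_complexity_ranking(modules, weights):
        """modules: {name: {metric: value}}; returns names, most complex first."""
        stats = {m: (mean(v[m] for v in modules.values()),
                     stdev(v[m] for v in modules.values())) for m in weights}
        def score(name):
            return sum(w * (modules[name][m] - stats[m][0]) / (stats[m][1] or 1.0)
                       for m, w in weights.items())
        return sorted(modules, key=score, reverse=True)

    mods = {"nav": {"loc": 1200, "fanout": 14},
            "ui":  {"loc": 300,  "fanout": 3},
            "io":  {"loc": 800,  "fanout": 9}}
    print(relative_complexity_ranking(mods, {"loc": 0.5, "fanout": 0.5}))

Modules at the top of such a ranking are the candidates for the maintenance-proneness flagged in the study.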
Hemodynamics model of fluid–solid interaction in internal carotid artery aneurysms
Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju
2010-01-01
The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to support computer simulations of the blood flow. Initial images were obtained using CT angiography and 3D digital subtraction angiography in DICOM format. The images were processed with the MIMICS software, and a 3D fluid model (blood flow) and a 3D solid model (wall) were generated. The output was then exported to the ANSYS Workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in the CFX software while the solid model was calculated in the ANSYS software. The force data calculated first in CFX were transferred to ANSYS, and after receiving the force data, total mesh displacement data were calculated in ANSYS. The mesh displacement data were then transferred back to CFX, with the data exchange handled in Workbench. The simulation results could be visualized in CFX-Post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms are presented, with visualization of the wall shear stress, wall total pressure, and von Mises stress. The method seems relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography. PMID:20812022
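The data exchange described here is an instance of partitioned two-way fluid-structure coupling. A generic sketch of that loop is shown below; the solver calls are placeholders, since the real exchange is orchestrated inside ANSYS Workbench:

    def coupled_fsi_step(fluid_solver, solid_solver, tol=1e-6, max_iters=50):
        """One time step: iterate the force/displacement exchange to convergence."""
        displacement = None
        for _ in range(max_iters):
            wall_forces = fluid_solver.solve(wall_displacement=displacement)
            new_displacement = solid_solver.solve(wall_forces=wall_forces)
            if displacement is not None and max(
                    abs(a - b) for a, b in zip(new_displacement, displacement)) < tol:
                break  # interface displacements converged
            displacement = new_displacement
        return displacement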
Preliminary description of the area navigation software for a microcomputer-based Loran-C receiver
NASA Technical Reports Server (NTRS)
Oguri, F.
1983-01-01
The development and implementation of new navigation software on a microcomputer (MOS 6502) to provide high-quality navigation information is described. The software provides Area/Route Navigation (RNAV) information from Time Differences (TDs) in raw form, using both an elliptical and a spherical Earth model, and was prepared for the microcomputer-based Loran-C receiver. To compute navigation information, the MOS 6502 microcomputer and an arithmetic chip (AM 9511A) were combined with the Loran-C receiver. Final data reveal that this software does indeed provide accurate information with reasonable execution times.
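The underlying computation is a hyperbolic fix: each TD constrains the difference between the ranges to two stations. A heavily simplified flat-earth sketch using Gauss-Newton iteration follows (station coordinates and TDs are hypothetical and emission delays are ignored; the actual software solved the equivalent problem on elliptical and spherical Earth models):

    import math

    C = 0.299792458  # km per microsecond (speed of light)

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def hyperbolic_fix(master, sec1, sec2, td1_us, td2_us, guess, iters=25):
        """Solve dist(P, sec_i) - dist(P, master) = C * td_i for P = (x, y)."""
        x, y = guess
        for _ in range(iters):
            p = (x, y)
            f = [dist(p, sec1) - dist(p, master) - C * td1_us,
                 dist(p, sec2) - dist(p, master) - C * td2_us]
            def grad(st):                      # unit vector from station toward P
                d = dist(p, st)
                return ((x - st[0]) / d, (y - st[1]) / d)
            gm, g1, g2 = grad(master), grad(sec1), grad(sec2)
            J = [[g1[0] - gm[0], g1[1] - gm[1]],
                 [g2[0] - gm[0], g2[1] - gm[1]]]
            det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
            x += (-f[0] * J[1][1] + f[1] * J[0][1]) / det   # Newton update
            y += ( f[0] * J[1][0] - f[1] * J[0][0]) / det
        return x, y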
NCAR global model topography generation software for unstructured grids
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.
2015-06-01
It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
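The central computation is an aggregation of high-resolution elevation and land-mask data into model grid boxes. A sketch for a regular latitude-longitude target grid follows (the NCAR software additionally handles unstructured grids by binning source cells into grid-box polygons):

    import numpy as np

    def grid_box_topography(elev, land_mask, nlat_out, nlon_out):
        """elev, land_mask: 2-D high-resolution arrays; returns coarse-grid fields."""
        fy = elev.shape[0] // nlat_out           # source rows per grid box
        fx = elev.shape[1] // nlon_out           # source cols per grid box
        e = elev[:nlat_out * fy, :nlon_out * fx].reshape(nlat_out, fy, nlon_out, fx)
        m = land_mask[:nlat_out * fy, :nlon_out * fx].reshape(nlat_out, fy, nlon_out, fx)
        land_frac = m.mean(axis=(1, 3))          # fraction of box covered by land
        mean_elev = e.mean(axis=(1, 3))          # grid-box mean elevation
        subgrid_var = e.var(axis=(1, 3))         # feeds gravity-wave/TMS parameterizations
        return land_frac, mean_elev, subgrid_var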
Requirements model for an e-Health awareness portal
NASA Astrophysics Data System (ADS)
Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Nawi, Mohd Nasrun M.
2016-08-01
Requirements engineering is at the heart and foundation of the software engineering process. Poor-quality requirements inevitably lead to poor-quality software solutions, and poor requirements modeling is tantamount to designing a poor-quality product. Quality-assured requirements development is therefore essential to giving a software product the quality it demands. In that light, the requirements for an e-Health (Ebola) Awareness Portal were modeled with close attention to these software engineering concerns. The requirements are modeled as a contribution to the fight against Ebola and toward fulfillment of the United Nations Millennium Development Goal No. 6. In this study, requirements were modeled using the UML 2.0 modeling technique.
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Sievers, Michael; Standley, Shaun
2012-01-01
Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than is possible on flight hardware. SysML simulations provide access to and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated that such an approach is possible.
Performance Evaluation of 3d Modeling Software for Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Yanagi, H.; Chikatsu, H.
2016-06-01
UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, the 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes. As a result, only a few studies have evaluated their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.
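The accuracy evaluation against check points reduces to residual statistics such as per-axis root-mean-square error; a minimal sketch with invented coordinates:

    import math

    def rmse_per_axis(model_pts, check_pts):
        """RMSE of model-derived 3D coordinates against surveyed check points."""
        n = len(check_pts)
        return tuple(math.sqrt(sum((m[i] - c[i]) ** 2
                                   for m, c in zip(model_pts, check_pts)) / n)
                     for i in range(3))

    model  = [(10.02, 5.01, 1.98), (20.05, 4.97, 2.10)]
    survey = [(10.00, 5.00, 2.00), (20.00, 5.00, 2.00)]
    print(rmse_per_axis(model, survey))  # (x, y, z) RMSE in model units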
Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software
NASA Astrophysics Data System (ADS)
Gowda, P. H.; Moorhead, J.; Brauer, D. K.
2017-12-01
Evapotranspiration (ET) is a major component of the hydrologic cycle. ET data are used for a variety of water management and research purposes such as irrigation scheduling, water and crop modeling, streamflow, water availability, and many more. Remote sensing products have been widely used to create spatially representative ET data sets which provide important information from field to regional scales. As UAV capabilities increase, remote sensing use is likely to increase as well. For that purpose, scientists at the USDA-ARS research laboratory in Bushland, TX developed the Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software. The BEARS software is a Java-based application that allows users to process remote sensing data to generate ET outputs using predefined models, or to enter custom equations and models. The capability to define new equations and build new models expands the applicability of the BEARS software beyond ET mapping to any remote sensing application. The software also includes an image viewing tool that allows users to visualize outputs, as well as draw an area of interest using various shapes. This software is freely available from the USDA-ARS Conservation and Production Research Laboratory website.
Modeling of a 3DTV service in the software-defined networking architecture
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2014-11-01
In this article a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on the Software-Defined Networking (SDN) architecture. A definition of a 3D television service spanning the SDN concept is identified, exposing the basic characteristics of a 3DTV service in a modern networking organization layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is shown that modeling a 3DTV service in the Software-Defined Networking architecture leads to multiple improvements, especially in the flexibility of a service supporting heterogeneous end-user devices.
An object-oriented description method of EPMM process
NASA Astrophysics Data System (ADS)
Jiang, Zuo; Yang, Fan
2017-06-01
In order to use mature object-oriented tools and languages for software process models, and to make software process models conform more closely to industrial standards, it is necessary to study the object-oriented modelling of software processes. Based on the formal process definition in EPMM, and considering that Petri nets are mainly a formal modelling tool, this paper combines Petri net modelling with object-oriented modelling ideas and provides an implementation method to convert EPMM models based on Petri nets into object models based on object-oriented description.
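A minimal sketch of such a Petri-net-to-object mapping (class names are illustrative; EPMM's actual constructs and the paper's conversion rules are richer): places and transitions become classes, and firing a transition moves tokens.

    class Place:
        def __init__(self, name, tokens=0):
            self.name, self.tokens = name, tokens

    class Transition:
        def __init__(self, name, inputs, outputs):
            self.name, self.inputs, self.outputs = name, inputs, outputs

        def enabled(self):
            return all(p.tokens > 0 for p in self.inputs)

        def fire(self):                      # move tokens along the arcs
            assert self.enabled(), self.name
            for p in self.inputs:
                p.tokens -= 1
            for p in self.outputs:
                p.tokens += 1

    # a two-step process fragment: design -> review
    design_done, reviewed = Place("design_done", tokens=1), Place("reviewed")
    review = Transition("review", [design_done], [reviewed])
    review.fire()
    print(reviewed.tokens)  # 1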
Theoretical and software considerations for nonlinear dynamic analysis
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1983-01-01
In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends multilevel substructure modeling to dynamic analysis and defines the requirements for a general-purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general-purpose structural software system is presented.
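At the heart of substructuring is static condensation: interior degrees of freedom are eliminated so each substructure is represented only at its boundary. A numerical sketch follows (for dynamics, Guyan reduction applies the same transformation to the mass matrix; the numbers here are illustrative):

    import numpy as np

    def condense(K, interior, boundary):
        """Boundary stiffness  Kbb - Kbi Kii^{-1} Kib  of a substructure."""
        Kii = K[np.ix_(interior, interior)]
        Kib = K[np.ix_(interior, boundary)]
        Kbi = K[np.ix_(boundary, interior)]
        Kbb = K[np.ix_(boundary, boundary)]
        return Kbb - Kbi @ np.linalg.solve(Kii, Kib)

    K = np.array([[ 4., -1., -1.],
                  [-1.,  3.,  0.],
                  [-1.,  0.,  3.]])
    print(condense(K, interior=[0], boundary=[1, 2]))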
Integrated Functional and Executional Modelling of Software Using Web-Based Databases
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Marietta, Roberta
1998-01-01
NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, automatic information extraction tools, web technology, and databases. To appear in the Journal of Database Management.
From Product- to Service-Oriented Strategies in the Enterprise Software Market
ERIC Educational Resources Information Center
Xin, Mingdi
2009-01-01
The enterprise software market is seeing the rise of a new business model--selling Software-as-a-Service (SaaS), in which a standard piece of software is owned and managed remotely by the vendor and delivered as a service over the Internet. Despite the hype, questions remain regarding the rise of this new service model and how it would impact the…
Kralj, Damir; Kern, Josipa; Tonkovic, Stanko; Koncar, Miroslav
2015-09-09
Family medicine practices (FMPs) form the basis of the Croatian health care system. Use of electronic health record (EHR) software is mandatory and plays an important role in running these practices, but important functional features remain uneven and largely left to the will of the software developers. The objective of this study was to develop a novel and comprehensive model for functional evaluation of EHR software in FMPs, based on current world standards, models, and projects, as well as on actual user satisfaction and requirements. Based on previous theoretical and experimental research in this area, we built an initial framework model consisting of six basic categories as the basis for an online survey questionnaire. Family doctors assessed perceived software quality on a five-point Likert-type scale. Using exploratory factor analysis and appropriate statistical methods on the collected data, the final optimal structure of the novel model was formed. Special attention was focused on the validity and quality of the novel model. The online survey collected a total of 384 cases. The obtained results indicate both the quality of the assessed software and the quality in use of the novel model. The strong ergonomic orientation of the novel measurement model was particularly emphasised. The resulting novel model has been validated multiple times and is comprehensive and universal. It could be used to assess the user-perceived quality of almost all forms of ambulatory EHR software and is therefore useful to all stakeholders in this area of health care informatisation.
Software Past, Present, and Future: Views from Government, Industry and Academia
NASA Technical Reports Server (NTRS)
Holcomb, Lee; Page, Jerry; Evangelist, Michael
2000-01-01
Views from the NASA CIO at the NASA Software Engineering Workshop on software development past, present, and future are presented. The topics include: 1) Software Past; 2) Software Present; 3) NASA's Largest Software Challenges; 4) 8330 Software Projects in Industry: the Standish Group's 1994 Report; 5) Software Future; 6) Capability Maturity Model (CMM): Software Engineering Institute (SEI) levels; 7) System Engineering Quality Also Part of the Problem; 8) University Environment Trends Will Increase the Problem in Software Engineering; and 9) NASA Software Engineering Goals.
Impact of Agile Software Development Model on Software Maintainability
ERIC Educational Resources Information Center
Gawali, Ajay R.
2012-01-01
Software maintenance and support costs account for up to 60% of the overall software life cycle cost and often burdens tightly budgeted information technology (IT) organizations. Agile software development approach delivers business value early, but implications on software maintainability are still unknown. The purpose of this quantitative study…
NASA Technical Reports Server (NTRS)
Stephan, Amy; Erikson, Carol A.
1991-01-01
As an initial attempt to introduce expert system technology into an onboard environment, a model-based diagnostic system using the TRW MARPLE software tool was integrated with prototype flight hardware and its corresponding control software. Because this experiment was designed primarily to test the effectiveness of the model-based reasoning technique used, the expert system ran on a separate hardware platform, and interactions between the control software and the model-based diagnostics were limited. While this project met its objective of showing that model-based reasoning can effectively isolate failures in flight hardware, it also identified the need for an integrated development path for expert system and control software for onboard applications. In developing expert systems that are ready for flight, developers must evaluate artificial intelligence techniques to determine whether they offer a real advantage onboard, identify which diagnostic functions should be performed by the expert systems and which are better left to the procedural software, and work closely with both the hardware and the software developers from the beginning of a project to produce a well-designed and thoroughly integrated application.
Space-Shuttle Emulator Software
NASA Technical Reports Server (NTRS)
Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram;
2007-01-01
A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations; heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer, consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.
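At its core, an emulator of this kind runs a fetch-decode-execute loop over a binary image. A toy sketch follows; the invented opcodes only illustrate the loop, whereas the actual package interprets the shuttle GPC's real instruction set, I/O processor, and bus traffic:

    def run(memory, pc=0):
        acc = 0
        while True:
            op, arg = memory[pc]            # fetch
            pc += 1
            if op == "LOAD":                # decode + execute
                acc = arg
            elif op == "ADD":
                acc += memory[arg][1]       # operand stored as ("LIT", value)
            elif op == "JNZ":
                pc = arg if acc != 0 else pc
            elif op == "HALT":
                return acc

    prog = [("LOAD", 5), ("ADD", 3), ("HALT", 0), ("LIT", 37)]
    print(run(prog))  # 42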
Development of new vibration energy flow analysis software and its applications to vehicle systems
NASA Astrophysics Data System (ADS)
Kim, D.-J.; Hong, S.-Y.; Park, Y.-H.
2005-09-01
Energy flow analysis (EFA) offers very promising results in predicting the noise and vibration responses of system structures in medium-to-high frequency ranges. We have developed EFADSC++ R4, software based on the energy flow finite element method (EFFEM), for vibration analysis. The software can analyze system structures composed of beam, plate, spring-damper, rigid body, and many other elements, and has many useful analysis functions. For convenient use, the main functions of the software are modularized into a translator, a model-converter, and a solver. The translator module makes it possible to use a finite element (FE) model for the vibration analysis. The model-converter module changes the FE model into an energy flow finite element (EFFE) model and generates joint elements to cover the vibrational attenuation in complex structures composed of various elements, and it can solve the joint element equations very quickly by using the wave transmission approach. The solver module supports various direct and iterative solvers for multi-DOF structures. Vibration predictions for real vehicles using the developed software were performed successfully.
A software development and evolution model based on decision-making
NASA Technical Reports Server (NTRS)
Wild, J. Christian; Dong, Jinghuan; Maly, Kurt
1991-01-01
Design is a complex activity whose purpose is to construct an artifact that satisfies a set of constraints and requirements. However, the design process is not well understood. The software design and evolution process is the focus of interest here, and a three-dimensional software development space organized around a decision-making paradigm is presented. An initial instantiation of this model, called 3DPM(sub p), which was partly implemented, is presented, along with a discussion of the use of this model in software reuse and process management.
Model-based engineering for medical-device software.
Ray, Arnab; Jetley, Raoul; Jones, Paul L; Zhang, Yi
2010-01-01
This paper demonstrates the benefits of adopting model-based design techniques for engineering medical device software. Using a patient-controlled analgesic (PCA) infusion pump as a candidate medical device, the authors show how using models to capture design information allows for: i) fast and efficient construction of executable device prototypes; ii) creation of a standard, reusable baseline software architecture for a particular device family; iii) formal verification of the design against safety requirements; and iv) creation of a safety framework that reduces verification costs for future versions of the device software.
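For a flavor of the safety requirements such a design can be verified against, here is a hedged sketch (states and numbers are hypothetical, not the paper's model) of a PCA bolus-lockout rule expressed as executable logic:

    class PCAPump:
        """Toy model: a patient bolus is denied during the lockout interval."""
        def __init__(self, lockout_s):
            self.lockout_s, self.last_bolus_t = lockout_s, None

        def request_bolus(self, t):
            if self.last_bolus_t is not None and t - self.last_bolus_t < self.lockout_s:
                return "DENIED"              # the safety property under verification
            self.last_bolus_t = t
            return "DELIVERED"

    pump = PCAPump(lockout_s=600)
    assert pump.request_bolus(0) == "DELIVERED"
    assert pump.request_bolus(300) == "DENIED"     # inside the lockout window
    assert pump.request_bolus(700) == "DELIVERED"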
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
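In notation assumed here (the abstract itself gives no formulas): if residual fault $i$ contributes an independent time-to-exposure $X_i \sim \mathrm{Exp}(\lambda_i)$, the observed failure times are the order statistics $X_{(1)} \le X_{(2)} \le \cdots$, and after $k$ faults have been found and fixed the program's failure rate and reliability over a further period $t$ are

    \Lambda_k = \sum_{i \in R_k} \lambda_i,
    \qquad
    R_k(t) = \exp(-\Lambda_k t),

where $R_k$ is the set of residual faults. The verification difficulty the abstract notes follows directly: demonstrating ultra-high reliability means bounding $\Lambda_k$ at levels far smaller than feasible test time can resolve.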
Modeling and analysis of visual digital impact model for a Chinese human thorax.
Zhu, Jin; Wang, Kai-Ming; Li, Shu; Liu, Hai-Yan; Jing, Xiao; Li, Xiao-Fang; Liu, Yi-He
2017-01-01
To establish a three-dimensional finite element model of the human chest for engineering research on individual protection. Computed tomography (CT) scanning data were used for three-dimensional reconstruction with the medical image reconstruction software Mimics. The finite element method (FEM) preprocessing software ANSYS ICEM CFD was used for cell mesh generation, and the relevant material behavior parameters of all of the model's parts were specified. The finite element model was constructed with the FEM software, and the model availability was verified based on previous cadaver experimental data. A finite element model approximating the anatomical structure of the human chest was established, and the model's simulation results conformed to the results of the cadaver experiment overall. Segment data of the human body and specialized software can be utilized for FEM model reconstruction to satisfy the need for numerical analysis of shocks to the human chest in engineering research on body mechanics.
Model-Driven Useware Engineering
NASA Astrophysics Data System (ADS)
Meixner, Gerrit; Seissler, Marc; Breiner, Kai
User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.
2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Waters, Jiajia
Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the models and the robustness of the software. We also continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or extend with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.
A model of cloud application assignments in software-defined storages
NASA Astrophysics Data System (ADS)
Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; E Shukhman, Alexander
2017-01-01
The aim of this study is to analyze the structure and mechanisms of interaction of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications including three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources both from the user's point of view and from the point of view of the software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing, at the same time, the application data placements and the state of the virtual environment, taking the network topology into account. The model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response times and increases the performance of user request processing. The use of software-defined data storages allows a decrease in the number of physical storage devices, which demonstrates the efficiency of the algorithm.
Software Engineering Education Directory
1990-04-01
and Engineering (CMSC 735) Codes: GPEV2 * Textbooks: IEEE Tutorial on Models and Metrics for Software Management and Engineering by Basili, Victor R...Software Engineering (Comp 227) Codes: GPRY5 Textbooks: IEEE Tutorial on Software Design Techniques by Freeman, Peter and Wasserman, Anthony I. Software
Proceedings of the Thirteenth Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1988-01-01
Topics covered in the workshop included studies and experiments conducted in the Software Engineering Laboratory (SEL), a cooperative effort of NASA Goddard Space Flight Center, the University of Maryland, and Computer Sciences Corporation; software models; software products; and software tools.
CrossTalk. The Journal of Defense Software Engineering. Volume 23, Number 6, Nov/Dec 2010
2010-11-01
Model of architectural design. It guides developers to apply effort to their software architecture commensurate with the risks faced by...Driven Model is the promotion of risk to prominence. It is possible to apply the Risk-Driven Model to essentially any software development process...succeed without any planned architecture work, while many high-risk projects would fail without it. The Risk-Driven Model walks a middle path
NASA Technical Reports Server (NTRS)
Butler, Douglas J.; Kerstman, Eric
2010-01-01
This slide presentation reviews the goals and approach for the Integrated Medical Model (IMM). The IMM is a software decision support tool that forecasts medical events during spaceflight and optimizes medical systems during simulations. It includes information on the software capabilities, program stakeholders, use history, and the software logic.
Path generation algorithm for UML graphic modeling of aerospace test software
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditionally, aerospace software test engineers rely on their own experience and on communication with the software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and error-prone. With the high-reliability MBT tool developed by our company, a model built once can automatically generate the test case documents, which is efficient and accurate. Accurately expressing the process a UML model describes depends on the paths that can be reached through it. Existing path generation algorithms are either too simple, unable to combine branch paths with loop paths, or so cumbersome that they generate meaningless path combinations, which are superfluous for aerospace software testing. Drawing on our aerospace project experience, we developed a tailored path generation algorithm for UML graphic models of aerospace test software.
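A generic sketch of the bounded-loop path enumeration such an algorithm must perform (a standard depth-first search with a visit bound, not the proprietary algorithm itself):

    def paths(graph, start, end, max_visits=2):
        """Enumerate start->end paths; each node may appear at most max_visits
        times, so loops are unrolled a bounded number of times."""
        results, visits = [], {}
        def dfs(node, path):
            visits[node] = visits.get(node, 0) + 1
            path.append(node)
            if node == end:
                results.append(list(path))
            else:
                for nxt in graph.get(node, []):
                    if visits.get(nxt, 0) < max_visits:
                        dfs(nxt, path)
            path.pop()
            visits[node] -= 1
        dfs(start, [])
        return results

    g = {"A": ["B"], "B": ["C", "D"], "C": ["B"], "D": []}   # branch at B, loop C->B
    for p in paths(g, "A", "D"):
        print(" -> ".join(p))   # A -> B -> D  and  A -> B -> C -> B -> D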
The software-cycle model for re-engineering and reuse
NASA Technical Reports Server (NTRS)
Bailey, John W.; Basili, Victor R.
1992-01-01
This paper reports on the progress of a study which will contribute to our ability to perform high-level, component-based programming by describing means to obtain useful components, methods for the configuration and integration of those components, and an underlying economic model of the costs and benefits associated with this approach to reuse. One goal of the study is to develop and demonstrate methods to recover reusable components from domain-specific software through a combination of tools, to perform the identification, extraction, and re-engineering of components, and domain experts, to direct the applications of those tools. A second goal of the study is to enable the reuse of those components by identifying techniques for configuring and recombining the re-engineered software. This component-recovery or software-cycle model addresses not only the selection and re-engineering of components, but also their recombination into new programs. Once a model of reuse activities has been developed, the quantification of the costs and benefits of various reuse options will enable the development of an adaptable economic model of reuse, which is the principal goal of the overall study. This paper reports on the conception of the software-cycle model and on several supporting techniques of software recovery, measurement, and reuse which will lead to the development of the desired economic model.
Reyes-López, Alfonso; Garduño-Espinosa, Juan; Muñoz-Hernández, Onofre
2018-01-01
Background Drug-drug interactions (DDIs) detected in a patient may not be clinically apparent (potential DDIs), and when they occur, they produce adverse drug reactions (ADRs), toxicity or loss of treatment efficacy. In pediatrics, there are only few publications assessing potential DDIs and their risk factors. There are no studies in children admitted to emergency departments (ED). The present study estimates the prevalence and describes the characteristics of potential DDIs in patients admitted to an ED from a tertiary care hospital in Mexico; in addition, potential DDI-associated risk factors are investigated. Methods A secondary analysis of data from 915 patients admitted to the ED of the Hospital Infantil de México “Federico Gómez” was conducted. The Medscape Drug Interaction Checker software was used to identify potential DDIs. The results are expressed as number of cases (%), means (95% CI) and medians (25-75th percentiles). Count data regressions for number of total and severity-stratified potential DDIs were performed adjusting for patient characteristics, number of administered drugs, days of stay, presence of ADRs and diagnoses. Results The prevalence of potential DDIs was 61%, with a median of 4 (2–8). A proportion of 0.2% of potential DDIs was “Contraindicated”, 7.5% were classified as “Serious”, 62.8% as “Significant” and 29.5% as “Minor”. Female gender, age, days of stay, number of administered drugs and diagnoses of Neoplasms (C00-D48), Congenital malformations (Q00-Q99), Diseases of the Blood, Blood-forming Organs and Immunity (D50-D89) and Diseases of the nervous system (G00-G99) were significantly associated with potential DDIs. Conclusion The prevalence of potential DDIs in the ED is high, and strategies should therefore be established to monitor patients’ safety during their stay, in addition to conducting investigations to estimate the real harm potential DDIs inflict on patients. PMID:29304072
Sandia Advanced MEMS Design Tools, Version 2.2.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarberry, Victor; Allen, James; Lantz, Jeffery
2010-01-19
The Sandia National Laboratories Advanced MEMS Design Tools, Version 2.2.5, is a collection of menus, prototype drawings, and executables that provide significant productivity enhancements when using AutoCAD to design MEMS components. This release is designed for AutoCAD 2000i, 2002, or 2004 and is supported under Windows NT 4.0, Windows 2000, or XP. SUMMiT V (Sandia Ultra-planar Multi-level MEMS Technology) is a 5-level surface micromachine fabrication technology, which customers internal and external to Sandia can access to fabricate prototype MEMS devices. This CD contains an integrated set of electronic files that: a) describe the SUMMiT V fabrication process; b) facilitate the process of designing MEMS with the SUMMiT process (prototype file, Design Rule Checker, Standard Parts Library). New features in this version: AutoCAD 2004 support has been added. SafeExplode: a new feature that explodes blocks without affecting polylines (avoids exploding polylines into objects that are ignored by the DRC and visualization tools). Layer control menu: a pull-down menu for selecting layers to isolate, freeze, or thaw. Updated tools: a check has been added to catch invalid block names. DRC features: added username/password validation and a method to update the user's password. SNL_DRC_WIDTH: a value to control the width of the DRC error lines. SNL_BIAS_VALUE: a value used to offset selected geometry. SNL_PROCESS_NAME: a value to specify the process name. Documentation changes: the documentation has been updated to include the new features. While there exist some files on the CD that are used in conjunction with the software package AutoCAD, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.
A review of some problems in global-local stress analysis
NASA Technical Reports Server (NTRS)
Nelson, Richard B.
1989-01-01
The various types of local-global finite-element problems point out the need to develop a new generation of software. First, this new software needs to have a complete analysis capability, encompassing linear and nonlinear analysis of 1-, 2-, and 3-dimensional finite-element models, as well as mixed dimensional models. The software must be capable of treating static and dynamic (vibration and transient response) problems, including the stability effects of initial stress, and the software should be able to treat both elastic and elasto-plastic materials. The software should carry a set of optional diagnostics to assist the program user during model generation in order to help avoid obvious structural modeling errors. In addition, the program software should be well documented so the user has a complete technical reference for each type of element contained in the program library, including information on such topics as the type of numerical integration, use of underintegration, and inclusion of incompatible modes, etc. Some packaged information should also be available to assist the user in building mixed-dimensional models. An important advancement in finite-element software should be in the development of program modularity, so that the user can select from a menu various basic operations in matrix structural analysis.
Aspect-Oriented Model-Driven Software Product Line Engineering
NASA Astrophysics Data System (ADS)
Groher, Iris; Voelter, Markus
Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed using aspect-oriented composition techniques at the model level. Model transformations support the transition from problem-space to solution-space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage, using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over future finite time intervals, relative to the number of failures eventually observed during those intervals. Six individual models and eight selection schemes are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
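For concreteness, here is a hedged sketch of the individual-model method with one standard growth model, the Goel-Okumoto NHPP with mean value function m(t) = a(1 - e^(-bt)) (the failure times are invented, and the paper evaluates several models and selection criteria, not this particular fit):

    import math
    from scipy.optimize import minimize

    def neg_log_lik(params, times, T):
        a, b = params
        if a <= 0 or b <= 0:
            return float("inf")
        # NHPP log-likelihood: sum of log intensities minus m(T)
        ll = sum(math.log(a * b) - b * t for t in times) - a * (1 - math.exp(-b * T))
        return -ll

    times = [12, 25, 31, 50, 70, 104, 141, 190, 260, 350]   # observed failure times
    T = 400.0                                               # end of observation
    fit = minimize(neg_log_lik, x0=[2.0 * len(times), 1e-3],
                   args=(times, T), method="Nelder-Mead")
    a, b = fit.x
    m = lambda t: a * (1 - math.exp(-b * t))
    print("predicted failures in (400, 800]:", m(800.0) - m(T))

The relative prediction error is then this predicted count measured against the count eventually observed in that interval.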
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
The construction of scientific software models is an integral part of doing science, both within NASA and within the scientific community at large. Typically, model-building is a time-intensive and painstaking process, involving the design of very large, complex computer programs. Despite the considerable expenditure of resources involved, completed scientific models cannot easily be distributed and shared with the larger scientific community due to the low-level, idiosyncratic nature of the implemented code. To address this problem, we have initiated a research project aimed at constructing a software tool called the Scientific Modeling Assistant. This tool provides automated assistance to the scientist in developing, using, and sharing software models. We describe the Scientific Modeling Assistant, and also touch on some human-machine interaction issues relevant to building a successful tool of this type.
Computer-aided software development process design
NASA Technical Reports Server (NTRS)
Lin, Chi Y.; Levary, Reuven R.
1989-01-01
The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
A Bibliography of Externally Published Works by the SEI Engineering Techniques Program
1992-08-01
media, and virtual reality * model-based engineering * programming languages * reuse * software architectures * software engineering as a discipline...Knowledge-Based Engineering Environments." IEEE Expert 3, 2 (May 1988): 18-23, 26-32. Audience: Practitioner [Klein89b] Klein, D.V. "Comparison of...Terms with Software Reuse Terminology: A Model-Based Approach." ACM SIGSOFT Software Engineering Notes 16, 2 (April 1991): 45-51. Audience: Practitioner
The Effectiveness of Software Project Management Practices: A Quantitative Measurement
2011-03-01
Assessment (SPMMA) model (Ramli, 2007). The purpose of the SPMMA was to help a company measure the strengths and weaknesses of its software project...Practices," Fuazi and Ramli presented a model to assess software project management practices using their Software Project Management Maturity...Analysis The SPMMA was carried out on one mid-size Information Technology (IT) company. Based on the questionnaire responses, interviews and discussions
Software Engineering Guidebook
NASA Technical Reports Server (NTRS)
Connell, John; Wenneson, Greg
1993-01-01
The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality, for use in developing credible methods for predicting the reliability of software used in life-critical applications, is summarized. The software error data reported were acquired through automated repetitive-run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies, reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.
Reuseable Objects Software Environment (ROSE): Introduction to Air Force Software Reuse Workshop
NASA Technical Reports Server (NTRS)
Cottrell, William L.
1994-01-01
The Reusable Objects Software Environment (ROSE) is a common, consistent, consolidated implementation of software functionality using modern object oriented software engineering including designed-in reuse and adaptable requirements. ROSE is designed to minimize abstraction and reduce complexity. A planning model for the reverse engineering of selected objects through object oriented analysis is depicted. Dynamic and functional modeling are used to develop a system design, the object design, the language, and a database management system. The return on investment for a ROSE pilot program and timelines are charted.
Methodology of decreasing software complexity using ontology
NASA Astrophysics Data System (ADS)
DÄ browska-Kubik, Katarzyna
2015-09-01
In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of the source code by using many different maintenance techniques, such as creation of documentation and elimination of dead code, cloned code, or previously known bugs [1][2]. With this approach, savings on the software maintenance costs of web applications will be possible.
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les
1991-01-01
The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FFP) is described. The FFP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.
Computational Simulations and the Scientific Method
NASA Technical Reports Server (NTRS)
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
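As an example of the kind of published component test the paper advocates, consider a hypothetical drag-coefficient model shipped with two repeatable checks: a documented reference value and a limiting behavior (the model, values, and tolerances are all invented for illustration):

    import math

    def drag_coefficient(reynolds):
        # hypothetical model: Stokes drag with an empirical high-Re correction
        return 24.0 / reynolds * (1.0 + 0.15 * reynolds ** 0.687)

    def test_reference_case():
        # reference value the model developer publishes with the model
        assert math.isclose(drag_coefficient(1.0), 27.6, rel_tol=1e-2)

    def test_stokes_limit():
        # as Re -> 0 the correction vanishes and Cd -> 24/Re
        assert math.isclose(drag_coefficient(1e-6) * 1e-6, 24.0, rel_tol=1e-3)

    test_reference_case(); test_stokes_limit()

A new implementor can rerun exactly these fixtures to confirm a port of the model before weighing its cost-to-benefit ratio.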
Early experiences building a software quality prediction model
NASA Technical Reports Server (NTRS)
Agresti, W. W.; Evanco, W. M.; Smith, M. C.
1990-01-01
Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.
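A sketch of the modeling step described, with invented data standing in for the 21 Ada subsystems: regress error density on design-structure measures and report the explained variation:

    import numpy as np

    # columns: import count (interconnectivity), visible units, % reused code
    X = np.array([[12, 4, 10], [30, 9, 0], [7, 2, 55], [22, 6, 5], [15, 5, 20.]])
    y = np.array([1.8, 4.9, 0.6, 3.5, 2.1])     # errors per KSLOC

    A = np.column_stack([np.ones(len(X)), X])   # add intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    print("coefficients:", coef.round(3), " R^2:", round(r2, 2))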
Quantitative software models for the estimation of cost, size, and defects
NASA Technical Reports Server (NTRS)
Hihn, J.; Bright, L.; Decker, B.; Lum, K.; Mikulski, C.; Powell, J.
2002-01-01
The presentation will provide a brief overview of the SQI measurement program as well as describe each of these models and how they are currently being used in supporting JPL project, task and software managers to estimate and plan future software systems and subsystems.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...
Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J
1997-01-01
Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
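The pipeline described (principal components of design measures feeding a neural-network classifier of fault-prone modules) can be sketched with standard tools; synthetic data stands in for the proprietary Nortel measurements:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 9))            # nine design measures per module
    y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200) > 1.5).astype(int)

    clf = make_pipeline(PCA(n_components=4),
                        MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                      random_state=0))
    clf.fit(X[:150], y[:150])                # train on earlier modules
    print("holdout accuracy:", clf.score(X[150:], y[150:]))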
Sharing Research Models: Using Software Engineering Practices for Facilitation
Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.
2011-01-01
Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems’ behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations—such as nonintuitive user interface features and data input specifications—may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices— the iterative software development process, object-oriented methodology, and Unified Modeling Language—and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780
Chung, Beom Sun; Chung, Min Suk; Shin, Byeong Seok; Kwon, Koojoo
2018-02-19
The hand anatomy, including the complicated hand muscles, can be grasped by using computer-assisted learning tools with high quality two-dimensional images and three-dimensional models. The purpose of this study was to present up-to-date software tools that promote learning of stereoscopic morphology of the hand. On the basis of horizontal sectioned images and outlined images of a male cadaver, vertical planes, volume models, and surface models were elaborated. Software to browse pairs of the sectioned and outlined images in orthogonal planes and software to peel and rotate the volume models, as well as a portable document format (PDF) file to select and rotate the surface models, were produced. All of the software tools were downloadable free of charge and usable off-line. The three types of tools for viewing multiple aspects of the hand could be adequately employed according to individual needs. These new tools involving the realistic images of a cadaver and the diverse functions are expected to improve comprehensive knowledge of the hand shape. © 2018 The Korean Academy of Medical Sciences.
Verification of Decision-Analytic Models for Health Economic Evaluations: An Overview.
Dasbach, Erik J; Elbasha, Elamin H
2017-07-01
Decision-analytic models for cost-effectiveness analysis are developed in a variety of software packages where the accuracy of the computer code is seldom verified. Although modeling guidelines recommend using state-of-the-art quality assurance and control methods for software engineering to verify models, the fields of pharmacoeconomics and health technology assessment (HTA) have yet to establish and adopt guidance on how to verify health and economic models. The objective of this paper is to introduce to our field the variety of methods the software engineering field uses to verify that software performs as expected. We identify how many of these methods can be incorporated in the development process of decision-analytic models in order to reduce errors and increase transparency. Given the breadth of methods used in software engineering, we recommend a more in-depth initiative to be undertaken (e.g., by an ISPOR-SMDM Task Force) to define the best practices for model verification in our field and to accelerate adoption. Establishing a general guidance for verifying models will benefit the pharmacoeconomics and HTA communities by increasing accuracy of computer programming, transparency, accessibility, sharing, understandability, and trust of models.
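One concrete verification practice borrowed from software engineering is a unit-test suite run against the model's computational core. A sketch for a simple three-state Markov cohort model (the states, numbers, and checks are invented for illustration):

    import numpy as np

    P = np.array([[0.85, 0.10, 0.05],    # well -> (well, sick, dead)
                  [0.00, 0.70, 0.30],    # sick
                  [0.00, 0.00, 1.00]])   # dead (absorbing)

    def cohort_trace(P, start, cycles):
        trace = [start]
        for _ in range(cycles):
            trace.append(trace[-1] @ P)
        return np.array(trace)

    # checks that would live in the model's verification suite
    assert np.allclose(P.sum(axis=1), 1.0), "each row must be a probability distribution"
    trace = cohort_trace(P, np.array([1.0, 0.0, 0.0]), cycles=20)
    assert np.allclose(trace.sum(axis=1), 1.0), "cohort must be conserved each cycle"
    print(trace[-1].round(3))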
The FoReVer Methodology: A MBSE Framework for Formal Verification
NASA Astrophysics Data System (ADS)
Baracchi, Laura; Mazzini, Silvia; Cimatti, Alessandro; Tonetta, Stefano; Garcia, Gerald
2013-08-01
The need for a high level of confidence and operational integrity in critical space (software) systems is well recognized in the space industry and has so far been addressed through rigorous system and software development processes and stringent verification and validation regimes. The Model Based Space System Engineering process (MBSSE), derived in the System and Software Functional Requirement Techniques study (SSFRT), focused on the application of model-based engineering technologies to support the space system and software development processes, from mission-level requirements to software implementation, through model refinements and translations. In this paper we report on our work in the ESA-funded FoReVer project, where we aim at developing methodological, theoretical, and technological support for a systematic approach to space avionics system development in phases 0/A/B/C. FoReVer enriches the MBSSE process with contract-based formal verification of properties, at different stages from system to software, through a step-wise refinement approach, with support for a Software Reference Architecture.
ERIC Educational Resources Information Center
García, Isaías; Benavides, Carmen; Alaiz, Héctor; Alonso, Angel
2013-01-01
This paper describes research on the use of knowledge models (ontologies) for building computer-aided educational software in the field of control engineering. Ontologies are able to represent in the computer a very rich conceptual model of a given domain. This model can be used later for a number of purposes in different software applications. In…
Logic Model Checking of Unintended Acceleration Claims in Toyota Vehicles
NASA Technical Reports Server (NTRS)
Gamble, Ed
2012-01-01
Part of the US Department of Transportation investigation of Toyota sudden unintended acceleration (SUA) involved analysis of the throttle control software. The JPL Laboratory for Reliable Software applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built and some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses.