Sample records for property verification task

  1. Modality Switching in a Property Verification Task: An ERP Study of What Happens When Candles Flicker after High Heels Click

    PubMed Central

    Collins, Jennifer; Pecher, Diane; Zeelenberg, René; Coulson, Seana

    2011-01-01

    The perceptual modalities associated with property words, such as flicker or click, have previously been demonstrated to affect subsequent property verification judgments (Pecher et al., 2003). Known as the conceptual modality switch effect, this finding supports the claim that brain systems for perception and action help subserve the representation of concepts. The present study addressed the cognitive and neural substrate of this effect by recording event-related potentials (ERPs) as participants performed a property verification task with visual or auditory properties in key trials. We found that for visual property verifications, modality switching was associated with an increased amplitude N400. For auditory verifications, switching led to a larger late positive complex. Observed ERP effects of modality switching suggest property words access perceptual brain systems. Moreover, the timing and pattern of the effects suggest perceptual systems impact the decision-making stage in the verification of auditory properties, and the semantic stage in the verification of visual properties. PMID:21713128

  2. Modality Switching in a Property Verification Task: An ERP Study of What Happens When Candles Flicker after High Heels Click.

    PubMed

    Collins, Jennifer; Pecher, Diane; Zeelenberg, René; Coulson, Seana

    2011-01-01

    The perceptual modalities associated with property words, such as flicker or click, have previously been demonstrated to affect subsequent property verification judgments (Pecher et al., 2003). Known as the conceptual modality switch effect, this finding supports the claim that brain systems for perception and action help subserve the representation of concepts. The present study addressed the cognitive and neural substrate of this effect by recording event-related potentials (ERPs) as participants performed a property verification task with visual or auditory properties in key trials. We found that for visual property verifications, modality switching was associated with an increased amplitude N400. For auditory verifications, switching led to a larger late positive complex. Observed ERP effects of modality switching suggest property words access perceptual brain systems. Moreover, the timing and pattern of the effects suggest perceptual systems impact the decision-making stage in the verification of auditory properties, and the semantic stage in the verification of visual properties.

  3. Capturing Safety Requirements to Enable Effective Task Allocation Between Humans and Automation in Increasingly Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Neogi, Natasha A.

    2016-01-01

    There is a current drive towards enabling the deployment of increasingly autonomous systems in the National Airspace System (NAS). However, shifting the traditional roles and responsibilities between humans and automation for safety-critical tasks must be managed carefully; otherwise, the current emergent safety properties of the NAS may be disrupted. In this paper, a verification activity is conducted to assess the emergent safety properties of a clearly defined, safety-critical operational scenario whose tasks can be fluidly allocated between human and automated agents. Task allocation role sets were proposed for a human-automation team performing a contingency maneuver in a reduced-crew context. A safety-critical contingency procedure (engine out on takeoff) was modeled in the Soar cognitive architecture, then translated into the Hybrid Input/Output formalism. Verification activities were then performed to determine whether or not the safety properties held over the increasingly autonomous system. These activities led to several key insights regarding the implicit assumptions on agent capability, illustrated the usefulness of task annotations associated with specialized requirements (e.g., communication, timing, etc.), and demonstrated the feasibility of the approach.

  4. Property-driven functional verification technique for high-speed vision system-on-chip processor

    NASA Astrophysics Data System (ADS)

    Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2017-04-01

    The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. The complexity of vision chip verification is also related to the fact that in most vision chip design cycles, extensive effort is focused on optimizing chip metrics such as performance, power, and area, while design functional verification is not explicitly considered at the earlier stages, where the soundest decisions are made. In this paper, we propose a semi-automatic property-driven verification technique in which the implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel-processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while also reducing simulation and debugging overheads.

  5. Perceptual processing affects conceptual processing.

    PubMed

    Van Dantzig, Saskia; Pecher, Diane; Zeelenberg, René; Barsalou, Lawrence W

    2008-04-05

    According to the Perceptual Symbols Theory of cognition (Barsalou, 1999), modality-specific simulations underlie the representation of concepts. A strong prediction of this view is that perceptual processing affects conceptual processing. In this study, participants performed a perceptual detection task and a conceptual property-verification task in alternation. Responses on the property-verification task were slower for those trials that were preceded by a perceptual trial in a different modality than for those that were preceded by a perceptual trial in the same modality. This finding of a modality-switch effect across perceptual processing and conceptual processing supports the hypothesis that perceptual and conceptual representations are partially based on the same systems.

  6. Separating stages of arithmetic verification: An ERP study with a novel paradigm.

    PubMed

    Avancini, Chiara; Soltész, Fruzsina; Szűcs, Dénes

    2015-08-01

    In studies of arithmetic verification, participants typically encounter two operands and they carry out an operation on these (e.g. adding them). Operands are followed by a proposed answer and participants decide whether this answer is correct or incorrect. However, interpretation of results is difficult because multiple parallel, temporally overlapping numerical and non-numerical processes of the human brain may contribute to task execution. In order to overcome this problem here we used a novel paradigm specifically designed to tease apart the overlapping cognitive processes active during arithmetic verification. Specifically, we aimed to separate effects related to detection of arithmetic correctness, detection of the violation of strategic expectations, detection of physical stimulus properties mismatch and numerical magnitude comparison (numerical distance effects). Arithmetic correctness, physical stimulus properties and magnitude information were not task-relevant properties of the stimuli. We distinguished between a series of temporally highly overlapping cognitive processes which in turn elicited overlapping ERP effects with distinct scalp topographies. We suggest that arithmetic verification relies on two major temporal phases which include parallel running processes. Our paradigm offers a new method for investigating specific arithmetic verification processes in detail.

  7. A sharp image or a sharp knife: norms for the modality-exclusivity of 774 concept-property items.

    PubMed

    van Dantzig, Saskia; Cowell, Rosemary A; Zeelenberg, René; Pecher, Diane

    2011-03-01

    According to recent embodied cognition theories, mental concepts are represented by modality-specific sensory-motor systems. Much of the evidence for modality-specificity in conceptual processing comes from the property-verification task. When applying this and other tasks, it is important to select items based on their modality-exclusivity. We collected modality ratings for a set of 387 properties, each of which was paired with two different concepts, yielding a total of 774 concept-property items. For each item, participants rated the degree to which the property could be experienced through five perceptual modalities (vision, audition, touch, smell, and taste). Based on these ratings, we computed a measure of modality exclusivity, the degree to which a property is perceived exclusively through one sensory modality. In this paper, we briefly sketch the theoretical background of conceptual knowledge, discuss the use of the property-verification task in cognitive research, provide our norms and statistics, and validate the norms in a memory experiment. We conclude that our norms are important for researchers studying modality-specific effects in conceptual processing.
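
The exclusivity computation described above can be made concrete. A common formulation of modality exclusivity (Lynott & Connell, 2009) is the range of the five modality ratings divided by their sum; whether these norms use exactly this formula is an assumption of the sketch, and the ratings below are hypothetical.

```python
# Modality exclusivity as range/sum of the five modality ratings --
# an assumed formulation; 0 = fully multimodal, 1 = fully unimodal.

def modality_exclusivity(ratings):
    """ratings: mean ratings for vision, audition, touch, smell, taste."""
    return (max(ratings) - min(ratings)) / sum(ratings)

# Hypothetical ratings for a strongly visual property such as "shiny":
print(round(modality_exclusivity([4.8, 0.3, 0.9, 0.1, 0.1]), 2))  # 0.76
```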

  8. Reachability analysis of real-time systems using time Petri nets.

    PubMed

    Wang, J; Deng, Y; Xu, G

    2000-01-01

    Time Petri nets (TPNs) are a popular Petri net model for specification and verification of real-time systems. A fundamental and most widely applied method for analyzing Petri nets is reachability analysis. The existing technique for reachability analysis of TPNs, however, is not suitable for timing property verification because one cannot derive end-to-end delay in task execution, an important issue for time-critical systems, from the reachability tree constructed using the technique. In this paper, we present a new reachability based analysis technique for TPNs for timing property analysis and verification that effectively addresses the problem. Our technique is based on a concept called clock-stamped state class (CS-class). With the reachability tree generated based on CS-classes, we can directly compute the end-to-end time delay in task execution. Moreover, a CS-class can be uniquely mapped to a traditional state class based on which the conventional reachability tree is constructed. Therefore, our CS-class-based analysis technique is more general than the existing technique. We show how to apply this technique to timing property verification of the TPN model of a command and control (C2) system.
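
The end-to-end delay computation motivating CS-classes can be illustrated with a toy. The sketch below stamps each reachable marking with cumulative clock bounds and reads delay bounds directly off goal states; it assumes an acyclic net and ignores the clock interactions between concurrently enabled transitions that the paper's full state-class construction handles, so it illustrates the idea rather than the algorithm. All names are hypothetical.

```python
from collections import deque

class TPN:
    def __init__(self, transitions):
        # transitions: name -> (consume, produce, eft, lft), where eft/lft
        # are the static earliest/latest firing times of the transition.
        self.transitions = transitions

    def enabled(self, marking):
        for name, (consume, _, _, _) in self.transitions.items():
            if all(marking.get(p, 0) >= n for p, n in consume.items()):
                yield name

    def fire(self, marking, name):
        consume, produce, _, _ = self.transitions[name]
        m = dict(marking)
        for p, n in consume.items():
            m[p] -= n
        for p, n in produce.items():
            m[p] = m.get(p, 0) + n
        return m

def end_to_end_delay(net, initial, goal):
    """(min, max) cumulative delay over all firing paths reaching `goal`."""
    lo, hi = float("inf"), 0.0
    queue = deque([(initial, 0.0, 0.0)])   # (marking, clock_lo, clock_hi)
    while queue:
        marking, clo, chi = queue.popleft()
        if all(marking.get(p, 0) >= n for p, n in goal.items()):
            lo, hi = min(lo, clo), max(hi, chi)
            continue
        for name in net.enabled(marking):
            _, _, eft, lft = net.transitions[name]
            queue.append((net.fire(marking, name), clo + eft, chi + lft))
    return lo, hi

# Two sequential tasks: start -> t1 -> mid -> t2 -> done
net = TPN({
    "t1": ({"start": 1}, {"mid": 1}, 1.0, 3.0),
    "t2": ({"mid": 1},   {"done": 1}, 2.0, 5.0),
})
print(end_to_end_delay(net, {"start": 1}, {"done": 1}))  # (3.0, 8.0)
```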

  9. Verifying visual properties in sentence verification facilitates picture recognition memory.

    PubMed

    Pecher, Diane; Zanolie, Kiki; Zeelenberg, René

    2007-01-01

    According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. We investigated whether recognition memory for pictures of concepts was facilitated by earlier representation of visual properties of those concepts. During study, concept names (e.g., apple) were presented in a property verification task with a visual property (e.g., shiny) or with a nonvisual property (e.g., tart). Delayed picture recognition memory was better if the concept name had been presented with a visual property than if it had been presented with a nonvisual property. These results indicate that modality-specific simulations are used for concept representation.

  10. Security Verification Techniques Applied to PatchLink COTS Software

    NASA Technical Reports Server (NTRS)

    Gilliam, David P.; Powell, John D.; Bishop, Matt; Andrew, Chris; Jog, Sameer

    2006-01-01

    Verification of the security of software artifacts is a challenging task. An integrated approach that combines verification techniques can increase the confidence in the security of software artifacts. Such an approach has been developed by the Jet Propulsion Laboratory (JPL) and the University of California at Davis (UC Davis). Two security verification instruments were developed and then piloted on PatchLink's UNIX Agent, a Commercial-Off-The-Shelf (COTS) software product, to assess the value of the instruments and the approach. The two instruments are the Flexible Modeling Framework (FMF) -- a model-based verification instrument (JPL), and a Property-Based Tester (UC Davis). Security properties were formally specified for the COTS artifact and then verified using these instruments. The results were then reviewed to determine the effectiveness of the approach and the security of the COTS product.

  11. Dynamic analysis methods for detecting anomalies in asynchronously interacting systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Akshat; Solis, John Hector; Matschke, Benjamin

    2014-01-01

    Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.
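
The first approach, a small component that randomly samples run-time traces and checks design-level properties, can be sketched as follows; the event names, sampling rate, and mutual-exclusion property are illustrative assumptions, not taken from the paper.

```python
import random

def violates_safety(trace):
    """Safety property: no second 'acquire' before a 'release'."""
    held = False
    for event in trace:
        if event == "acquire":
            if held:
                return True
            held = True
        elif event == "release":
            held = False
    return False

def monitor(query_trace, on_violation, samples=100):
    # Randomly query the implementation's trace; trigger the
    # pre-specified fail-safe behavior on any detected deviation.
    for _ in range(samples):
        if random.random() < 0.1:          # sample ~10% of the time
            if violates_safety(query_trace()):
                on_violation()
                return

# Hypothetical usage with a stubbed implementation:
log = ["acquire", "release", "acquire", "acquire"]  # faulty trace
monitor(lambda: log, lambda: print("fail-safe triggered"))
```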

  12. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
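
The decomposition described here rests on the classic non-circular assume-guarantee rule: if component M1 satisfies property P under assumption A, and component M2 discharges A, then their composition satisfies P. In the notation standard in this line of work:

```latex
% Non-circular assume-guarantee rule: premises above the line,
% conclusion below. A is the generated component assumption.
\[
\frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
      \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
\]
```

The abstract's contribution is generating A automatically from design-level models and then reusing it to decompose checking of the source code.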

  13. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study, which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), a survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data for validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification database impact. An annotated bibliography of all documents generated during this study is provided.

  14. Verification of the FtCayuga fault-tolerant microprocessor system. Volume 2: Formal specification and correctness theorems

    NASA Technical Reports Server (NTRS)

    Bickford, Mark; Srivas, Mandayam

    1991-01-01

    Presented here is a formal specification and verification of a property of a quadruplicately redundant fault-tolerant microprocessor system design. A complete listing of the formal specification of the system and the correctness theorems that were proved is given. The system performs the task of obtaining interactive consistency among the processors using a special instruction on the processors. The design is based on an algorithm proposed by Pease, Shostak, and Lamport. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. The verification was performed using a computer-aided design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault-tolerant hardware design that is mechanically verified by a theorem prover.

  15. Verification of the FtCayuga fault-tolerant microprocessor system. Volume 1: A case study in theorem prover-based verification

    NASA Technical Reports Server (NTRS)

    Srivas, Mandayam; Bickford, Mark

    1991-01-01

    The design and formal verification of a hardware system for a task that is an important component of a fault-tolerant computer architecture for flight control systems is presented. The hardware system implements an algorithm for obtaining interactive consistency (Byzantine agreement) among four microprocessors as a special instruction on the processors. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. An assumption is made that the processors execute synchronously. For verification, the authors used a computer-aided hardware design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault-tolerant hardware design that is mechanically verified by a theorem prover.

  16. Perceptual Processing Affects Conceptual Processing

    ERIC Educational Resources Information Center

    van Dantzig, Saskia; Pecher, Diane; Zeelenberg, Rene; Barsalou, Lawrence W.

    2008-01-01

    According to the Perceptual Symbols Theory of cognition (Barsalou, 1999), modality-specific simulations underlie the representation of concepts. A strong prediction of this view is that perceptual processing affects conceptual processing. In this study, participants performed a perceptual detection task and a conceptual property-verification task…

  17. Sensorimotor simulations underlie conceptual representations: modality-specific effects of prior activation.

    PubMed

    Pecher, Diane; Zeelenberg, René; Barsalou, Lawrence W

    2004-02-01

    According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. Simulations are componential in the sense that they vary with the context in which the concept is presented. In the present study, we investigated whether representations are affected by recent experiences with a concept. Concept names (e.g., APPLE) were presented twice in a property verification task with a different property on each occasion. The two properties were either from the same perceptual modality (e.g., green, shiny) or from different modalities (e.g., tart, shiny). All stimuli were words. There was a lag of several intervening trials between the first and second presentation. Verification times and error rates for the second presentation of the concept were higher if the properties were from different modalities than if they were from the same modality.

  18. Application of Lightweight Formal Methods to Software Security

    NASA Technical Reports Server (NTRS)

    Gilliam, David P.; Powell, John D.; Bishop, Matt

    2005-01-01

    Formal specification and verification of security has proven a challenging task. There is no single method that has proven feasible. Instead, an integrated approach which combines several formal techniques can increase the confidence in the verification of software security properties. Such an approach, which specifies security properties in a library that can be reused, and two instruments and their methodologies developed for the National Aeronautics and Space Administration (NASA) at the Jet Propulsion Laboratory (JPL), are described herein. The Flexible Modeling Framework (FMF) is a model-based verification instrument that uses Promela and the SPIN model checker. The Property Based Tester (PBT) uses TASPEC and a Test Execution Monitor (TEM). They are used to reduce vulnerabilities and unwanted exposures in software during the development and maintenance life cycles.

  19. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.

  20. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1989-01-01

    Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation, and system testing verification. During this period, specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture, centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are: INJTOO - main injector; MCHBOO - main chamber; NOZLOO - nozzle thrust calculations; PBRNOO - preburner; PIPE02 - compressible flow without inertia; PUMPOO - polytropic pump; ROTROO - rotor torque balance/speed derivative; and TURBOO - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed, including combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor. All functions necessary for multiple-module operation were completed, but the SOLVER implementation is still under development. This system, the Verification Checkout Facility (VCF), allows interactive comparison of module results to stored data as well as providing an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE model.

  21. Space shuttle propellant constitutive law verification tests

    NASA Technical Reports Server (NTRS)

    Thompson, James R.

    1995-01-01

    As part of the Propellants Task (Task 2.0) on the Solid Propulsion Integrity Program (SPIP), a database of material properties was generated for the Space Shuttle Redesigned Solid Rocket Motor (RSRM) PBAN-based propellant. A parallel effort on the Propellants Task was the generation of an improved constitutive theory for the PBAN propellant suitable for use in a finite element analysis (FEA) of the RSRM. The outcome of an analysis with the improved constitutive theory would be a more reliable prediction of structural margins of safety. The work described in this report was performed by Materials Laboratory personnel at Thiokol Corporation/Huntsville Division under NASA contract NAS8-39619, Mod. 3. The report documents the test procedures for the refinement and verification tests for the improved Space Shuttle RSRM propellant material model and summarizes the resulting test data. TP-H1148 propellant obtained from mix E660411 (manufactured February 1989), which had experienced ambient igloo storage in Huntsville, Alabama since January 1990, was used for these tests.

  22. [Determinants of task preferences when performance is indicative of individual characteristics: self-assessment motivation and self-verification motivation].

    PubMed

    Numazaki, M; Kudo, E

    1995-04-01

    The present study was conducted to examine determinants of information-gathering behavior with regard to one's own characteristics. Four tasks with different self-congruent and self-incongruent diagnosticity were presented to subjects. As self-assessment theory predicted, high-diagnosticity tasks were preferred to low-diagnosticity tasks. And as self-verification theory predicted, self-congruent diagnosticity had a stronger effect on task preference than self-incongruent diagnosticity. In addition, subjects who perceived the relevant characteristics as important were more inclined to choose self-assessment behavior than those who did not, and subjects who were certain of their self-concept were more inclined to choose self-verification behavior than those who were not. These results suggest that both self-assessment and self-verification motivations play important roles in information-gathering behavior regarding one's characteristics, and that the strength of these motivations is determined by the importance of the relevant characteristics or the certainty of the self-concept.

  23. Dissociation between arithmetic relatedness and distance effects is modulated by task properties: an ERP study comparing explicit vs. implicit arithmetic processing.

    PubMed

    Avancini, Chiara; Galfano, Giovanni; Szűcs, Dénes

    2014-12-01

    Event-related potential (ERP) studies have detected several characteristic consecutive amplitude modulations in both implicit and explicit mental arithmetic tasks. Implicit tasks typically focused on the arithmetic relatedness effect (in which performance is affected by semantic associations between numbers) while explicit tasks focused on the distance effect (in which performance is affected by the numerical difference of to-be-compared numbers). Both task types elicit morphologically similar ERP waves which were explained in functionally similar terms. However, to date, the relationship between these tasks has not been investigated explicitly and systematically. In order to fill this gap, here we examined whether ERP effects and their underlying cognitive processes in implicit and explicit mental arithmetic tasks differ from each other. The same group of participants performed both an implicit number-matching task (in which arithmetic knowledge is task-irrelevant) and an explicit arithmetic-verification task (in which arithmetic knowledge is task-relevant). 129-channel ERP data differed substantially between tasks. In the number-matching task, the arithmetic relatedness effect appeared as a negativity over left-frontal electrodes whereas the distance effect was more prominent over right centro-parietal electrodes. In the verification task, all probe types elicited similar N2b waves over right fronto-central electrodes and typical centro-parietal N400 effects over central electrodes. The distance effect appeared as an early-rising, long-lasting left parietal negativity. We suggest that ERP effects in the implicit task reflect access to semantic memory networks and to magnitude discrimination, respectively. In contrast, effects of expectation violation are more prominent in explicit tasks and may mask more delicate cognitive processes.

  24. Dissociation between arithmetic relatedness and distance effects is modulated by task properties: An ERP study comparing explicit vs. implicit arithmetic processing

    PubMed Central

    Avancini, Chiara; Galfano, Giovanni; Szűcs, Dénes

    2014-01-01

    Event-related potential (ERP) studies have detected several characteristic consecutive amplitude modulations in both implicit and explicit mental arithmetic tasks. Implicit tasks typically focused on the arithmetic relatedness effect (in which performance is affected by semantic associations between numbers) while explicit tasks focused on the distance effect (in which performance is affected by the numerical difference of to-be-compared numbers). Both task types elicit morphologically similar ERP waves which were explained in functionally similar terms. However, to date, the relationship between these tasks has not been investigated explicitly and systematically. In order to fill this gap, here we examined whether ERP effects and their underlying cognitive processes in implicit and explicit mental arithmetic tasks differ from each other. The same group of participants performed both an implicit number-matching task (in which arithmetic knowledge is task-irrelevant) and an explicit arithmetic-verification task (in which arithmetic knowledge is task-relevant). 129-channel ERP data differed substantially between tasks. In the number-matching task, the arithmetic relatedness effect appeared as a negativity over left-frontal electrodes whereas the distance effect was more prominent over right centro-parietal electrodes. In the verification task, all probe types elicited similar N2b waves over right fronto-central electrodes and typical centro-parietal N400 effects over central electrodes. The distance effect appeared as an early-rising, long-lasting left parietal negativity. We suggest that ERP effects in the implicit task reflect access to semantic memory networks and to magnitude discrimination, respectively. In contrast, effects of expectation violation are more prominent in explicit tasks and may mask more delicate cognitive processes. PMID:25450162

  25. Gender, Legitimation, and Identity Verification in Groups

    ERIC Educational Resources Information Center

    Burke, Peter J.; Stets, Jan E.; Cerven, Christine

    2007-01-01

    Drawing upon identity theory, expectation states theory, and legitimation theory, we examine how the task leader identity in task-oriented groups is more likely to be verified for persons with high status characteristics. We hypothesize that identity verification will be accomplished more readily for male group members and legitimated task leaders…

  26. 75 FR 34439 - Defense Science Board Task Force on Nuclear Treaty Monitoring and Verification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-17

    ... DEPARTMENT OF DEFENSE Office of the Secretary Defense Science Board Task Force on Nuclear Treaty... meetings. SUMMARY: The Defense Science Board Task Force on Nuclear Treaty Monitoring and Verification will... Applications International Corporation, 4001 North Fairfax Drive, Suite 300, Arlington, VA. FOR FURTHER...

  27. A Verification Method of Inter-Task Cooperation in Embedded Real-time Systems and its Evaluation

    NASA Astrophysics Data System (ADS)

    Yoshida, Toshio

    In the software development process for embedded real-time systems, the design of the task cooperation process is very important. The cooperating process of such tasks is specified by task cooperation patterns. Adoption of unsuitable task cooperation patterns has a fatal influence on system performance, quality, and extendibility. In order to prevent repetitive work caused by a shortage of task cooperation performance, it is necessary to verify task cooperation patterns at an early software development stage. However, it is very difficult to verify task cooperation patterns at an early development stage, when task program codes are not yet complete. Therefore, we propose a verification method using task skeleton program codes and a real-time kernel that has a function for recording all events during software execution, such as system calls issued by task program codes, external interrupts, and timer interrupts. In order to evaluate the proposed verification method, we applied it to the software development process of a mechatronics control system.

  28. Using software security analysis to verify the secure socket layer (SSL) protocol

    NASA Technical Reports Server (NTRS)

    Powell, John D.

    2004-01-01

    The National Aeronautics and Space Administration (NASA) has tens of thousands of networked computer systems and applications. Software security vulnerabilities present risks such as lost or corrupted data, information theft, and unavailability of critical systems. These risks represent potentially enormous costs to NASA. The NASA Code Q research initiative 'Reducing Software Security Risk (RSSR) Through an Integrated Approach' offers, among its capabilities, formal verification of software security properties through the use of model-based verification (MBV) to address software security risks. [1,2,3,4,5,6] MBV is a formal approach to software assurance that combines analysis of software, via abstract models, with technology, such as model checkers, that provides automation of the mechanical portions of the analysis process. This paper will discuss: the need for formal analysis to assure software systems with respect to security and why testing alone cannot provide it; the means by which MBV with a Flexible Modeling Framework (FMF) accomplishes the necessary analysis task; and an example of FMF-style MBV in the verification of properties over the Secure Socket Layer (SSL) communication protocol as a demonstration.

  29. Attention and implicit memory in the category-verification and lexical decision tasks.

    PubMed

    Mulligan, Neil W; Peterson, Daniel

    2008-05-01

    Prior research on implicit memory appeared to support three generalizations: conceptual tests are affected by divided attention, perceptual tasks are affected by certain divided-attention manipulations, and all types of priming are affected by selective attention. These generalizations are challenged in experiments using the implicit tests of category verification and lexical decision. First, both tasks were unaffected by divided-attention tasks known to impact other priming tasks. Second, both tasks were unaffected by a manipulation of selective attention in which colored words were either named or their colors identified. Thus, category verification, unlike other conceptual tasks, appears unaffected by divided attention and some selective-attention tasks; and lexical decision, unlike other perceptual tasks, appears unaffected by a difficult divided-attention task and some selective-attention tasks. Finally, both tasks were affected by a selective-attention task in which attention was manipulated across objects (rather than within objects), indicating some susceptibility to selective attention. The results contradict an analysis on the basis of the conceptual-perceptual distinction and other more specific hypotheses, but are consistent with the distinction between production and identification priming.

  30. 75 FR 43943 - Defense Science Board; Task Force on Nuclear Treaty Monitoring and Verification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ... DEPARTMENT OF DEFENSE Office of the Secretary Defense Science Board; Task Force on Nuclear Treaty... meetings. SUMMARY: The Defense Science Board Task Force on Nuclear Treaty Monitoring and Verification will... held September 13-14, and 25-26, 2010. ADDRESSES: The meetings will be held at Science Applications...

  31. Sentence Verification, Sentence Recognition, and the Semantic-Episodic Distinction

    ERIC Educational Resources Information Center

    Shoben, Edward J.; And Others

    1978-01-01

    In an attempt to assess the validity of the distinction between episodic and semantic memory, this research examined the influence of two variables on sentence verification (presumably a semantic memory task) and sentence recognition (presumably an episodic memory task). (Editor)

  32. Using Automated Theorem Provers to Certify Auto-Generated Aerospace Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd; Schumann, Johann

    2004-01-01

    We describe a system for the automated certification of safety properties of NASA software. The system uses Hoare-style program verification technology to generate proof obligations which are then processed by an automated first-order theorem prover (ATP). For full automation, however, the obligations must be aggressively preprocessed and simplified. We describe the unique requirements this places on the ATP and demonstrate how the individual simplification stages, which are implemented by rewriting, influence the ability of the ATP to solve the proof tasks. Experiments on more than 25,000 tasks were carried out using Vampire, Spass, and e-setheo.
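
The pipeline this record describes (annotated code in, first-order proof obligations out) can be sketched minimally. The toy below computes weakest preconditions for straight-line assignments by textual substitution and emits a single implication as an ATP task; the textual-substitution shortcut and all names are illustrative, not the paper's machinery.

```python
import re

def wp(statements, post):
    """Weakest precondition of a list of (var, expr) assignments."""
    formula = post
    for var, expr in reversed(statements):
        # substitute expr for whole-word occurrences of var
        formula = re.sub(rf"\b{var}\b", f"({expr})", formula)
    return formula

# {x >= 0}  y := x + 1; z := y * 2  {z >= 2}
body = [("y", "x + 1"), ("z", "y * 2")]
obligation = f"(x >= 0) => ({wp(body, 'z >= 2')})"
print(obligation)   # (x >= 0) => (((x + 1) * 2) >= 2)
```

The printed implication is the kind of proof obligation that would then be handed to a prover such as Vampire or Spass.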

  33. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting.

    PubMed

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-11-01

    Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required.
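
The reported contrasts can be checked against the raw counts in the abstract. For the syringe-volume task (16/18 errors before vs 11/19 after), a one-sided Fisher's exact test on the implied 2x2 table reproduces the reported p=0.038, while a two-sided test gives roughly 0.06, suggesting the tests were directional; both the table layout and the sidedness are assumptions here.

```python
from scipy import stats

# Assumed 2x2 layout: rows = pre/post intervention,
# columns = nurses who erred / did not err.
table = [[16, 2],   # preintervention:  16 of 18 nurses erred
         [11, 8]]   # postintervention: 11 of 19 nurses erred
odds_ratio, p = stats.fisher_exact(table, alternative="greater")
print(round(p, 3))  # 0.038, matching the value reported in the abstract
```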

  34. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting

    PubMed Central

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-01-01

    Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806

  35. The Interaction between Surface Color and Color Knowledge: Behavioral and Electrophysiological Evidence

    ERIC Educational Resources Information Center

    Bramao, Ines; Faisca, Luis; Forkstam, Christian; Inacio, Filomena; Araujo, Susana; Petersson, Karl Magnus; Reis, Alexandra

    2012-01-01

    In this study, we used event-related potentials (ERPs) to evaluate the contribution of surface color and color knowledge information in object identification. We constructed two color-object verification tasks--a surface and a knowledge verification task--using high color diagnostic objects; both typical and atypical color versions of the same…

  36. The formal verification of generic interpreters

    NASA Technical Reports Server (NTRS)

    Windley, P.; Levitt, K.; Cohen, G. C.

    1991-01-01

    Task assignment 3 of the design and validation of digital flight control systems suitable for fly-by-wire applications is studied. Task 3 is associated with the formal verification of embedded systems. In particular, results are presented that provide a methodological approach to microprocessor verification. A hierarchical decomposition strategy for specifying microprocessors is also presented. A theory of generic interpreters is presented that can be used to model microprocessor behavior. The generic interpreter theory abstracts away the details of instruction functionality, leaving a general model of what an interpreter does.

  37. RF model of the distribution system as a communication channel, phase 2. Volume 2: Task reports

    NASA Technical Reports Server (NTRS)

    Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.

    1982-01-01

    Based on the established feasibility of predicting, via a model, the propagation of Power Line Frequency on radial-type distribution feeders, verification studies comparing model predictions against measurements were undertaken using more complicated feeder circuits and situations. Detailed accounts of the major tasks are presented. These include: (1) verification of the model; (2) extension, implementation, and verification of perturbation theory; (3) parameter sensitivity; (4) transformer modeling; and (5) compensation of power distribution systems for enhancement of power line carrier communication reliability.

  38. Validation of a finite element method framework for cardiac mechanics applications

    NASA Astrophysics Data System (ADS)

    Danan, David; Le Rolle, Virginie; Hubert, Arnaud; Galli, Elena; Bernard, Anne; Donal, Erwan; Hernández, Alfredo I.

    2017-11-01

    Modeling cardiac mechanics is a particularly challenging task, mainly because of the poor understanding of the underlying physiology, the lack of observability, and the complexity of the mechanical properties of myocardial tissues. The choice of cardiac mechanics solvers, in particular, presents several difficulties, notably the potential instability arising from the nonlinearities inherent to the large-deformation framework. Furthermore, verifying the obtained simulations is difficult because there are no analytic solutions for these kinds of problems. Hence, the objective of this work is to provide a quantitative verification of a cardiac mechanics implementation based on two published benchmark problems. The first problem consists in deforming a bar, whereas the second problem concerns the inflation of a truncated ellipsoid-shaped ventricle, both in the steady-state case. Simulations were obtained using the finite element software GETFEM++. Results were compared to the consensus solution published by 11 groups, and the proposed solutions were indistinguishable from it. The validation of the proposed mechanical model implementation is an important step toward the proposition of a global model of cardiac electro-mechanical activity.

  39. A common neural substrate for perceiving and knowing about color

    PubMed Central

    Simmons, W. Kyle; Ramjee, Vimal; Beauchamp, Michael S.; McRae, Ken; Martin, Alex; Barsalou, Lawrence W.

    2013-01-01

    Functional neuroimaging research has demonstrated that retrieving information about object-associated colors activates the left fusiform gyrus in posterior temporal cortex. Although regions near the fusiform have previously been implicated in color perception, it remains unclear whether color knowledge retrieval actually activates the color perception system. Evidence to this effect would be particularly strong if color perception cortex was activated by color knowledge retrieval triggered strictly with linguistic stimuli. To address this question, subjects performed two tasks while undergoing fMRI. First, subjects performed a property verification task using only words to assess conceptual knowledge. On each trial, subjects verified whether a named color or motor property was true of a named object (e.g., TAXI-yellow, HAIR-combed). Next, subjects performed a color perception task. A region of the left fusiform gyrus that was highly responsive during color perception also showed greater activity for retrieving color than motor property knowledge. These data provide the first evidence for a direct overlap in the neural bases of color perception and stored information about object-associated color, and they significantly add to accumulating evidence that conceptual knowledge is grounded in the brain’s modality-specific systems. PMID:17575989

  40. A common neural substrate for perceiving and knowing about color.

    PubMed

    Simmons, W Kyle; Ramjee, Vimal; Beauchamp, Michael S; McRae, Ken; Martin, Alex; Barsalou, Lawrence W

    2007-09-20

    Functional neuroimaging research has demonstrated that retrieving information about object-associated colors activates the left fusiform gyrus in posterior temporal cortex. Although regions near the fusiform have previously been implicated in color perception, it remains unclear whether color knowledge retrieval actually activates the color perception system. Evidence to this effect would be particularly strong if color perception cortex was activated by color knowledge retrieval triggered strictly with linguistic stimuli. To address this question, subjects performed two tasks while undergoing fMRI. First, subjects performed a property verification task using only words to assess conceptual knowledge. On each trial, subjects verified whether a named color or motor property was true of a named object (e.g., TAXI-yellow, HAIR-combed). Next, subjects performed a color perception task. A region of the left fusiform gyrus that was highly responsive during color perception also showed greater activity for retrieving color than motor property knowledge. These data provide the first evidence for a direct overlap in the neural bases of color perception and stored information about object-associated color, and they significantly add to accumulating evidence that conceptual knowledge is grounded in the brain's modality-specific systems.

  41. Two-Level Verification of Data Integrity for Data Storage in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping

    Data storage in cloud computing can save capital expenditure and relieve users of the burden of storage management. As loss or corruption of stored files may happen, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verifying tasks for the auditor. Moreover, users also need to pay an extra fee for these verifying tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is to routinely verify the data integrity by users and arbitrate the challenge between the user and the cloud provider by the auditor according to the MACs and ϕ values. Extensive performance simulations show that the proposed scheme obviously decreases the auditor's verifying tasks and the ratio of wrong arbitrations.
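
The user-side first verification level can be sketched with standard MACs from the Python standard library; key management, the ϕ values, and the auditor's arbitration protocol from the paper are omitted, and the helper names are hypothetical.

```python
import hmac, hashlib, os

def make_tags(key, blocks):
    """Compute one HMAC-SHA256 tag per stored file block."""
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def verify_block(key, block, tag):
    """Recompute the MAC of a fetched block and compare in constant time."""
    return hmac.compare_digest(hmac.new(key, block, hashlib.sha256).digest(), tag)

key = os.urandom(32)
blocks = [b"file chunk 0", b"file chunk 1"]
tags = make_tags(key, blocks)          # kept by the user, not the provider

# Later: fetch a block from the cloud and check it before trusting it;
# a mismatch is what would be escalated to the auditor for arbitration.
fetched = b"file chunk 1"              # what the provider returned
print("block intact" if verify_block(key, fetched, tags[1])
      else "escalate to auditor")
```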

  42. Software Tools for Formal Specification and Verification of Distributed Real-Time Systems.

    DTIC Science & Technology

    1997-09-30

    ...set of software tools for specification and verification of distributed real-time systems using formal methods. The task of this SBIR Phase II effort...to be used by designers of real-time systems for early detection of errors. The mathematical complexity of formal specification and verification has...

  43. Computer-aided injection molding system

    NASA Astrophysics Data System (ADS)

    Wang, K. K.; Shen, S. F.; Cohen, C.; Hieber, C. A.; Isayev, A. I.

    1982-10-01

    Achievements are reported in cavity-filling simulation, modeling viscoelastic effects, measuring and predicting frozen-in birefringence in molded parts, measuring residual stresses and associated mechanical properties of molded parts, and developing an interactive mold-assembly design program and an automatic NC machining data generation and verification program. The Cornell Injection Molding Program (CIMP) consortium is discussed, as are computer user manuals that have been published by the consortium. Major tasks which should be addressed in future efforts are listed, including: (1) predict and experimentally determine the post-filling behavior of thermoplastics; (2) simulate and experimentally investigate the injection molding of thermosets and filled materials; and (3) further investigate residual stresses, orientation, and mechanical properties.

  44. Science verification of operational aerosol and cloud products for TROPOMI on Sentinel-5 precursor

    NASA Astrophysics Data System (ADS)

    Lelli, Luca; Gimeno-Garcia, Sebastian; Sanders, Abram; Sneep, Maarten; Rozanov, Vladimir V.; Kokhanovsky, Alexander A.; Loyola, Diego; Burrows, John P.

    2016-04-01

    With the approaching launch of the Sentinel-5 precursor (S-5P) satellite, scheduled for mid 2016, one preparatory task of the L2 working group (composed of the Institute of Environmental Physics IUP Bremen, the Royal Netherlands Meteorological Institute KNMI De Bilt, and the German Aerospace Center DLR Oberpfaffenhofen) has been the assessment of biases among aerosol and cloud products that are going to be inferred by the respective algorithms from measurements of the platform's payload, the TROPOspheric Monitoring Instrument (TROPOMI). The instrument will measure terrestrial radiance at varying moderate spectral resolutions from the ultraviolet through the shortwave infrared. Specifically, all the operational and verification algorithms involved in this comparison exploit the sensitivity of molecular oxygen absorption (the A-band, 755-775 nm, with a resolution of 0.54 nm) to changes in optical and geometrical parameters of tropospheric scattering layers. Therefore, aerosol layer height (ALH) and thickness (AOT), cloud top height (CTH), thickness (COT), and albedo (CA) are the targeted properties. First, the verification of these properties has been accomplished upon synchronisation of the respective forward radiative transfer models for a variety of atmospheric scenarios. Then, biases against independent techniques have been evaluated with real measurements of selected GOME-2 orbits. Global seasonal bias assessment has been carried out for CTH, CA, and COT, whereas the verification of ALH and AOT is based on the analysis of the ash plume emitted during the eruption of the Icelandic volcano Eyjafjallajökull in May 2010 and selected dust scenes off the Saharan west coast sensed by SCIAMACHY in 2009.

  5. International Energy Agency Ocean Energy Systems Task 10 Wave Energy Converter Modeling Verification and Validation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Yu, Yi-Hsiang; Nielsen, Kim

    This is the first joint reference paper for the Ocean Energy Systems (OES) Task 10 Wave Energy Converter modeling verification and validation group. The group is established under the OES Energy Technology Network program under the International Energy Agency. OES was founded in 2001, and Task 10 was proposed by Bob Thresher (National Renewable Energy Laboratory) in 2015 and approved by the OES Executive Committee (EXCO) in 2016. The kickoff workshop took place in September 2016, wherein the initial baseline task was defined. Experience from similar offshore wind validation/verification projects (OC3-OC5, conducted within the International Energy Agency Wind Task 30) [1], [2] showed that a simple test case would help the initial cooperation to present results in a comparable way. A heaving sphere was chosen as the first test case. The team of project participants simulated different numerical experiments, such as heave decay tests and regular and irregular wave cases. The simulation results are presented and discussed in this paper.

  6. Task planning with uncertainty for robotic systems. Thesis

    NASA Technical Reports Server (NTRS)

    Cao, Tiehua

    1993-01-01

    In a practical robotic system, it is important to represent and plan sequences of operations and to be able to choose an efficient sequence from them for a specific task. During the generation and execution of task plans, different kinds of uncertainty may occur, and erroneous states need to be handled to ensure the efficiency and reliability of the system. An approach to task representation, planning, and error recovery for robotic systems is demonstrated. Our approach to task planning is based on an AND/OR net representation, which is then mapped to a Petri net representation of all feasible geometric states and associated feasibility criteria for net transitions. Task decomposition of robotic assembly plans based on this representation is performed on the Petri net for robotic assembly tasks, and the inheritance of the properties of liveness, safeness, and reversibility at all levels of decomposition is explored. This approach provides a framework for robust execution of tasks through the properties of traceability and viability. Uncertainty in robotic systems is modeled by local fuzzy variables, fuzzy marking variables, and global fuzzy variables, which are incorporated in fuzzy Petri nets. Analysis of properties and reasoning about uncertainty are investigated using fuzzy reasoning structures built into the net. Two applications of fuzzy Petri nets, robot task sequence planning and sensor-based error recovery, are explored. In the first application, the search space for feasible and complete task sequences with correct precedence relationships is reduced via the use of global fuzzy variables in reasoning about subgoals. In the second application, sensory verification operations are modeled by mutually exclusive transitions to reason about local and global fuzzy variables on-line and to automatically select a retry or an alternative error recovery sequence when errors occur. Task sequencing and task execution with error recovery capability for one or multiple soft components in robotic systems are investigated.
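
    The record describes the Petri-net machinery abstractly. As a minimal illustration of the firing rule such planners build on (an ordinary Petri net, without the fuzzy variables; the place and transition names are hypothetical):

    ```python
    # Minimal Petri net sketch: places hold tokens, and a transition may
    # fire only when all of its input places are marked.
    class PetriNet:
        def __init__(self, marking: dict[str, int]):
            self.marking = marking            # tokens per place
            self.transitions = {}             # name -> (inputs, outputs)

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)

        def enabled(self, name) -> bool:
            inputs, _ = self.transitions[name]
            return all(self.marking.get(p, 0) > 0 for p in inputs)

        def fire(self, name):
            inputs, outputs = self.transitions[name]
            if not self.enabled(name):
                raise RuntimeError(f"transition {name!r} is not enabled")
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1

    # Two assembly steps: grasp a part, then insert it.
    net = PetriNet({"part_ready": 1, "gripper_free": 1})
    net.add_transition("grasp", ["part_ready", "gripper_free"], ["part_held"])
    net.add_transition("insert", ["part_held"], ["assembled", "gripper_free"])
    net.fire("grasp")
    net.fire("insert")
    print(net.marking)  # {'part_ready': 0, 'gripper_free': 1, 'part_held': 0, 'assembled': 1}
    ```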

  7. Students' Use of Technological Tools for Verification Purposes in Geometry Problem Solving

    ERIC Educational Resources Information Center

    Papadopoulos, Ioannis; Dagdilelis, Vassilios

    2008-01-01

    Despite its importance in mathematical problem solving, verification receives rather little attention by the students in classrooms, especially at the primary school level. Under the hypotheses that (a) non-standard tasks create a feeling of uncertainty that stimulates the students to proceed to verification processes and (b) computational…

  8. Preliminary report for using X-rays as verification and authentication tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esch, Ernst Ingo; Desimone, David J.; Lakis, Rollin Evan

    2016-04-06

    We examined x-rays for use as an authentication and verification tool in treaty verification. Several x-ray pictures were taken to determine the quality and feasibility of x-rays for these tasks. This document describes the capability of the x-ray system used and outlines its parameters and possible uses.

  9. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
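
    The record describes the transformation only in words. A minimal numpy sketch of the constrained affine map it names (the product of an orthogonal matrix and a diagonal matrix, plus an offset), with random matrices standing in for the learned ones; the dimensions, threshold, and initialization are illustrative, not the paper's training procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out = 64, 16

    # Orthogonal factor Q (columns orthonormal, via reduced QR) and diagonal
    # scaling S, as in the constrained affine transformation; b is the offset.
    Q, _ = np.linalg.qr(rng.standard_normal((d_in, d_out)))
    S = np.diag(rng.uniform(0.5, 1.5, size=d_out))
    b = rng.standard_normal(d_out)

    def transform(x: np.ndarray) -> np.ndarray:
        """Affine dimensionality reduction: y = S Q^T x + b."""
        return S @ (Q.T @ x) + b

    def same_person(x1, x2, threshold: float) -> bool:
        """Verification decision: accept if the reduced-space distance
        falls below the (ideally class-balanced) threshold."""
        return np.linalg.norm(transform(x1) - transform(x2)) < threshold

    x1, x2 = rng.standard_normal(d_in), rng.standard_normal(d_in)
    print(same_person(x1, x2, threshold=10.0))
    ```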

  10. Comprehending how visual context influences incremental sentence processing: insights from ERPs and picture-sentence verification

    PubMed Central

    Knoeferle, Pia; Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    To re-establish picture-sentence verification, discredited possibly for its over-reliance on post-sentence response time (RT) measures, as a task for situated comprehension, we collected event-related brain potentials (ERPs) as participants read a subject-verb-object sentence, and RTs indicating whether or not the verb matched a previously depicted action. For mismatches (vs. matches), speeded RTs were longer, verb N400s over centro-parietal scalp larger, and ERPs to the object noun more negative. RTs (congruence effect) correlated inversely with the centro-parietal verb N400s, and positively with the object ERP congruence effects. Verb N400s, object ERPs, and verbal working memory scores predicted more variance in RT effects (50%) than N400s alone. Thus, (1) verification processing is not all post-sentence; (2) simple priming cannot account for these results; and (3) verification tasks can inform studies of situated comprehension. PMID:20701712

  11. Timing analysis by model checking

    NASA Technical Reports Server (NTRS)

    Naydich, Dimitri; Guaspari, David

    2000-01-01

    The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
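
    As an illustration of the approach (not the authors' tool), the sketch below enumerates the reachable states of a toy task model whose clock is capped at one tick past the deadline, the kind of conservative finite approximation described above. The model deliberately contains a schedulability bug (the scheduler may delay the task indefinitely), which the search reports as a counterexample:

    ```python
    from collections import deque

    # Tiny explicit-state model: a task must reach "done" within DEADLINE
    # ticks. Capping the clock at DEADLINE + 1 collapses all larger values,
    # yielding a finite, conservative abstraction of the unbounded clock.
    DEADLINE = 5
    CAP = DEADLINE + 1

    def successors(state):
        status, clock = state
        clock2 = min(clock + 1, CAP)          # cap the clock variable
        if status == "waiting":
            yield ("running", clock2)
            yield ("waiting", clock2)         # scheduler may delay the task
        elif status == "running":
            yield ("done", clock2)

    def violates(state):
        status, clock = state
        return status != "done" and clock > DEADLINE

    # Breadth-first enumeration of the reachable state space.
    seen, frontier = set(), deque([("waiting", 0)])
    while frontier:
        s = frontier.popleft()
        if s in seen:
            continue
        seen.add(s)
        if violates(s):
            print("counterexample state:", s)
            break
        frontier.extend(successors(s))
    else:
        print("property holds over", len(seen), "states")
    ```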

  12. Systematic Model-in-the-Loop Test of Embedded Control Systems

    NASA Astrophysics Data System (ADS)

    Krupp, Alexander; Müller, Wolfgang

    Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides the lack of methodical support for natural-language formalization, there is no standardized and accepted means for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.

  13. An Empirical Evaluation of Automated Theorem Provers in Software Certification

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd; Schumann, Johann

    2004-01-01

    We describe a system for the automated certification of safety properties of NASA software. The system uses Hoare-style program verification technology to generate proof obligations which are then processed by an automated first-order theorem prover (ATP). We discuss the unique requirements this application places on the ATPs, focusing on automation, proof checking, and usability. For full automation, however, the obligations must be aggressively preprocessed and simplified, and we demonstrate how the individual simplification stages, which are implemented by rewriting, influence the ability of the ATPs to solve the proof tasks. Our results are based on 13 certification experiments that lead to more than 25,000 proof tasks which have each been attempted by Vampire, Spass, e-setheo, and Otter. The proofs found by Otter have been proof-checked by IVY.
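
    The record notes that the simplification stages are implemented by rewriting. As an illustration only (the actual rule set and term encoding are not given), a toy rewrite-based simplifier over proof-obligation terms might look like:

    ```python
    # Toy rewrite-based simplifier for proof obligations, represented as
    # nested tuples, e.g. ("and", ("le", "x", "x"), ("lt", "0", "n")).
    # Rules are applied bottom-up (one pass suffices for this rule set),
    # mimicking the preprocessing stages the record says use rewriting.
    def simplify(term):
        if not isinstance(term, tuple):
            return term
        term = tuple(simplify(t) for t in term)   # rewrite subterms first
        if term[0] == "le" and term[1] == term[2]:
            return ("true",)                      # x <= x  ~>  true
        if term[0] == "and":
            _, a, b = term
            if a == ("true",):
                return b                          # true /\ P  ~>  P
            if b == ("true",):
                return a                          # P /\ true  ~>  P
        return term

    obligation = ("and", ("le", "x", "x"), ("lt", "0", "n"))
    print(simplify(obligation))                   # ('lt', '0', 'n')
    ```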

  14. Combining Task Execution and Background Knowledge for the Verification of Medical Guidelines

    NASA Astrophysics Data System (ADS)

    Hommersom, Arjen; Groot, Perry; Lucas, Peter; Balser, Michael; Schmitt, Jonathan

    The use of a medical guideline can be seen as the execution of computational tasks, sequentially or in parallel, in the face of patient data. It has been shown that many such guidelines can be represented as a 'network of tasks', i.e., as a number of steps that have a specific function or goal. To investigate the quality of such guidelines, we propose a formalization of criteria for good-practice medicine with which a guideline should comply. We use this theory in conjunction with medical background knowledge to verify the quality of a guideline dealing with diabetes mellitus type 2, using the interactive theorem prover KIV. Verification using task execution and background knowledge is a novel approach to quality checking of medical guidelines.

  15. Microcode Verification Project.

    DTIC Science & Technology

    1980-05-01

    numerical constant. The internal syntax for these minimum and maximum values is REALMIN and REALMAX. ISPSSIMP is the file simplifying bitstring...To be fair, it is quite clear that much of the labor in the verification task can be reduced if verification and code development are carried out...basis of the language we have chosen for both encoding our descriptions of machines and reasoning about the course of computations. Internally, our

  16. Influence of the Redundant Verification and the Non-Redundant Verification on the Hydraulic Tomography

    NASA Astrophysics Data System (ADS)

    Wei, T. B.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.

    2016-12-01

    In groundwater studies, many researchers have used hydraulic tomography (HT), based on field-site pumping tests, to estimate the heterogeneous spatial distribution of hydraulic properties, confirming that most field-site aquifers exhibit heterogeneous hydrogeological parameter fields. Huang et al. [2011] applied a non-redundant verification analysis (pumping well locations changed, observation wells fixed) in inverse and forward modeling to demonstrate the feasibility of estimating the heterogeneous spatial distribution of hydraulic properties with a steady-state model. The literature to date, however, covers only this steady-state, non-redundant configuration; the various combinations in which pumping wells are fixed or changed in location and observation wells are fixed (redundant verification) or changed (non-redundant verification) have not yet been explored for their influence on the hydraulic tomography method. In this study, we carry out both the redundant and the non-redundant verification method in forward analysis to examine their influence on hydraulic tomography in the transient case, and we apply them to an actual field case at the NYUST campus site to demonstrate the effectiveness of hydraulic tomography and to confirm the feasibility of the inverse and forward analyses. Keywords: Hydraulic Tomography, Redundant Verification, Heterogeneous, Inverse, Forward

  17. Distance Metric Learning Using Privileged Information for Face Verification and Person Re-Identification.

    PubMed

    Xu, Xinxing; Li, Wen; Xu, Dong

    2015-12-01

    In this paper, we propose a new approach to improve face verification and person re-identification in the RGB images by leveraging a set of RGB-D data, in which we have additional depth images in the training data captured using depth cameras such as Kinect. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat the depth features as privileged information, and we formulate this task as a distance metric learning with privileged information problem. Unlike the traditional face verification and person re-identification tasks that only use visual features, we further employ the extra depth features in the training data to improve the learning of distance metric in the training process. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation called ITML with privileged information (ITML+) for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging faces data sets EUROCOM and CurtinFaces for face verification as well as the BIWI RGBD-ID data set for person re-identification demonstrate the effectiveness of our proposed approach.

  18. Verification of monitor unit calculations for non-IMRT clinical radiotherapy: report of AAPM Task Group 114.

    PubMed

    Stern, Robin L; Heaton, Robert; Fraser, Martin W; Goddu, S Murty; Kirby, Thomas H; Lam, Kwok Leung; Molineu, Andrea; Zhu, Timothy C

    2011-01-01

    The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
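
    As a schematic of such a second check (illustrative only; the formula below is a textbook-style simplification and the 5% action level is a placeholder, not TG-114's recommendation):

    ```python
    # Schematic MU second check: an independent, simpler calculation is
    # compared with the treatment planning system's MU, and the percent
    # difference is tested against an action level.
    def mu_simple(dose_cGy, dose_rate_cGy_per_MU=1.0, output_factor=1.0,
                  tpr=1.0, off_axis=1.0):
        """Textbook-style MU estimate: dose / (reference dose rate x factors)."""
        return dose_cGy / (dose_rate_cGy_per_MU * output_factor * tpr * off_axis)

    def within_action_level(mu_primary, mu_check, action_level_pct=5.0):
        diff_pct = 100.0 * abs(mu_primary - mu_check) / mu_primary
        return diff_pct <= action_level_pct, diff_pct

    mu_tps = 214.0                                 # primary (planning system) MU
    mu_ver = mu_simple(200.0, 1.0, 0.98, 0.95)     # independent verification
    ok, diff = within_action_level(mu_tps, mu_ver)
    print(f"verification MU = {mu_ver:.1f}, difference = {diff:.1f}%, pass = {ok}")
    ```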

  19. Expanding the universe of categorical syllogisms: a challenge for reasoning researchers.

    PubMed

    Roberts, Maxwell J

    2005-11-01

    Syllogistic reasoning, in which people identify conclusions from quantified premise pairs, remains a benchmark task whose patterns of data must be accounted for by general theories of deductive reasoning. However, psychologists have confined themselves to administering only the 64 premise pairs historically identified by Aristotle. By utilizing all combinations of negations, the present article identifies an expanded set of 576 premise pairs and gives the valid conclusions that they support. Many of these have interesting properties, and the identification of predictions and their verification will be an important next step for all proponents of such theories.

  20. Verification of the New FAST v8 Capabilities for the Modeling of Fixed-Bottom Offshore Wind Turbines: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barahona, B.; Jonkman, J.; Damiani, R.

    2014-12-01

    Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
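
    SubDyn's implementation is not shown in the record; a generic numpy sketch of the Craig-Bampton reduction it describes (linear finite-element matrices partitioned into interface and interior degrees of freedom, reduced to interface DOFs plus a few fixed-interface modes) might look like:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def craig_bampton(M, K, boundary, n_modes):
        """Reduce (M, K) to boundary DOFs plus fixed-interface modes."""
        n = M.shape[0]
        interior = [d for d in range(n) if d not in boundary]
        ii = np.ix_(interior, interior)
        ib = np.ix_(interior, boundary)

        Psi = -np.linalg.solve(K[ii], K[ib])      # static constraint modes
        _, Phi = eigh(K[ii], M[ii])               # fixed-interface normal modes
        Phi = Phi[:, :n_modes]                    # keep the lowest n_modes

        nb = len(boundary)
        T = np.zeros((n, nb + n_modes))           # Craig-Bampton transformation
        T[boundary, :nb] = np.eye(nb)
        T[interior, :nb] = Psi
        T[interior, nb:] = Phi
        return T.T @ M @ T, T.T @ K @ T           # reduced mass and stiffness

    # Toy 4-DOF spring-mass chain; the two end DOFs form the interface.
    K = np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])
    M = np.eye(4)
    Mr, Kr = craig_bampton(M, K, boundary=[0, 3], n_modes=1)
    print(Mr.shape, Kr.shape)                     # (3, 3) (3, 3)
    ```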

  1. Cross-Language Phonological Activation of Meaning: Evidence from Category Verification

    ERIC Educational Resources Information Center

    Friesen, Deanna C.; Jared, Debra

    2012-01-01

    The study investigated phonological processing in bilingual reading for meaning. English-French and French-English bilinguals performed a category verification task in either their first or second language. Interlingual homophones (words that share phonology across languages but not orthography or meaning) and single language control words served…

  2. Speed and Accuracy in the Processing of False Statements About Semantic Information.

    ERIC Educational Resources Information Center

    Ratcliff, Roger

    1982-01-01

    A standard reaction time procedure and a response signal procedure were used on data from eight experiments on semantic verifications. Results suggest that simple models of the semantic verification task that assume a single yes/no dimension on which discrimination is made are not correct. (Author/PN)

  3. Automation bias and verification complexity: a systematic review.

    PubMed

    Lyell, David; Coiera, Enrico

    2017-03-01

    While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premiere from 1983 to 2015. Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. Tasks were identified and grouped. Task and automation type and presence of multitasking were noted. Each task was rated for its verification complexity. Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears to not be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  4. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement.

    PubMed

    Ivaldi, Serena; Anzalone, Salvatore M; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable.

  5. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement

    PubMed Central

    Ivaldi, Serena; Anzalone, Salvatore M.; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable. PMID:24596554

  6. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-08-30

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper, gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
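
    The paper's estimator equations are not given in the record; a generic recursive least-squares sketch of the kind of real-time identification it describes (the regressor and parameters below form an illustrative mass-friction model, not the space-robot dynamics):

    ```python
    import numpy as np

    # Generic recursive least-squares (RLS) estimator for on-line
    # parameter identification.
    class RLS:
        def __init__(self, n_params, forgetting=1.0):
            self.theta = np.zeros(n_params)       # parameter estimate
            self.P = np.eye(n_params) * 1e3       # estimate covariance
            self.lam = forgetting

        def update(self, phi, y):
            """phi: regressor vector, y: measured output."""
            Pphi = self.P @ phi
            k = Pphi / (self.lam + phi @ Pphi)    # gain
            self.theta += k * (y - phi @ self.theta)
            self.P = (self.P - np.outer(k, Pphi)) / self.lam
            return self.theta

    # Identify mass and viscous friction from force = m*acc + c*vel.
    rng = np.random.default_rng(1)
    est, true_params = RLS(2), np.array([5.0, 0.3])   # [mass, friction]
    for _ in range(200):
        phi = rng.standard_normal(2)                  # [acc, vel] sample
        y = true_params @ phi + 0.01 * rng.standard_normal()
        est.update(phi, y)
    print(est.theta)                                  # ~ [5.0, 0.3]
    ```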

  7. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

    PubMed Central

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper, gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748

  8. Formalization of the Integral Calculus in the PVS Theorem Prover

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    2004-01-01

    The PVS theorem prover is a widely used formal verification tool for the analysis of safety-critical systems. Though the PVS prover is fully equipped to support deduction in a very general logic framework, namely higher-order logic, it must nevertheless be augmented with the definitions and associated theorems for every branch of mathematics and computer science that is used in a verification. This is a formidable task, ultimately requiring the contributions of researchers and developers all over the world. This paper reports on the formalization of the integral calculus in the PVS theorem prover. All of the basic definitions and theorems covered in a first course on integral calculus have been completed. The theory and proofs were based on Rosenlicht's classic text on real analysis and follow the traditional epsilon-delta method. The goal of this work was to provide a practical set of PVS theories that could be used for the verification of hybrid systems that arise in air traffic management systems and other aerospace applications. All of the basic linearity, integrability, boundedness, and continuity properties of the integral calculus were proved. The work culminated in the proof of the Fundamental Theorem of Calculus. There is a brief discussion about why mechanically checked proofs are so much longer than standard mathematics textbook proofs.
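
    For reference, the classical statement the formalization culminates in, in the Rosenlicht-style form the record describes (statement only; the PVS development follows the traditional epsilon-delta method):

    ```latex
    % Fundamental Theorem of Calculus, classical form (assumes an amsthm
    % theorem environment).
    \begin{theorem}[Fundamental Theorem of Calculus]
    Let $f : [a,b] \to \mathbb{R}$ be continuous, and define
    $F(x) = \int_a^x f(t)\,dt$ for $x \in [a,b]$. Then $F$ is differentiable
    on $[a,b]$ with $F'(x) = f(x)$; consequently, for any antiderivative $G$
    of $f$, $\int_a^b f(t)\,dt = G(b) - G(a)$.
    \end{theorem}
    ```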

  9. Shuttle payload interface verification equipment study. Volume 2: Technical document, part 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The technical analysis performed during the shuttle payload interface verification equipment study is reported. It describes: (1) the background and intent of the study; (2) the study approach and philosophy, covering all facets of shuttle payload/cargo integration; (3) shuttle payload integration requirements; (4) preliminary design of the horizontal IVE; (5) the vertical IVE concept; and (6) IVE program development plans, schedule, and cost. Also included is a payload integration analysis task to identify potential uses in addition to payload interface verification.

  10. Clinical Skills Verification in General Psychiatry: Recommendations of the ABPN Task Force on Rater Training

    ERIC Educational Resources Information Center

    Jibson, Michael D.; Broquet, Karen E.; Anzia, Joan Meyer; Beresin, Eugene V.; Hunt, Jeffrey I.; Kaye, David; Rao, Nyapati Raghu; Rostain, Anthony Leon; Sexson, Sandra B.; Summers, Richard F.

    2012-01-01

    Objective: The American Board of Psychiatry and Neurology (ABPN) announced in 2007 that general psychiatry training programs must conduct Clinical Skills Verification (CSV), consisting of observed clinical interviews and case presentations during residency, as one requirement to establish graduates' eligibility to sit for the written certification…

  11. Customized Nudging to Improve FAFSA Completion and Income Verification

    ERIC Educational Resources Information Center

    Page, Lindsay; Castleman, Benjamin L.

    2016-01-01

    For most students from low- or moderate-income families, successfully completing the Free Application for Federal Student Aid (FAFSA) is a crucial gateway on the path to college access. However, FAFSA filing and income verification tasks pose substantial barriers to college access for low-income students. In this paper, the authors report on a…

  12. Verification in Referral-Based Crowdsourcing

    PubMed Central

    Naroditskiy, Victor; Rahwan, Iyad; Cebrian, Manuel; Jennings, Nicholas R.

    2012-01-01

    Online social networks offer unprecedented potential for rallying a large number of people to accomplish a given task. Here we focus on information gathering tasks where rare information is sought through “referral-based crowdsourcing”: the information request is propagated recursively through invitations among members of a social network. Whereas previous work analyzed incentives for the referral process in a setting with only correct reports, misreporting is known to be both pervasive in crowdsourcing applications, and difficult/costly to filter out. A motivating example for our work is the DARPA Red Balloon Challenge where the level of misreporting was very high. In order to undertake a formal study of verification, we introduce a model where agents can exert costly effort to perform verification and false reports can be penalized. This is the first model of verification and it provides many directions for future research, which we point out. Our main theoretical result is the compensation scheme that minimizes the cost of retrieving the correct answer. Notably, this optimal compensation scheme coincides with the winning strategy of the Red Balloon Challenge. PMID:23071530

  13. Formally verifying Ada programs which use real number types

    NASA Technical Reports Server (NTRS)

    Sutherland, David

    1986-01-01

    Formal verification is applied to programs which use real number arithmetic operations (mathematical programs). Formal verification of a program P consists of creating a mathematical model of P, stating the desired properties of P in a formal logical language, and proving that the mathematical model has the desired properties using a formal proof calculus. The development and verification of the mathematical model are discussed.

  14. Transferring control demands across incidental learning tasks – stronger sequence usage in serial reaction task after shortcut option in letter string checking

    PubMed Central

    Gaschler, Robert; Marewski, Julian N.; Wenke, Dorit; Frensch, Peter A.

    2014-01-01

    After incidentally learning about a hidden regularity, participants can either continue to solve the task as instructed or, alternatively, apply a shortcut. Past research suggests that the amount of conflict implied by adopting a shortcut seems to bias the decision for vs. against continuing instruction-coherent task processing. We explored whether this decision might transfer from one incidental learning task to the next. Theories that conceptualize strategy change in incidental learning as a learning-plus-decision phenomenon suggest that high demands to adhere to instruction-coherent task processing in Task 1 will impede shortcut usage in Task 2, whereas low control demands will foster it. We sequentially applied two established incidental learning tasks differing in stimuli, responses and hidden regularity (the alphabet verification task followed by the serial reaction task, SRT). While some participants experienced a complete redundancy in the task material of the alphabet verification task (low demands to adhere to instructions), for others the redundancy was only partial. Thus, shortcut application would have led to errors (high demands to follow instructions). The low control demand condition showed the strongest usage of the fixed and repeating sequence of responses in the SRT. The transfer results are in line with the learning-plus-decision view of strategy change in incidental learning, rather than with resource theories of self-control. PMID:25506336

  15. Task Listings Resulting from the Vocational Competency Measures Project. Memorandum Report.

    ERIC Educational Resources Information Center

    American Institutes for Research in the Behavioral Sciences, Palo Alto, CA.

    This memorandum report consists of 14 task listings resulting from the Vocational Competency Measures Project. (The Vocational Competency Measures Project was a test development project that involved the writing and verification of task listings for 14 vocational occupational areas through over 225 interviews conducted in 27 states.) Provided in…

  16. Swarm Verification

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex

    2008-01-01

    Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial "search the needle in a haystack" type. Such problems can often be parallelized easily. Alas, none of the usual divide and conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.

  17. Formal methods for dependable real-time systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    The motivation for using formal methods to specify and reason about real-time properties is outlined, and approaches that have been proposed and used are sketched. The formal verifications of clock synchronization algorithms show that mechanically supported reasoning about complex real-time behavior is feasible. However, there has been a significant increase in the effectiveness of verification systems since those verifications were performed, and it is to be expected that verifications of comparable difficulty will become fairly routine. The current challenge lies in developing perspicuous and economical approaches to the formalization and specification of real-time properties.

  18. Cooperative Networked Control of Dynamical Peer-to-Peer Vehicle Systems

    DTIC Science & Technology

    2007-12-28

    dynamic deployment and task allocation; verification and hybrid systems; and information management for cooperative control. The activity of the... [table-of-contents fragment: 5.3 Decidability Results on Discrete and Hybrid Systems; 5.4 Switched Systems] ...solved. Verification and hybrid systems: the program has produced significant advances in the theory of hybrid input-output automata (HIOA) and the

  19. Fostering group identification and creativity in diverse groups: the role of individuation and self-verification.

    PubMed

    Swann, William B; Kwan, Virginia S Y; Polzer, Jeffrey T; Milton, Laurie P

    2003-11-01

    A longitudinal study examined the interplay of identity negotiation processes and diversity in small groups of master's of business administration (MBA) students. When perceivers formed relatively positive impressions of other group members, higher diversity predicted more individuation of targets. When perceivers formed relatively neutral impressions of other group members, however, higher diversity predicted less individuation of targets. Individuation at the outset of the semester predicted self-verification effects several weeks later, and self-verification, in turn, predicted group identification and creative task performance. The authors conclude that contrary to self-categorization theory, fostering individuation and self-verification in diverse groups may maximize group identification and productivity.

  20. Deductive Evaluation: Implicit Code Verification With Low User Burden

    NASA Technical Reports Server (NTRS)

    Di Vito, Ben L.

    2016-01-01

    We describe a framework for symbolically evaluating C code using a deductive approach that discovers and proves program properties. The framework applies Floyd-Hoare verification principles in its treatment of loops, with a library of iteration schemes serving to derive loop invariants. During evaluation, theorem proving is performed on-the-fly, obviating the generation of verification conditions normally needed to establish loop properties. A PVS-based prototype is presented along with results for sample C functions.
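
    The record gives no concrete example; to illustrate the Floyd-Hoare treatment of loops it builds on, consider a candidate invariant for a summation loop, here merely checked at runtime in Python (a deductive evaluator would prove it instead; the function and invariant are hypothetical):

    ```python
    # For sum-of-0..n-1, the invariant 2*total == i*(i-1) holds on entry,
    # is preserved by each iteration, and at exit (i == n) yields the
    # postcondition 2*total == n*(n-1).
    def invariant(total, i):
        return 2 * total == i * (i - 1)

    def summation(n):
        total, i = 0, 0
        assert invariant(total, i)          # invariant holds on entry
        while i < n:
            total += i
            i += 1
            assert invariant(total, i)      # preserved by the loop body
        assert 2 * total == n * (n - 1)     # postcondition at exit
        return total

    print(summation(10))                    # 45
    ```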

  1. Effect of verification cores on tip capacity of drilled shafts.

    DOT National Transportation Integrated Search

    2009-02-01

    This research addressed two key issues: 1) Will verification core holes fill during concrete backfilling? If so, what are the mechanical properties of the filling material? In dry conditions, verification core holes always completely fill with c...

  2. Review of waste package verification tests. Semiannual report, October 1982-March 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soo, P.

    1983-08-01

    The current study is part of an ongoing task to specify tests that may be used to verify that engineered waste package/repository systems comply with NRC radionuclide containment and controlled release performance objectives. Work covered in this report analyzes verification tests for borosilicate glass waste forms and bentonite- and zeolite-based packing materials (discrete backfills). 76 references.

  3. Multibody modeling and verification

    NASA Technical Reports Server (NTRS)

    Wiens, Gloria J.

    1989-01-01

    A summary of a ten-week project on flexible multibody modeling, verification, and control is presented. Emphasis was on the need for experimental verification. A literature survey was conducted to gather information on the existence of experimental work related to flexible multibody systems. The first portion of the assigned task encompassed the modeling aspects of flexible multibodies that can undergo large angular displacements. Research in the area of modeling aspects was also surveyed, with special attention given to the component mode approach. Resulting from this is a research plan on various modeling aspects to be investigated over the next year. The relationship between the large angular displacements, boundary conditions, mode selection, and system modes is of particular interest. The other portion of the assigned task was the generation of a test plan for experimental verification of analytical and/or computer analysis techniques used for flexible multibody systems. Based on current and expected frequency ranges of flexible multibody systems to be used in space applications, an initial test article was selected and designed. A preliminary TREETOPS computer analysis was run to ensure frequency content in the low frequency range, 0.1 to 50 Hz. The initial specifications of experimental measurement and instrumentation components were also generated. Resulting from this effort is the initial multi-phase plan for a Ground Test Facility of Flexible Multibody Systems for Modeling Verification and Control. The plan focuses on the Multibody Modeling and Verification (MMV) Laboratory. General requirements of the Unobtrusive Sensor and Effector (USE) and the Robot Enhancement (RE) laboratories were considered during the laboratory development.

  4. Mars 2020 Model Based Systems Engineering Pilot

    NASA Technical Reports Server (NTRS)

    Dukes, Alexandra Marie

    2017-01-01

    The pilot study is led by the Integration Engineering group in NASA's Launch Services Program (LSP). The Integration Engineering (IE) group is responsible for managing the interfaces between the spacecraft and launch vehicle. This pilot investigates the utility of Model-Based Systems Engineering (MBSE) with respect to managing and verifying interface requirements. The main objectives of the pilot are to model several key aspects of the Mars 2020 integrated operations and interface requirements based on the design and verification artifacts from the Mars Science Laboratory (MSL), and to demonstrate how MBSE could be used by LSP to gain further insight into the interface between the spacecraft and launch vehicle as well as to enhance how LSP manages the launch service. The pilot began with familiarization with SysML, MagicDraw, and the Mars 2020 and MSL systems through books, tutorials, and NASA documentation. MSL was chosen as the focus of the model since its processes and verifications translate easily to the Mars 2020 mission. The study was further focused by modeling specialized systems and processes within MSL in order to demonstrate the utility of MBSE for the rest of the mission. The systems chosen were the In-Flight Disconnect (IFD) system and the Mass Properties process. The IFD was chosen as a system of focus since it is an interface between the spacecraft and launch vehicle that can demonstrate the usefulness of MBSE from a system perspective. The Mass Properties process was chosen as a process of focus since the verifications for mass properties occur throughout the lifecycle and can demonstrate the usefulness of MBSE from a multi-discipline perspective. Several iterations of both perspectives have been modeled and evaluated. While the pilot study will continue for another two weeks, pros and cons of using MBSE for LSP IE have been identified. One pro of using MBSE is an integrated view of the disciplines, requirements, and verifications leading up to launch. The model allows IE to understand the relationships between disciplines throughout test activities and verifications. Additionally, the relationships between disciplines and integration tasks are generally consistent, so these generic relationships and tasks can be captured and reused across multiple mission models should LSP further pursue MBSE. One con of MBSE is the amount of time it takes upfront to understand MBSE and create a useful model. This upfront cost is heavily discussed in the MBSE literature and is a consistent con throughout the known applications of MBSE. The need to understand SysML and the chosen software also poses the possibility of a "bottleneck," with one person becoming the sole MBSE user in the working group. The utility of MBSE will continue to be evaluated through the remainder of the study. In conclusion, the original objectives of the pilot study were to use artifacts from MSL to model key aspects of Mars 2020 and to demonstrate how MBSE could be used by LSP to gain insight into the spacecraft and launch vehicle interfaces. Progress has been made in modeling and identifying the utility of MBSE for LSP IE and will continue until the pilot study's conclusion in mid-August. The results of this study will produce initial models, modeling instructions and examples, and a summary of MBSE's utility for future use by LSP.

  5. Lageos assembly operation plan

    NASA Technical Reports Server (NTRS)

    Brueger, J.

    1975-01-01

    Guidelines, constraints, and procedures for LAGEOS assembly, operation, and design performance are given. Special attention was given to thermal, optical, and dynamic analysis and testing. The operation procedures illustrate the interrelation and sequence of tasks in a flow diagram. The diagram also includes quality assurance functions for verification of operation tasks.

  6. Verification of models for ballistic movement time and endpoint variability.

    PubMed

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data of movement time and endpoint variability were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicted more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smartphone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
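
    As an illustration of fitting such a model (the square-root-of-amplitude form below is the one associated with Gan and Hoffmann's ballistic movement time law; the data points are made up for the example):

    ```python
    import numpy as np

    # Ballistic movement time modeled as MT = a + b*sqrt(A), where A is the
    # movement amplitude; coefficients are recovered by a linear fit in sqrt(A).
    amplitude = np.array([20.0, 40.0, 80.0, 160.0, 320.0])   # mm (illustrative)
    mt_ms = np.array([138.0, 152.0, 171.0, 199.0, 238.0])    # measured times

    b, a = np.polyfit(np.sqrt(amplitude), mt_ms, 1)          # linear in sqrt(A)
    pred = a + b * np.sqrt(amplitude)
    ss_res = np.sum((mt_ms - pred) ** 2)
    ss_tot = np.sum((mt_ms - mt_ms.mean()) ** 2)
    print(f"MT = {a:.1f} + {b:.2f} * sqrt(A),  R^2 = {1 - ss_res/ss_tot:.3f}")
    ```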

  7. Independent Verification Final Summary Report for the David Witherspoon, Inc. 1630 Site Knoxville, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P.C. Weaver

    2009-04-29

    The primary objective of the independent verification was to determine if BJC performed the appropriate actions to meet the specified "hot spot" cleanup criterion of 500 picocuries per gram (pCi/g) uranium-238 (U-238) in surface soil. Specific tasks performed by the independent verification team (IVT) to satisfy this objective included: 1) performing radiological walkover surveys, and 2) collecting soil samples for independent analyses. The independent verification (IV) efforts were designed to evaluate radioactive contaminants (specifically U-238) in the exposed surfaces below one foot of the original site grade, given that the top one-foot layer of soil on the site was removed in its entirety.

  8. TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM) development

    NASA Technical Reports Server (NTRS)

    Shimamoto, Mike S.

    1993-01-01

    The development of an anthropomorphic, undersea manipulator system, the TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM) is described. The TOPS system's design philosophy, which results from NRaD's experience in undersea vehicles and manipulator systems development and operations, is presented. The TOPS design approach, task teams, manipulator, and vision system development and results, conclusions, and recommendations are presented.

  9. CTEF 2.0 - Assessment and Improvement of Command Team Effectiveness: Verification of Model and Instrument (CTEF 2.0 - Diagnostic et Amelioration de l’Efficacite d’un Team de Commandement: Verification du Modele et de l’Instrument)

    DTIC Science & Technology

    2010-09-01

    what level of detail is needed to build their teams, and they can add more detailed items from the model in order to tap deeper into the performance of...This Technical Report documents the findings of a project on 'Command Team Effectiveness' by Task Group 127 for the RTO Human Factors and Medicine Panel (RTG HFM-127).

  10. The Effect of Job Performance Aids on Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fosshage, Erik

    Job performance aids (JPAs) have been studied for many decades in a variety of disciplines and for many different types of tasks, yet this is the first known research experiment using JPAs in a quality assurance (QA) context. The objective of this thesis was to assess whether a JPA has an effect on the performance of a QA observer performing the concurrent dual verification technique for a basic assembly task. The JPA used in this study was a simple checklist, and the design borrows heavily from prior research on task analysis and other human factors principles. The assembly task and QA construct of concurrent dual verification are consistent with those of a high-consequence manufacturing environment. Results showed that the JPA had only a limited effect on QA performance in the context of this experiment. However, there were three important and unexpected findings that may draw interest from a variety of practitioners. First, a novel testing methodology sensitive enough to measure the effects of a JPA on performance was created. Second, the discovery that there are different probabilities of detection for different types of error in a QA context may be the most far-reaching result. Third, these results highlight the limitations of concurrent dual verification as a control against defects. It is hoped that both the methodology and results of this study are an effective baseline from which to launch future research activities.

  11. Energetic arousal and language: predictions from the computational theory of quantifiers processing.

    PubMed

    Zajenkowski, Marcin

    2013-10-01

    The author examines the relationship between energetic arousal (EA) and the processing of sentences containing natural-language quantifiers. Previous studies and theories have shown that energy may differentially affect various cognitive functions. Recent investigations devoted to quantifiers strongly support the theory that various types of quantifiers involve different cognitive functions in the sentence-picture verification task. In the present study, 201 students were presented with a sentence-picture verification task consisting of simple propositions containing a quantifier that referred to the color of a car on display. Color pictures of cars accompanied the propositions. In addition, the level of participants' EA was measured before and after the verification task. It was found that EA and performance on proportional quantifiers (e.g., "More than half of the cars are red") are in an inverted U-shaped relationship. This result may be explained by the fact that proportional sentences engage working memory to a high degree, and previous models of EA-cognition associations have been based on the assumption that tasks that require parallel attentional and memory processes are best performed when energy is moderate. The research described in the present article has several applications, as it shows the optimal human conditions for verbal comprehension. For instance, it may be important in workplace design to control the level of arousal experienced by office staff when work is mostly related to the processing of complex texts. Energy level may be influenced by many factors, such as noise, time of day, or thermal conditions.

  12. Quantum money with classical verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavinsky, Dmitry

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries; this property is not directly related to the possibility of classical verification, but nevertheless none of the earlier quantum money constructions is known to possess it.

  13. Quantum money with classical verification

    NASA Astrophysics Data System (ADS)

    Gavinsky, Dmitry

    2014-12-01

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries; this property is not directly related to the possibility of classical verification, but nevertheless none of the earlier quantum money constructions is known to possess it.

  14. Molecular Verification of Cryptops hortensis (Scolopendromorpha: Cryptopidae) in the Nearctic Region

    DTIC Science & Technology

    2018-01-29

    Journal article, dates covered March-April 2016. [Report documentation page; recoverable fields:] Title: Molecular Verification of Cryptops hortensis (Scolopendromorpha: Cryptopidae) in the Nearctic Region. Performing organization: USAF School of Aerospace Medicine, Public Health and Preventive Medicine Dept/PHR, 2510 Fifth St., Bldg. 840, Wright-Patterson AFB, OH 45433-7913.

  15. Using Small-Step Refinement for Algorithm Verification in Computer Science Education

    ERIC Educational Resources Information Center

    Simic, Danijela

    2015-01-01

    Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
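
    Stepwise refinement is easiest to see on a toy example. The sketch below (my illustration, not taken from the article) refines an abstract specification of a summation into a loop, with the simple statements connecting successive specifications expressed as assertions:

        # Step 0: abstract specification (closed form) of summing 0..n-1.
        def spec_sum(n: int) -> int:
            return n * (n - 1) // 2

        # Step 1: refinement into a loop; the connecting statement is the
        # loop invariant "after processing 0..i-1, total == spec_sum(i)".
        def impl_sum(n: int) -> int:
            total = 0
            for i in range(n):
                assert total == spec_sum(i), "invariant links spec and code"
                total += i
            assert total == spec_sum(n), "postcondition: loop meets the spec"
            return total

        # Verifying the refined program reduces to proving the two simple
        # assertions rather than reasoning about the whole loop at once.
        print(impl_sum(10))  # 45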

  16. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
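
    The kind of adequacy check described can be pictured with a toy machine model and a display mapping. In this hypothetical sketch (plain Python, not the tool's actual input language), the interface is flagged as inadequate if two machine states that look identical to the user respond differently to the same event:

        # Machine model: (state, event) -> next state (toy espresso machine).
        machine = {
            ("idle", "brew"): "heating",
            ("heating", "ready"): "brewing",
            ("brewing", "done"): "idle",
        }
        # Interface model: what the display shows in each machine state.
        iface = {"idle": "OFF", "heating": "BUSY", "brewing": "BUSY"}

        def adequate(machine, iface) -> bool:
            events = {e for (_, e) in machine}
            for s in iface:
                for t in iface:
                    if iface[s] != iface[t]:
                        continue  # user can already tell s and t apart
                    for e in events:
                        ns, nt = machine.get((s, e)), machine.get((t, e))
                        # Same display and same event, but outcomes differ
                        # from the user's point of view: mode confusion.
                        if (ns is None) != (nt is None):
                            return False
                        if ns and nt and iface[ns] != iface[nt]:
                            return False
            return True

        print(adequate(machine, iface))  # False: the two BUSY states diverge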

  17. Attention and Implicit Memory in the Category-Verification and Lexical Decision Tasks

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Peterson, Daniel

    2008-01-01

    Prior research on implicit memory appeared to support 3 generalizations: Conceptual tests are affected by divided attention, perceptual tasks are affected by certain divided-attention manipulations, and all types of priming are affected by selective attention. These generalizations are challenged in experiments using the implicit tests of category…

  18. Hyperproperties

    DTIC Science & Technology

    2016-01-14

    hyperproperty and a liveness hyperproperty. A verification technique for safety hyperproperties is given and is shown to generalize prior techniques for...liveness properties are affiliated with specific verification methods. An analogous theory for security policies would be appealing. The fact that security...verified by using invariance arguments. Our verification methodology generalizes prior work on using invariance arguments to verify information-flow
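
    For context, in Clarkson and Schneider's framework a trace property is a set of traces, while a hyperproperty is a set of trace properties; the decomposition mentioned in the fragment above parallels the classical safety-liveness intersection theorem for trace properties. Restated in my notation:

        \[
          P \subseteq \Psi_{\mathrm{inf}} \quad \text{(trace property)}, \qquad
          \mathbf{H} \subseteq \mathcal{P}(\Psi_{\mathrm{inf}}) \quad \text{(hyperproperty)},
        \]
        \[
          \text{every } \mathbf{H} \text{ decomposes as } \mathbf{H} = \mathbf{S} \cap \mathbf{L}
        \]

    for some safety hyperproperty S and liveness hyperproperty L.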

  19. 41 CFR 302-17.10 - Claims for payment and supporting documentation and verification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... supporting documentation and verification. 302-17.10 Section 302-17.10 Public Contracts and Property... INCOME TAX (RIT) ALLOWANCE § 302-17.10 Claims for payment and supporting documentation and verification..., net earnings (or loss) from self-employment income shown on attached Schedule SE (Form 1040): Form(s) W...

  20. Simulation verification techniques study. Task report 4: Simulation module performance parameters and performance standards

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Shuttle simulation software modules in the environment, crew station, vehicle configuration and vehicle dynamics categories are discussed. For each software module covered, a description of the module functions and operational modes, its interfaces with other modules, its stored data, inputs, performance parameters and critical performance parameters is given. Reference data sources which provide standards of performance are identified for each module. Performance verification methods are also discussed briefly.

  1. Expert system verification and validation study. Phase 2: Requirements Identification. Delivery 2: Current requirements applicability

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The second phase of a task is described which has the ultimate purpose of ensuring that adequate Expert Systems (ESs) Verification and Validation (V and V) tools and techniques are available for Space Station Freedom Program Knowledge Based Systems development. The purpose of this phase is to recommend modifications to current software V and V requirements which will extend the applicability of the requirements to NASA ESs.

  2. 6th Annual CMMI Technology Conference and User Group

    DTIC Science & Technology

    2006-11-17

    Operationally Oriented; Customer Focused Proven Approach – Level of Detail Beginner Decision Table (DT) is a tabular representation with tailoring options to...written to reflect the experience of the author Software Engineering led the process charge in the ’80s – Used Flowcharts – CASE tools – “data...Postponed PCR. Verification Steps • EPG configuration audits • EPG configuration status reports Flowcharts and Entry, Task, Verification and eXit

  3. Component Verification and Certification in NASA Missions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Penix, John; Norvig, Peter (Technical Monitor)

    2001-01-01

    Software development for NASA missions is a particularly challenging task. Missions are extremely ambitious scientifically, have very strict time frames, and must be accomplished with a maximum degree of reliability. Verification technologies must therefore be pushed far beyond their current capabilities. Moreover, reuse and adaptation of software architectures and components must be incorporated in software development within and across missions. This paper discusses NASA applications that we are currently investigating from these perspectives.

  4. Application of computer vision to automatic prescription verification in pharmaceutical mail order

    NASA Astrophysics Data System (ADS)

    Alouani, Ali T.

    2005-05-01

    In large-volume pharmaceutical mail order, before shipping out prescriptions, licensed pharmacists ensure that the drug in the bottle matches the information provided in the patient prescription. Typically, the pharmacist has about 2 sec to complete the verification process for one prescription. Performing about 1800 prescription verifications per hour is tedious and can generate human errors as a result of visual and brain fatigue. Available automatic drug verification systems are limited to a single pill at a time. This is not suitable for large-volume pharmaceutical mail order, where a prescription can have as many as 60 pills and where thousands of prescriptions are filled every day. In an attempt to reduce human fatigue and cost, and to limit human error, the automatic prescription verification system (APVS) was invented to meet the needs of large-scale pharmaceutical mail order. This paper deals with the design and implementation of the first prototype online automatic prescription verification machine to perform the same task currently done by a pharmacist. The emphasis here is on the visual aspects of the machine. The system has been successfully tested on 43,000 prescriptions.

  5. Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs

    PubMed Central

    Bass, Ellen J.

    2011-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930

  6. Using SysML for verification and validation planning on the Large Synoptic Survey Telescope (LSST)

    NASA Astrophysics Data System (ADS)

    Selvy, Brian M.; Claver, Charles; Angeli, George

    2014-08-01

    This paper provides an overview of the tool, language, and methodology used for Verification and Validation Planning on the Large Synoptic Survey Telescope (LSST) Project. LSST has implemented a Model Based Systems Engineering (MBSE) approach as a means of defining all systems engineering planning and definition activities that have historically been captured in paper documents. Specifically, LSST has adopted the Systems Modeling Language (SysML) standard and is utilizing a software tool called Enterprise Architect, developed by Sparx Systems. Much of the historical use of SysML has focused on the early phases of the project life cycle. Our approach is to extend the advantages of MBSE into later stages of the construction project. This paper details the methodology employed to use the tool to document the verification planning phases, including the extension of the language to accommodate the project's needs. The process includes defining the Verification Plan for each requirement, which in turn consists of a Verification Requirement, Success Criteria, Verification Method(s), Verification Level, and Verification Owner. Each Verification Method for each Requirement is defined as a Verification Activity and mapped into Verification Events, which are collections of activities that can be executed concurrently in an efficient and complementary way. Verification Event dependency and sequences are modeled using Activity Diagrams. The methodology employed also ties in to the Project Management Control System (PMCS), which utilizes Primavera P6 software, mapping each Verification Activity as a step in a planned activity. This approach leads to full traceability from initial Requirement to scheduled, costed, and resource loaded PMCS task-based activities, ensuring all requirements will be verified.
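
    The per-requirement Verification Plan structure described above maps naturally onto a small data model. The sketch below is a hedged illustration (class and field names are mine, not LSST's actual SysML profile):

        from dataclasses import dataclass, field
        from enum import Enum
        from typing import List

        class Method(Enum):
            TEST = "Test"
            ANALYSIS = "Analysis"
            INSPECTION = "Inspection"
            DEMONSTRATION = "Demonstration"

        @dataclass
        class VerificationPlan:            # one plan per requirement
            requirement_id: str
            verification_requirement: str  # what must be shown
            success_criteria: str
            methods: List[Method]
            level: str                     # e.g. subsystem or system
            owner: str

        @dataclass
        class VerificationEvent:  # activities executed together, efficiently
            name: str
            activities: List[str] = field(default_factory=list)

        plan = VerificationPlan("REQ-0001", "Verify image quality budget",
                                "Measured value within budget", [Method.TEST],
                                "System", "Systems Engineering")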

  7. Expert system verification and validation guidelines/workshop task. Deliverable no. 1: ES V/V guidelines

    NASA Technical Reports Server (NTRS)

    French, Scott W.

    1991-01-01

    The goals are to show that verifying and validating a software system is a required part of software development and has a direct impact on the software's design and structure. Workshop tasks are given in the areas of statistics, integration/system test, unit and architectural testing, and a traffic controller problem.

  8. Generalizing Atoms in Constraint Logic

    NASA Technical Reports Server (NTRS)

    Page, C. David, Jr.; Frisch, Alan M.

    1991-01-01

    This paper studies the generalization of atomic formulas, or atoms, that are augmented with constraints on or among their terms. The atoms may also be viewed as definite clauses whose antecedents express the constraints. Atoms are generalized relative to a body of background information about the constraints. This paper first examines generalization of atoms with only monadic constraints. The paper develops an algorithm for the generalization task and discusses algorithm complexity. It then extends the algorithm to apply to atoms with constraints of arbitrary arity. The paper also presents semantic properties of the generalizations computed by the algorithms, making the algorithms applicable to such problems as abduction, induction, and knowledge base verification. The paper emphasizes the application to induction and presents a pac-learning result for constrained atoms.
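
    For orientation, the classical special case of this task with no constraints is Plotkin's least general generalization (lgg) of two atoms; the paper's algorithms additionally consult the background theory about the constraints. A minimal sketch of the unconstrained case (my illustration):

        # lgg of two atoms; strings are constants/functors, tuples are
        # compound terms ('f', arg1, ..., argN).
        def lgg(t1, t2, subst):
            if t1 == t2:
                return t1
            if (isinstance(t1, tuple) and isinstance(t2, tuple)
                    and t1[0] == t2[0] and len(t1) == len(t2)):
                return (t1[0],) + tuple(lgg(a, b, subst)
                                        for a, b in zip(t1[1:], t2[1:]))
            # Differing subterms become a shared variable; reusing the same
            # variable for the same pair of subterms is the key lgg property.
            if (t1, t2) not in subst:
                subst[(t1, t2)] = f"X{len(subst) + 1}"
            return subst[(t1, t2)]

        # lgg(p(a, f(a)), p(b, f(b))) = p(X1, f(X1))
        print(lgg(("p", "a", ("f", "a")), ("p", "b", ("f", "b")), {}))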

  9. Verification and validation of a Work Domain Analysis with turing machine task analysis.

    PubMed

    Rechard, J; Bignon, A; Berruet, P; Morineau, T

    2015-03-01

    While the use of Work Domain Analysis as a methodological framework in cognitive engineering is increasing rapidly, verification and validation of work domain models produced by this method are becoming a significant issue. In this article, we propose the use of a method based on Turing machine formalism, named "Turing Machine Task Analysis", to verify and validate work domain models. The application of this method to two work domain analyses, one of car driving, which is an "intentional" domain, and the other of a ship water system, which is a "causal" domain, showed the possibility of highlighting improvements needed by these models. More precisely, the step-by-step analysis of a degraded task scenario in each work domain model pointed out unsatisfactory aspects of the initial modelling, such as overspecification, underspecification, omission of work domain affordances, or unsuitable inclusion of objects in the work domain model. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  10. Personal Verification/Identification via Analysis of the Peripheral ECG Leads: Influence of the Personal Health Status on the Accuracy

    PubMed Central

    Bortolan, Giovanni

    2015-01-01

    Traditional means for identity validation (PIN codes, passwords) and physiological and behavioral biometric characteristics (fingerprint, iris, and speech) are susceptible to hacker attacks and/or falsification. This paper presents a method for person verification/identification based on the correlation of present-to-previous limb ECG leads I (rI) and II (rII), the first principal ECG component calculated from them (rPCA), and linear and nonlinear combinations of rI, rII, and rPCA. For the verification task, the one-to-one scenario is applied and threshold values for rI, rII, and rPCA and their combinations are derived. The identification task supposes a one-to-many scenario, and the tested subject is identified according to the maximal correlation with a previously recorded ECG in a database. The population-based ECG-ILSA database of 540 patients (147 healthy subjects, 175 patients with cardiac diseases, and 218 with hypertension) has been considered. In addition, a common reference PTB dataset (14 healthy individuals) with a short time interval between the two acquisitions has been taken into account. The results on the ECG-ILSA database were satisfactory with healthy people, and there was not a significant decrease for nonhealthy patients, demonstrating the robustness of the proposed method. With the PTB database, the method provides an identification accuracy of 92.9% and a verification sensitivity and specificity of 100% and 89.9%. PMID:26568954

  11. Personal Verification/Identification via Analysis of the Peripheral ECG Leads: Influence of the Personal Health Status on the Accuracy.

    PubMed

    Jekova, Irena; Bortolan, Giovanni

    2015-01-01

    Traditional means for identity validation (PIN codes, passwords) and physiological and behavioral biometric characteristics (fingerprint, iris, and speech) are susceptible to hacker attacks and/or falsification. This paper presents a method for person verification/identification based on the correlation of present-to-previous limb ECG leads I (rI) and II (rII), the first principal ECG component calculated from them (rPCA), and linear and nonlinear combinations of rI, rII, and rPCA. For the verification task, the one-to-one scenario is applied and threshold values for rI, rII, and rPCA and their combinations are derived. The identification task supposes a one-to-many scenario, and the tested subject is identified according to the maximal correlation with a previously recorded ECG in a database. The population-based ECG-ILSA database of 540 patients (147 healthy subjects, 175 patients with cardiac diseases, and 218 with hypertension) has been considered. In addition, a common reference PTB dataset (14 healthy individuals) with a short time interval between the two acquisitions has been taken into account. The results on the ECG-ILSA database were satisfactory with healthy people, and there was not a significant decrease for nonhealthy patients, demonstrating the robustness of the proposed method. With the PTB database, the method provides an identification accuracy of 92.9% and a verification sensitivity and specificity of 100% and 89.9%.
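
    A minimal sketch of the one-to-one verification rule described above (the threshold value and function names are illustrative; the study derives its thresholds from the data):

        import numpy as np

        def lead_correlation(present: np.ndarray, previous: np.ndarray) -> float:
            # Pearson correlation between present and previous recordings.
            return float(np.corrcoef(present, previous)[0, 1])

        def verify(pres_I, prev_I, pres_II, prev_II, threshold=0.95):
            # One-to-one scenario: accept the claimed identity only if the
            # limb-lead correlations exceed the threshold.
            return (lead_correlation(pres_I, prev_I) > threshold
                    and lead_correlation(pres_II, prev_II) > threshold)

        # Identification (one-to-many) would instead select the database
        # record with the maximal correlation to the tested subject.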

  12. Definition of ground test for verification of large space structure control

    NASA Technical Reports Server (NTRS)

    Glaese, John R.

    1994-01-01

    Under this contract, the Large Space Structure Ground Test Verification (LSSGTV) Facility at the George C. Marshall Space Flight Center (MSFC) was developed. Planning in coordination with NASA was finalized and implemented. The contract was modified and extended with several increments of funding to procure additional hardware and to continue support for the LSSGTV facility. Additional tasks were defined for the performance of studies in the dynamics, control and simulation of tethered satellites. When the LSSGTV facility development task was completed, support and enhancement activities were funded through a new competitive contract won by LCD. All work related to LSSGTV performed under NAS8-35835 has been completed and documented. No further discussion of these activities will appear in this report. This report summarizes the tether dynamics and control studies performed.

  13. Experimental verification of multipartite entanglement in quantum networks

    PubMed Central

    McCutcheon, W.; Pappa, A.; Bell, B. A.; McMillan, A.; Chailloux, A.; Lawson, T.; Mafu, M.; Markham, D.; Diamanti, E.; Kerenidis, I.; Rarity, J. G.; Tame, M. S.

    2016-01-01

    Multipartite entangled states are a fundamental resource for a wide range of quantum information processing tasks. In particular, in quantum networks, it is essential for the parties involved to be able to verify if entanglement is present before they carry out a given distributed task. Here we design and experimentally demonstrate a protocol that allows any party in a network to check if a source is distributing a genuinely multipartite entangled state, even in the presence of untrusted parties. The protocol remains secure against dishonest behaviour of the source and other parties, including the use of system imperfections to their advantage. We demonstrate the verification protocol in a three- and four-party setting using polarization-entangled photons, highlighting its potential for realistic photonic quantum communication and networking applications. PMID:27827361

  14. Optimized Temporal Monitors for SystemC

    NASA Technical Reports Server (NTRS)

    Tabakov, Deian; Rozier, Kristin Y.; Vardi, Moshe Y.

    2012-01-01

    SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model then can be verified against the properties using runtime or formal verification techniques. In this paper we focus on automated generation of runtime monitors from temporal properties. Our focus is on minimizing runtime overhead, rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
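
    A generated runtime monitor is essentially a small automaton stepped by the simulation. As a language-neutral illustration (Python rather than the generated SystemC/C++ code; the property and names are mine), here is a monitor for "every request is eventually acknowledged":

        class ReqAckMonitor:
            def __init__(self):
                self.pending = False  # True while a req awaits its ack

            def step(self, event: str) -> None:
                if event == "req":
                    self.pending = True
                elif event == "ack":
                    self.pending = False

            def at_end(self) -> bool:
                # End-of-trace check: no request may be left hanging.
                return not self.pending

        m = ReqAckMonitor()
        for e in ["req", "ack", "req"]:
            m.step(e)
        print(m.at_end())  # False: the final req was never acknowledged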

  15. Authoring and verification of clinical guidelines: a model driven approach.

    PubMed

    Pérez, Beatriz; Porres, Ivan

    2010-08-01

    The goal of this research is to provide a framework to enable authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, combined with (2) Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications and the temporal-logic statements to be checked against these specifications, making the verification process faster and more cost-effective. In particular, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically process them to generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements and verifies whether the guideline fulfils such properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, starting from the UML statechart representing a guideline, allows the verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines, resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined based on a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process. Copyright 2010 Elsevier Inc. All rights reserved.

  16. Design for Verification: Enabling Verification of High Dependability Software-Intensive Systems

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter C.; Penix, John; Markosian, Lawrence Z.; Koga, Dennis (Technical Monitor)

    2003-01-01

    Strategies to achieve confidence that high-dependability applications are correctly implemented include testing and automated verification. Testing deals mainly with a limited number of expected execution paths. Verification usually attempts to deal with a larger number of possible execution paths. While the impact of architecture design on testing is well known, its impact on most verification methods is not as well understood. The Design for Verification approach considers verification from the application development perspective, in which system architecture is designed explicitly according to the application's key properties. The D4V-hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the constraints on verification tools, such as the production of hand-crafted models and the limits on dynamic and static analysis caused by state space explosion.

  17. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V comprises two separate tasks: verification, a mathematical task that assesses whether the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus on verification, which in turn comprises code verification, which assesses that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
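
    The solution-verification step based on Richardson extrapolation can be sketched as follows: given a quantity computed on three systematically refined grids, the observed order of accuracy and an extrapolated value follow from the grid-to-grid differences (the numbers below are illustrative, not results from GBS):

        import math

        # Quantity of interest computed on three grids with constant
        # refinement ratio r (illustrative values).
        f_coarse, f_medium, f_fine = 1.112, 1.028, 1.007
        r = 2.0

        # Observed order of accuracy from the grid-to-grid differences.
        p = math.log(abs(f_coarse - f_medium)
                     / abs(f_medium - f_fine)) / math.log(r)

        # Richardson-extrapolated ("grid-converged") estimate.
        f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)
        print(f"observed order p = {p:.2f}")        # ~2 for these values
        print(f"extrapolated value = {f_exact:.4f}")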

  18. Cryogenic Fluid Management Experiment (CFME) trunnion verification testing

    NASA Technical Reports Server (NTRS)

    Bailey, W. J.; Fester, D. A.

    1983-01-01

    The Cryogenic Fluid Management Experiment (CFME) was designed to characterize subcritical liquid hydrogen storage and expulsion in the low-g space environment. The CFME has now become the storage and supply tank for the Cryogenic Fluid Management Facility, which includes transfer line and receiver tanks as well. The liquid hydrogen storage and supply vessel is supported within a vacuum jacket by two fiberglass/epoxy composite trunnions, which were analyzed and designed for this application. Analysis using the limited available data indicated the trunnion was the most fatigue-critical component in the storage vessel. Before committing the complete storage tank assembly to environmental testing, an experimental assessment was performed to verify the capability of the trunnion design to withstand expected vibration and loading conditions. Three tasks were conducted to evaluate trunnion integrity. The first determined the fatigue properties of the trunnion composite laminate materials. Tests at both ambient and liquid hydrogen temperatures showed composite material fatigue properties far in excess of those expected. Next, an assessment of the adequacy of the trunnion designs was performed, based on the tested material properties.

  19. Results from an Independent View on The Validation of Safety-Critical Space Systems

    NASA Astrophysics Data System (ADS)

    Silva, N.; Lopes, R.; Esper, A.; Barbosa, R.

    2013-08-01

    Independent verification and validation (IV&V) has been a key process for decades and is considered in several international standards. One of the activities described in the “ESA ISVV Guide” is independent test verification (stated as Integration/Unit Test Procedures and Test Data Verification). This activity is commonly overlooked, since customers do not really see the added value of thoroughly checking the validation team's work (it can be seen as testing the testers' work). This article presents the consolidated results of a large set of independent test verification activities, including the main difficulties, the results obtained, and the advantages/disadvantages of these activities for industry. This study will support customers in opting in or out of this task in future IV&V contracts, since we provide concrete results from real case studies in the space embedded systems domain.

  20. Engineering of the LISA Pathfinder mission—making the experiment a practical reality

    NASA Astrophysics Data System (ADS)

    Warren, Carl; Dunbar, Neil; Backler, Mike

    2009-05-01

    LISA Pathfinder represents a unique challenge in the development of scientific spacecraft—not only is the LISA Test Package (LTP) payload a complex integrated development, placing stringent requirements on its developers and the spacecraft, but the payload also acts as the core sensor and actuator for the spacecraft, making the tasks of control design, software development and system verification unusually difficult. The micro-propulsion system which provides the remaining actuation also presents substantial development and verification challenges. As the mission approaches the system critical design review, flight hardware is completing verification and the process of verification using software and hardware simulators and test benches is underway. Preparation for operations has started, but critical milestones for LTP and field effect electric propulsion (FEEP) lie ahead. This paper summarizes the status of the present development and outlines the key challenges that must be overcome on the way to launch.

  1. Non-Lethal Weapons Effectiveness Assessment Development and Verification Study (Etude d’evaluation, de developpement et de verification de l’efficacite des armes non letales)

    DTIC Science & Technology

    2009-10-01

    will guarantee a solid base for the future. The content of this publication has been reproduced directly from material supplied by RTO or the...intensity threat involving a local population wanting to break into the camp to steal material and food supplies; and • A higher intensity threat...combatant evacuation operations, distribute emergency supplies, and evacuate/relocate refugees and displaced persons. Specified NLW-relevant tasks are

  2. Release Fixed Heel Point (FHP) Accommodation Model Verification and Validation (V and V) Plan - Rev A

    DTIC Science & Technology

    2017-01-23

    Report documentation page metadata (task number and work unit number: N/A; performing organization: RDECOM-TARDEC-ACT). Keywords: occupant work space, central 90% of the Soldier population, encumbrance, posture and position, verification and validation, computer-aided design...factors engineers could benefit by working with vehicle designers to perform virtual assessments in CAD when there is not enough time and/or funding to

  3. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
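
    The simplest rule instantiating this style is the classical non-circular assume-guarantee rule, where the triple ⟨A⟩ M ⟨P⟩ states that M guarantees property P in any environment satisfying assumption A:

        \[
          \frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
                \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
               {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
        \]

    The learning algorithm's job in this framework is to construct the assumption A incrementally from counterexamples, so that neither component has to be checked against the other's full state space.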

  4. Control of embankment settlement field verification on PCPT prediction methods.

    DOT National Transportation Integrated Search

    2011-07-01

    Piezocone penetration tests (PCPT) have been widely used by geotechnical engineers for subsurface investigation and evaluation of different soil properties such as strength and deformation characteristics of the soil. This report focuses on the verif...

  5. Components of Processing Deficit Among Paranoid and Nonparanoid Schizophrenics

    ERIC Educational Resources Information Center

    Neufeld, Richard W. J.

    1977-01-01

    Paranoid and nonparanoid schizophrenics were compared to normals in their performance on a sentence verification task. Results were related to past evidence and hypotheses about central processing performance among schizophrenics. (Editor/RK)

  6. Effects of target typicality on categorical search.

    PubMed

    Maxfield, Justin T; Stalder, Westri D; Zelinsky, Gregory J

    2014-10-01

    The role of target typicality in a categorical visual search task was investigated by cueing observers with a target name, followed by a five-item target present/absent search array in which the target images were rated in a pretest to be high, medium, or low in typicality with respect to the basic-level target cue. Contrary to previous work, we found that search guidance was better for high-typicality targets compared to low-typicality targets, as measured by both the proportion of immediate target fixations and the time to fixate the target. Consistent with previous work, we also found an effect of typicality on target verification times, the time between target fixation and the search judgment; as target typicality decreased, verification times increased. To model these typicality effects, we trained Support Vector Machine (SVM) classifiers on the target categories, and tested these on the corresponding specific targets used in the search task. This analysis revealed significant differences in classifier confidence between the high-, medium-, and low-typicality groups, paralleling the behavioral results. Collectively, these findings suggest that target typicality broadly affects both search guidance and verification, and that differences in typicality can be predicted by distance from an SVM classification boundary. © 2014 ARVO.
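
    A hedged sketch of the classifier-confidence analysis (synthetic features and class labels are mine; the study trained SVM classifiers on visual features of the target categories):

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        # Rows = exemplar images, columns = visual features (synthetic).
        X_category = rng.normal(0.0, 1.0, (100, 20))
        X_other = rng.normal(2.0, 1.0, (100, 20))
        X = np.vstack([X_category, X_other])
        y = np.array([1] * 100 + [0] * 100)

        clf = LinearSVC(C=1.0).fit(X, y)

        # Signed distance from the decision boundary serves as "classifier
        # confidence"; higher values correspond to more typical members.
        targets = rng.normal(0.0, 1.0, (5, 20))
        print(clf.decision_function(targets))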

  7. Model Checking Satellite Operational Procedures

    NASA Astrophysics Data System (ADS)

    Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri

    2011-08-01

    We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a complex system such as a satellite is a hard task. We overcome this obstruction by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. To assess the feasibility of our approach, we present experimental results on a simple meaningful scenario. Our results show that we can save up to 90% of verification time.

  8. Design and Verification of Critical Pressurised Windows for Manned Spaceflight

    NASA Astrophysics Data System (ADS)

    Lamoure, Richard; Busto, Lara; Novo, Francisco; Sinnema, Gerben; Leal, Mendes M.

    2014-06-01

    The Window Design for Manned Spaceflight (WDMS) project was tasked with establishing the state of the art and exploring possible improvements to the current structural integrity verification and fracture control methodologies for manned spacecraft windows. A critical review of the state of the art in spacecraft window design, materials and verification practice was conducted. Shortcomings of the methodology in terms of analysis, inspection and testing were identified. Schemes for improving verification practices and reducing conservatism whilst maintaining the required safety levels were then proposed. An experimental materials characterisation programme was defined and carried out with the support of the 'Glass and Façade Technology Research Group' at the University of Cambridge. Results of the sample testing campaign were analysed, post-processed and subsequently applied to the design of a breadboard window demonstrator. Two fused silica glass window panes were procured and subjected to dedicated analyses, inspection and testing comprising both qualification and acceptance programmes specifically tailored to the objectives of the activity. Finally, the main outcomes have been compiled into a Structural Verification Guide for Pressurised Windows in manned spacecraft, incorporating best practices and lessons learned throughout this project.

  9. Verification of a Viscous Computational Aeroacoustics Code using External Verification Analysis

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Hixon, Ray

    2015-01-01

    The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.

  10. Verification of a Viscous Computational Aeroacoustics Code Using External Verification Analysis

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Hixon, Ray

    2015-01-01

    The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.

  11. Precision pointing and control of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Bantell, M. H., Jr.

    1987-01-01

    The problem and long term objectives for the precision pointing and control of flexible spacecraft are given. The four basic objectives are stated in terms of two principle tasks. Under Task 1, robust low order controllers, improved structural modeling methods for control applications and identification methods for structural dynamics are being developed. Under Task 2, a lab test experiment for verification of control laws and system identification algorithms is being developed. For Task 1, work has focused on robust low order controller design and some initial considerations for structural modeling in control applications. For Task 2, work has focused on experiment design and fabrication, along with sensor selection and initial digital controller implementation. Conclusions are given.

  12. Verification hybrid control of a wheeled mobile robot and manipulator

    NASA Astrophysics Data System (ADS)

    Muszynska, Magdalena; Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz

    2016-04-01

    In this article, innovative approaches to the tracking control of wheeled mobile robots and a manipulator are presented. The concepts include the application of neuro-fuzzy systems to compensate for the controlled systems' nonlinearities in the tracking control task. The proposed control algorithms work online, contain structures that adapt to the changing working conditions of the controlled systems, and do not require preliminary learning. The algorithms were verified on real objects: a Scorbot-ER 4pc robotic manipulator and a Pioneer 2DX mobile robot.

  13. Characterization of exposure and dose of man made vitreous fiber in experimental studies.

    PubMed Central

    Hamilton, R D; Miiller, W C; Christensen, D R; Anderson, R; Hesterberg, T W

    1994-01-01

    The use of fibrous test materials in in vivo experiments introduces a number of significant problems not associated with nonfibrous particulates. The key to all aspects of the experiment is the accurate characterization of the test material in terms of fiber length, diameter, particulate content, and chemistry. All data related to fiber properties must be collected in a statistically sound manner to eliminate potential bias. Procedures similar to those outlined by the National Institute of Occupational Safety and Health (NIOSH) or the World Health Organization (WHO) must be the basis of any fiber characterization. The test material to which the animal is exposed must be processed to maximize the amount of respirable fiber and to minimize particulate content. The complex relationship among the characteristics of the test material, the properties of the delivery system, and the actual dose that reaches the target tissue in the lung makes verification of dose essential. In the case of man-made vitreous fibers (MMVF), dose verification through recovery of fiber from exposed animals is a complex task. The potential for high fiber solubility makes many of the conventional techniques for tissue preservation and digestion inappropriate. Processes based on the minimum use of aggressive chemicals, such as cold storage and low temperature ashing, are potentially useful for a wide range of inorganic fibers. Any processes used to assess fiber exposure and dose must be carefully validated to establish that the chemical and physical characteristics of the fibers have not been changed and that the dose to the target tissue is completely and accurately described. PMID:7882912

  14. Travtek Evaluation Task C3: Camera Car Study

    DOT National Transportation Integrated Search

    1998-11-01

    A "biometric" technology is an automatic method for the identification, or identity verification, of an individual based on physiological or behavioral characteristics. The primary objective of the study summarized in this tech brief was to make reco...

  15. A Single-Boundary Accumulator Model of Response Times in an Addition Verification Task

    PubMed Central

    Faulkenberry, Thomas J.

    2017-01-01

    Current theories of mathematical cognition offer competing accounts of the interplay between encoding and calculation in mental arithmetic. Additive models propose that manipulations of problem format do not interact with the cognitive processes used in calculation. Alternatively, interactive models suppose that format manipulations have a direct effect on calculation processes. In the present study, we tested these competing models by fitting participants' RT distributions in an arithmetic verification task with a single-boundary accumulator model (the shifted Wald distribution). We found that in addition to providing a more complete description of RT distributions, the accumulator model afforded a potentially more sensitive test of format effects. Specifically, we found that format affected drift rate, which implies that problem format has a direct impact on calculation processes. These data give further support for an interactive model of mental arithmetic. PMID:28769853
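
    The shifted Wald is an inverse-Gaussian (Wald) distribution displaced by a constant shift, often read as non-decision time, so off-the-shelf tools can simulate and fit it. A minimal sketch using scipy (parameter values are illustrative, not the study's estimates):

        import numpy as np
        from scipy import stats

        # Simulate response times from a shifted Wald: an inverse Gaussian
        # with a location offset (the shift).
        true_mu, true_shift, true_scale = 0.5, 0.3, 1.0
        rts = stats.invgauss.rvs(true_mu, loc=true_shift, scale=true_scale,
                                 size=2000, random_state=12345)

        # Maximum-likelihood fit recovers the three shifted-Wald parameters;
        # in accumulator terms, drift and boundary map onto mu and scale.
        mu_hat, shift_hat, scale_hat = stats.invgauss.fit(rts)
        print(f"mu={mu_hat:.3f}, shift={shift_hat:.3f}, scale={scale_hat:.3f}")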

  16. Microscopy as a statistical, Rényi-Ulam, half-lie game: a new heuristic search strategy to accelerate imaging.

    PubMed

    Drumm, Daniel W; Greentree, Andrew D

    2017-11-07

    Finding a fluorescent target in a biological environment is a common and pressing microscopy problem. This task is formally analogous to the canonical search problem. In ideal (noise-free, truthful) search problems, the well-known binary search is optimal. The case of half-lies, where one of two responses to a search query may be deceptive, introduces a richer, Rényi-Ulam problem and is particularly relevant to practical microscopy. We analyse microscopy in the contexts of Rényi-Ulam games and half-lies, developing a new family of heuristics. We show the cost of insisting on verification by positive result in search algorithms; for the zero-half-lie case bisectioning with verification incurs a 50% penalty in the average number of queries required. The optimal partitioning of search spaces directly following verification in the presence of random half-lies is determined. Trisectioning with verification is shown to be the most efficient heuristic of the family in a majority of cases.

  17. Comparative Cognitive Task Analyses of Experimental Science and Instructional Laboratory Courses

    NASA Astrophysics Data System (ADS)

    Wieman, Carl

    2015-09-01

    Undergraduate instructional labs in physics generate intense opinions. Their advocates are passionate as to their importance for teaching physics as an experimental activity and providing "hands-on" learning experiences, while their detractors (often but not entirely students) offer harsh criticisms that they are pointless, confusing and unsatisfying, and "cookbook." Here, both to help understand the reason for such discrepant views and to aid in the design of instructional lab courses, I compare the mental tasks or types of thinking ("cognitive task analysis") associated with a physicist doing tabletop experimental research with the cognitive tasks of students in an introductory physics instructional lab involving traditional verification/confirmation exercises.

  18. A service-oriented architecture for integrating the modeling and formal verification of genetic regulatory networks

    PubMed Central

    2009-01-01

    Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075

  19. Classical workflow nets and workflow nets with reset arcs: using Lyapunov stability for soundness verification

    NASA Astrophysics Data System (ADS)

    Clempner, Julio B.

    2017-01-01

    This paper presents a novel analytical method for soundness verification of workflow nets and reset workflow nets, using the well-known Lyapunov stability results for Petri nets. We also prove that the soundness property is decidable for workflow nets and reset workflow nets. In addition, we provide evidence of several outcomes related to properties such as boundedness, liveness, reversibility and blocking using stability. Our approach is validated theoretically and by a numerical example related to traffic signal-control synchronisation.

  20. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses the HR information based on pore-scale facial features. A new keypoint descriptor, pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces are under large variations in expression and pose.

  1. Expediting Combinatorial Data Set Analysis by Combining Human and Algorithmic Analysis.

    PubMed

    Stein, Helge Sören; Jiao, Sally; Ludwig, Alfred

    2017-01-09

    A challenge in combinatorial materials science remains the efficient analysis of X-ray diffraction (XRD) data and its correlation to functional properties. Rapid identification of phase regions and proper assignment of corresponding crystal structures are necessary to keep pace with the improved methods for synthesizing and characterizing materials libraries. Therefore, a new modular software package called htAx (high-throughput analysis of X-ray and functional properties data) is presented that couples human intelligence tasks used for "ground-truth" phase-region identification with subsequent unbiased verification by an algorithm to efficiently analyze which phases are present in a materials library. Identified phases and phase regions may then be correlated to functional properties in an expedited manner. To prove the functionality of htAx, two previously published XRD benchmark data sets of the materials systems Al-Cr-Fe-O and Ni-Ti-Cu are analyzed with htAx. The analysis of ∼1000 XRD patterns takes less than 1 day with htAx. The proposed method reliably identifies phase-region boundaries and robustly identifies multiphase structures. The method also addresses the problem of identifying regions with previously unpublished crystal structures using a special daisy ternary plot.

  2. Western Shallow Oil Zone, Elk Hills Field, Kern County, California:

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carey, K.B.

    1987-09-01

    The general Reservoir Study of the Western Shallow Oil Zone was prepared by Evans, Carey and Crozier as Task Assignment 009 with the United States Department of Energy. This study, Appendix II, addresses the first Wilhelm Sands and its subunits and pools. Basic pressure, production and assorted technical data were provided by the US Department of Energy staff at Elk Hills. These data were accepted as furnished, with no attempt made by Evans, Carey and Crozier at independent verification. This study has identified the petrophysical properties and the past productive performance of the reservoir. Primary reserves have been determined and general means of enhancing future recovery have been suggested. It is hoped that this volume can now additionally serve as a take-off point for exploitation engineers to develop specific programs toward that end.

  3. Software Development Technologies for Reactive, Real-Time, and Hybrid Systems: Summary of Research

    NASA Technical Reports Server (NTRS)

    Manna, Zohar

    1998-01-01

    This research is directed towards the implementation of a comprehensive deductive-algorithmic environment (toolkit) for the development and verification of high-assurance reactive systems, especially concurrent, real-time, and hybrid systems. For this, we have designed and implemented the STeP (Stanford Temporal Prover) verification system. Reactive systems have an ongoing interaction with their environment, and their computations are infinite sequences of states. A large number of systems can be seen as reactive systems, including hardware, concurrent programs, network protocols, and embedded systems. Temporal logic provides a convenient language for expressing properties of reactive systems. A temporal verification methodology provides procedures for proving that a given system satisfies a given temporal property. The research covered the necessary theoretical foundations as well as implementation and application issues.

  4. Air traffic surveillance and control using hybrid estimation and protocol-based conflict resolution

    NASA Astrophysics Data System (ADS)

    Hwang, Inseok

    The continued growth of air travel and recent advances in new technologies for navigation, surveillance, and communication have led to proposals by the Federal Aviation Administration (FAA) to provide reliable and efficient tools to aid Air Traffic Control (ATC) in performing their tasks. In this dissertation, we address four problems frequently encountered in air traffic surveillance and control: multiple target tracking and identity management, conflict detection, conflict resolution, and safety verification. We develop a set of algorithms and tools to aid ATC; these algorithms have the provable properties of safety, computational efficiency, and convergence. Firstly, we develop a multiple-maneuvering-target tracking and identity management algorithm which can keep track of maneuvering aircraft in noisy environments and of their identities. Secondly, we propose a hybrid probabilistic conflict detection algorithm between multiple aircraft which uses flight mode estimates as well as aircraft current state estimates. Our algorithm is based on hybrid models of aircraft, which incorporate both continuous dynamics and discrete mode switching. Thirdly, we develop an algorithm for multiple (greater than two) aircraft conflict avoidance that is based on a closed-form analytic solution and thus provides guarantees of safety. Finally, we consider the problem of safety verification of control laws for safety-critical systems, with application to air traffic control systems. We approach safety verification through reachability analysis, which is a computationally expensive problem. We develop an over-approximate method for reachable set computation using polytopic approximation methods and dynamic optimization. These algorithms may be used either in a fully autonomous way, or as supporting tools to increase controllers' situational awareness and to reduce their workload.

  5. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

    PubMed Central

    Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.

    2012-01-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
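
    As a standalone illustration of the four zero-order phenotypes named above (the paper generates these inside task-analytic models and verifies the composed system with a model checker), consider enumerating erroneous variants of a normative action sequence:

        # Normative sequence -> erroneous variants per Hollnagel's
        # zero-order phenotypes: omission, repetition, jump, intrusion.
        def phenotypes(seq, foreign_acts):
            variants = []
            for i in range(len(seq)):
                variants.append(("omission", seq[:i] + seq[i+1:]))
                variants.append(("repetition", seq[:i+1] + seq[i:]))
                for j in range(i + 2, len(seq)):
                    variants.append(("jump", seq[:i] + seq[j:]))
                for a in foreign_acts:  # acts from outside the task
                    variants.append(("intrusion", seq[:i] + [a] + seq[i:]))
            return variants

        # Toy sequence echoing the radiation-therapy case study.
        normative = ["select_dose", "confirm_dose", "start_beam"]
        for kind, v in phenotypes(normative, ["open_door"]):
            print(kind, v)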

  6. Using Pupil Diameter Changes for Measuring Mental Workload under Mental Processing

    NASA Astrophysics Data System (ADS)

    Batmaz, Ihsan; Ozturk, Mustafa

    In this study, we aimed to evaluate mental workload by a practical method based on measuring the pupil diameter changes that occur under mental processing. To determine the mental effort required for each task, video recordings of the subjects' eyes were taken while they performed the different tasks, and pupil diameters were measured from the recordings. A group of university students (one female, nine males) participated in the experiment. Additionally, the NASA-TLX questionnaire was administered for the same mental tasks. To verify the results obtained from the two indices, the correlation coefficient was calculated on a per-task basis. The results show a weak, negative correlation between the indices for each task except the third. Examination of the pupil diameter measurements also showed that the pupil dilates under mental workload while the related tasks are performed: for all tasks, pupil diameters during the response periods increased relative to the reference baseline period.
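
    The per-task verification step amounts to an ordinary correlation computation. A minimal sketch with illustrative numbers (the study's actual data are not reproduced):

        # Sketch: per-task Pearson correlation between a pupil-diameter
        # workload index and NASA-TLX scores; all values are made up.
        import numpy as np

        pupil_index = np.array([[0.21, 0.35, 0.18],   # rows: subjects
                                [0.25, 0.31, 0.22],   # columns: tasks
                                [0.19, 0.40, 0.15]])
        tlx_score   = np.array([[55.0, 70.0, 40.0],
                                [60.0, 65.0, 45.0],
                                [50.0, 75.0, 35.0]])

        for task in range(pupil_index.shape[1]):
            r = np.corrcoef(pupil_index[:, task], tlx_score[:, task])[0, 1]
            print(f"task {task + 1}: Pearson r = {r:+.2f}")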

  7. Optimal Verification of Entangled States with Local Measurements

    NASA Astrophysics Data System (ADS)

    Pallister, Sam; Linden, Noah; Montanaro, Ashley

    2018-04-01

    Consider the task of verifying that a given quantum device, designed to produce a particular entangled state, does indeed produce that state. One natural approach would be to characterize the output state by quantum state tomography, or alternatively, to perform some kind of Bell test, tailored to the state of interest. We show here that neither approach is optimal among local verification strategies for 2-qubit states. We find the optimal strategy in this case and show that quadratically fewer total measurements are needed to verify to within a given fidelity than in published results for quantum state tomography, Bell test, or fidelity estimation protocols. We also give efficient verification protocols for any stabilizer state. Additionally, we show that requiring that the strategy be constructed from local, nonadaptive, and noncollective measurements only incurs a constant-factor penalty over a strategy without these restrictions.
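
    The quadratic saving can be seen from the standard state-verification argument (a sketch of the general framework, not the paper's exact constants): if each single-copy test rejects every state of fidelity at most 1 - \epsilon with probability at least \nu\epsilon, then reaching confidence \delta after n passed tests requires

        \[
          \delta \;\ge\; (1 - \nu\epsilon)^{n}
          \quad\Longrightarrow\quad
          n \;\ge\; \frac{\ln \delta^{-1}}{\ln\!\left[(1 - \nu\epsilon)^{-1}\right]}
          \;\approx\; \frac{1}{\nu\epsilon}\,\ln\frac{1}{\delta},
        \]

    so the number of copies scales as 1/\epsilon, whereas direct fidelity estimation typically requires on the order of 1/\epsilon^2 measurements.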

  8. Expert system verification and validation survey, delivery 4

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.

  9. Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring

    NASA Technical Reports Server (NTRS)

    Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.

    2015-01-01

    Project presentation for the Game Changing Program Smart Book release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and adds an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. Objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept, ground-based damage detection and inspection system.

  10. Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System

    NASA Technical Reports Server (NTRS)

    Braman, Julia M. B.; Murray, Richard M.; Wagner, David A.

    2007-01-01

    Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.

  11. Logic Model Checking of Time-Periodic Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Florian, Mihai; Gamble, Ed; Holzmann, Gerard

    2012-01-01

    In this paper we report on the work we performed to extend the logic model checker SPIN with built-in support for the verification of periodic, real-time embedded software systems, as commonly used in aircraft, automobiles, and spacecraft. We first extended the SPIN verification algorithms to model priority based scheduling policies. Next, we added a library to support the modeling of periodic tasks. This library was used in a recent application of the SPIN model checker to verify the engine control software of an automobile, to study the feasibility of software triggers for unintended acceleration events.

  12. Task Force Report: Assessment of Nuclear Monitoring and Verification Technologies

    DTIC Science & Technology

    2014-01-01

    environment in which implemented ... Project demands on, and assess capabilities of, International Atomic Energy Agency in next 15-20 years with expected ... the Department of Energy (DOE) and the Intelligence Community (IC) to support future monitoring and verification of nuclear ... could be enabled by expansion of the role of the International Atomic Energy Agency (IAEA) for assuming responsibility for the

  13. Expert system verification and validation survey. Delivery 2: Survey results

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and industry applications. This is the first task of the series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.

  14. Expert system verification and validation survey. Delivery 5: Revised

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.

  15. Expert system verification and validation survey. Delivery 3: Recommendations

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.

  16. Visualization design and verification of Ada tasking using timing diagrams

    NASA Technical Reports Server (NTRS)

    Vidale, R. F.; Szulewski, P. A.; Weiss, J. B.

    1986-01-01

    The use of timing diagrams is recommended in the design and testing of multi-task Ada programs. By displaying the task states vs. time, timing diagrams can portray the simultaneous threads of data flow and control which characterize tasking programs. This description of the system's dynamic behavior from conception to testing is a necessary adjunct to other graphical techniques, such as structure charts, which essentially give a static view of the system. A series of steps is recommended which incorporates timing diagrams into the design process. Finally, a description is provided of a prototype Ada Execution Analyzer (AEA) which automates the production of timing diagrams from VAX/Ada debugger output.

  17. Proceedings of the NASA Workshop on Registration and Rectification

    NASA Technical Reports Server (NTRS)

    Bryant, N. A. (Editor)

    1982-01-01

    Issues associated with the registration and rectification of remotely sensed data are discussed. Near- and long-range applications research tasks and some medium-range technology augmentation research areas are recommended. Image sharpness, feature extraction, inter-image mapping, error analysis, and verification methods are addressed.

  18. Property-Based Monitoring of Analog and Mixed-Signal Systems

    NASA Astrophysics Data System (ADS)

    Havlicek, John; Little, Scott; Maler, Oded; Nickovic, Dejan

    In the recent past, there has been a steady growth of the market for consumer embedded devices such as cell phones, GPS and portable multimedia systems. In embedded systems, digital, analog and software components are combined on a single chip, resulting in increasingly complex designs that introduce richer functionality on smaller devices. As a consequence, the potential for inserting errors into a design becomes higher, yielding an increasing need for automated analog and mixed-signal validation tools. In the purely digital setting, formal verification based on properties expressed in industrial specification languages such as PSL and SVA is nowadays successfully integrated in the design flow. On the other hand, the validation of analog and mixed-signal systems still largely depends on simulation-based, ad hoc methods. In this tutorial, we consider some ingredients of the standard verification methodology that can be successfully exported from the digital to the analog and mixed-signal setting, in particular property-based monitoring techniques. Property-based monitoring is a lighter approach to formal verification, in which the system is seen as a "black box" that generates sets of traces whose correctness is checked against a property, that is, its high-level specification. Although incomplete, monitoring is effectively used to catch faults in systems, without guaranteeing their full correctness.
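
    A minimal sketch of what such a monitor does, assuming a sampled trace of (time, value) pairs and a simple bounded-settling property (industrial monitors use richer temporal languages such as STL, not reproduced here):

        # Sketch: check "after t_step, the signal settles to v_ref +/- band
        # within settle_time and stays there"; returns a counterexample or None.
        def monitor_settles(trace, t_step, v_ref, band, settle_time):
            for t, v in trace:          # trace: (time, value), time increasing
                if t >= t_step + settle_time and abs(v - v_ref) > band:
                    return (t, v)       # first sample violating the property
            return None                 # None: the property holds on this trace

        trace = [(i * 1e-3, 1.0 - 0.5 * 2.718 ** (-i)) for i in range(50)]
        print(monitor_settles(trace, t_step=0.0, v_ref=1.0,
                              band=0.05, settle_time=0.01))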

  19. 77 FR 28401 - Information Collection Activities: Legacy Data Verification Process (LDVP); Submitted for Office...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ... and natural gas resources in a manner that is consistent with the need to make such resources... to prevent or minimize the likelihood of blowouts, loss of well control, fires, spillages, physical... the environment or to property, or endanger life or health.'' BSEE's Legacy Data Verification Process...

  20. LH2 on-orbit storage tank support trunnion design and verification

    NASA Technical Reports Server (NTRS)

    Bailey, W. J.; Fester, D. A.; Toth, J. M., Jr.

    1985-01-01

    A detailed fatigue analysis was conducted to verify the trunnion design in the reusable Cryogenic Fluid Management Facility for Shuttle flights and to assess the performance capability of the trunnion E-glass/S-glass epoxy composite material. Basic material property data at ambient and liquid hydrogen temperatures support the adequacy of the epoxy composite for the seven-mission requirement. Testing of trunnions fabricated to the flight design has verified that the strength and fatigue properties of the design are adequate to meet the requirements of seven Shuttle flights.

  1. Validation and verification of a virtual environment for training naval submarine officers

    NASA Astrophysics Data System (ADS)

    Zeltzer, David L.; Pioch, Nicholas J.

    1996-04-01

    A prototype virtual environment (VE) has been developed for training a submarine officer of the deck (OOD) to perform in-harbor navigation on a surfaced submarine. The OOD, stationed on the conning tower of the vessel, is responsible for monitoring the progress of the boat as it negotiates a marked channel, as well as verifying the navigational suggestions of the below-deck piloting team. The VE system allows an OOD trainee to view a particular harbor and associated waterway through a head-mounted display, receive spoken reports from a simulated piloting team, give spoken commands to the helmsman, and receive verbal confirmation of command execution from the helm. The task analysis of in-harbor navigation and the derivation of application requirements are briefly described. This is followed by a discussion of the implementation of the prototype. This implementation underwent a series of validation and verification assessment activities, including operational validation, data validation, and software verification of individual software modules as well as the integrated system. Validation and verification procedures are discussed with respect to the OOD application in particular, and with respect to VE applications in general.

  2. A Verification-Driven Approach to Traceability and Documentation for Auto-Generated Mathematical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W.; Fischer, Bernd

    2009-01-01

    Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.

  3. The use of robots for arms control treaty verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalowski, S.J.

    1991-01-01

    Many aspects of the superpower relationship now present a new set of challenges and opportunities, including the vital area of arms control. This report addresses one such possibility: the use of robots for the verification of arms control treaties. The central idea of this report is far from commonly accepted; in fact, it was encountered only once in the bibliographic review phase of the project. Nonetheless, the incentive for using robots is simple and coincides with that of industrial applications: to replace or supplement human activity in the performance of tasks for which human participation is unnecessary, undesirable, impossible, too dangerous, or too expensive. As in industry, robots should replace workers (in this case, arms control inspectors) only when questions of efficiency, reliability, safety, security and cost-effectiveness have been answered satisfactorily. In writing this report, it is not our purpose to strongly advocate the application of robots in verification. Rather, we wish to explore the significant aspects, pro and con, of applying experience from the field of flexible automation to the complex task of assuring arms control treaty compliance. We want to establish a framework for further discussion of this topic and to define criteria for evaluating future proposals. The author's expertise is in robots, not arms control. His practical experience has been in developing systems for use in the rehabilitation of severely disabled persons (such as quadriplegics), who can use robots for assistance during activities of everyday living, as well as in vocational applications. This creates a special interest in implementations that, in some way, include a human operator in the control scheme of the robot. As we hope to show in this report, such interactive systems offer the greatest promise of making a contribution to the challenging problems of treaty verification. 15 refs.

  4. Simulation verification techniques study: Simulation self test hardware design and techniques report

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The final results of the hardware verification task are presented. The basic objectives of the various subtasks are reviewed, along with the ground rules under which the overall task was conducted and which shaped the approach taken in deriving techniques for hardware self-test. The results of the first subtask, the definition of simulation hardware, are presented. The hardware definition is based primarily on a brief review of the simulator configurations anticipated for the shuttle training program. The results of the survey of current self-test techniques are also presented. The data sources considered in the search for current techniques are reviewed, and the survey results are presented in terms of the specific types of tests of interest for training-simulator applications: readiness tests, fault isolation tests, and incipient-fault detection techniques. The most applicable techniques were structured into software flows that are then referenced in discussions of techniques for specific subsystems.

  5. [Does action semantic knowledge influence mental simulation in sentence comprehension?].

    PubMed

    Mochizuki, Masaya; Naito, Katsuo

    2012-04-01

    This research investigated whether action semantic knowledge influences mental simulation during sentence comprehension. In Experiment 1, we confirmed that the words of face-related objects include the perceptual knowledge about the actions that bring the object to the face. In Experiment 2, we used an acceptability judgment task and a word-picture verification task to compare the perceptual information that is activated by the comprehension of sentences describing an action using face-related objects near the face (near-sentence) or far from the face (far-sentence). Results showed that participants took a longer time to judge the acceptability of the far-sentence than the near-sentence. Verification times were significantly faster when the actions in the pictures matched the action described in the sentences than when they were mismatched. These findings suggest that action semantic knowledge influences sentence processing, and that perceptual information corresponding to the content of the sentence is activated regardless of the action semantic knowledge at the end of the sentence processing.

  6. Upgrade Summer Severe Weather Tool

    NASA Technical Reports Server (NTRS)

    Watson, Leela

    2011-01-01

    The goal of this task was to upgrade the existing severe weather database by adding observations from the 2010 warm season, update the verification dataset with results from the 2010 warm season, apply statistical logistic regression analysis to the database, and develop a new forecast tool. The AMU analyzed 7 stability parameters that showed promise for providing guidance in forecasting severe weather, calculated verification statistics for the Total Threat Score (TTS), and calculated warm-season verification statistics for the 2010 season. The AMU also performed statistical logistic regression analysis on the 22-year severe weather database. The results indicated that the logistic regression equation did not show an increase in skill over the previously developed TTS. The equation showed less accuracy than TTS at predicting severe weather, little ability to distinguish between severe and non-severe weather days, and worse standard categorical accuracy measures and skill scores than TTS.
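
    For reference, the core of such an analysis is short. A minimal sketch with synthetic data (the column meanings and the AMU's actual 22-year database are not reproduced here):

        # Sketch: logistic regression of severe-weather days on stability
        # parameters; the data below are randomly generated placeholders.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 7))     # 7 stability parameters per day
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 1.0).astype(int)

        model = LogisticRegression().fit(X, y)
        p = model.predict_proba(X)[:, 1]
        print("AUC:", roc_auc_score(y, p))   # would be compared against TTS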

  7. Verification and Implementation of Operations Safety Controls for Flight Missions

    NASA Technical Reports Server (NTRS)

    Jones, Cheryl L.; Smalls, James R.; Carrier, Alicia S.

    2010-01-01

    Approximately eleven years ago, the International Space Station launched its first module from Russia, the Functional Cargo Block (FGB). Safety and Mission Assurance (S&MA) Operations (Ops) engineers played an integral part in that endeavor by executing strict flight product verification and by continuously staffing S&MA's console in the Mission Evaluation Room (MER) for that flight mission. How were these engineers able to conduct such a complicated task? They based it on product verification that consisted of ensuring that safety requirements were adequately contained in all flight products that affected crew safety. S&MA Ops engineers apply both systems engineering and project management principles in order to gain an appropriate level of technical knowledge necessary to perform thorough reviews covering the affected subsystem(s). They also ensured that mission priorities were carried out with great detail and success.

  8. Evaluation of alternatives for trichlorotrifluoroethane (CFC-113) to clean and verify liquid oxygen systems

    NASA Technical Reports Server (NTRS)

    Morris, Michelle L.

    1996-01-01

    NASA Langley Research Center (LaRC) investigated several alternatives to the use of trichlorotrifluoroethane (CFC-113) in oxygen cleaning and verification. Alternatives investigated include several replacement solvents, Non-Destructive Evaluation (NDE), and Total Organic Carbon (TOC) analysis. Among the solvents, 1,1-dichloro-1-fluoroethane (HCFC 141b) and dichloropentafluoropropane (HCFC 225) are the most suitable alternatives for cleaning and verification. However, use of HCFC 141b is restricted, HCFC 225 introduces toxicity hazards, and the NDE and TOC methods of verification are not suitable for processes at LaRC. Therefore, the interim recommendation is to use CFC-113 sparingly for the very difficult cleaning tasks where safety is critical and to use HCFC 225 to clean components in a controlled laboratory environment. Meanwhile, evaluation must continue on new solvents and procedures to find one suited to LaRC's oxygen cleaning needs.

  9. Experimental measurement-device-independent verification of quantum steering

    NASA Astrophysics Data System (ADS)

    Kocsis, Sacha; Hall, Michael J. W.; Bennet, Adam J.; Saunders, Dylan J.; Pryde, Geoff J.

    2015-01-01

    Bell non-locality between distant quantum systems—that is, joint correlations which violate a Bell inequality—can be verified without trusting the measurement devices used, nor those performing the measurements. This leads to unconditionally secure protocols for quantum information tasks such as cryptographic key distribution. However, complete verification of Bell non-locality requires high detection efficiencies, and is not robust to typical transmission losses over long distances. In contrast, quantum or Einstein-Podolsky-Rosen steering, a weaker form of quantum correlation, can be verified for arbitrarily low detection efficiencies and high losses. The cost is that current steering-verification protocols require complete trust in one of the measurement devices and its operator, allowing only one-sided secure key distribution. Here we present measurement-device-independent steering protocols that remove this need for trust, even when Bell non-locality is not present. We experimentally demonstrate this principle for singlet states and states that do not violate a Bell inequality.

  10. Formal verification of human-automation interaction

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Heymann, Michael

    2002-01-01

    This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training material (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces.

  11. Experimental measurement-device-independent verification of quantum steering.

    PubMed

    Kocsis, Sacha; Hall, Michael J W; Bennet, Adam J; Saunders, Dylan J; Pryde, Geoff J

    2015-01-07

    Bell non-locality between distant quantum systems--that is, joint correlations which violate a Bell inequality--can be verified without trusting the measurement devices used, nor those performing the measurements. This leads to unconditionally secure protocols for quantum information tasks such as cryptographic key distribution. However, complete verification of Bell non-locality requires high detection efficiencies, and is not robust to typical transmission losses over long distances. In contrast, quantum or Einstein-Podolsky-Rosen steering, a weaker form of quantum correlation, can be verified for arbitrarily low detection efficiencies and high losses. The cost is that current steering-verification protocols require complete trust in one of the measurement devices and its operator, allowing only one-sided secure key distribution. Here we present measurement-device-independent steering protocols that remove this need for trust, even when Bell non-locality is not present. We experimentally demonstrate this principle for singlet states and states that do not violate a Bell inequality.

  12. Highly efficient simulation environment for HDTV video decoder in VLSI design

    NASA Astrophysics Data System (ADS)

    Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter

    2002-01-01

    With the increase in the complexity of VLSI designs, especially SoCs (Systems on Chip) such as an MPEG-2 video decoder with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove to be very slow and costly, and it is difficult to perform full verification until late in the design process. These steps therefore become the bottleneck of the HDTV video decoder design procedure and strongly influence its time-to-market. In this paper, the architecture of the hardware/software interface of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform is proposed to detect and correct errors in the early design stage, based on the MPEG-2 video decoding algorithm. The application of HSMS to the target system can be achieved by employing several approaches introduced here, which speed up the simulation and verification task without decreasing performance.

  13. Linear models to perform treaty verification tasks for enhanced information security

    DOE PAGES

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; ...

    2016-11-12

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.

  14. Linear models to perform treaty verification tasks for enhanced information security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.

  15. Linear models to perform treaty verification tasks for enhanced information security

    NASA Astrophysics Data System (ADS)

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; Hilton, Nathan R.; Marleau, Peter A.

    2017-02-01

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
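
    The Hotelling observer at the heart of these three records is a plain linear discriminant. A minimal numpy sketch with simulated counts (the GEANT4 models and the channelized variant are not reproduced here):

        # Sketch: Hotelling observer w = S^-1 (mu1 - mu0) on binned detector
        # data; means, covariance, and trial counts are illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        mu0, mu1 = np.zeros(16), np.full(16, 0.3)   # two hypotheses
        S = np.eye(16) + 0.1                        # shared covariance
        w = np.linalg.solve(S, mu1 - mu0)           # Hotelling template

        x0 = rng.multivariate_normal(mu0, S, 500)   # "not accountable"
        x1 = rng.multivariate_normal(mu1, S, 500)   # "treaty accountable"
        t0, t1 = x0 @ w, x1 @ w                     # scalar test statistics

        auc = (t1[:, None] > t0[None, :]).mean()    # Mann-Whitney AUC estimate
        print("area under ROC curve:", auc)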

  16. Advanced Risk Reduction Tool (ARRT) Special Case Study Report: Science and Engineering Technical Assessments (SETA) Program

    NASA Technical Reports Server (NTRS)

    Kirsch, Paul J.; Hayes, Jane; Zelinski, Lillian

    2000-01-01

    This special case study report presents the Science and Engineering Technical Assessments (SETA) team's findings from exploring how the underlying models of the Advanced Risk Reduction Tool (ARRT) identify, estimate, and integrate Independent Verification & Validation (IV&V) activities. The special case study was conducted under the provisions of SETA Contract Task Order (CTO) 15 and the approved technical approach documented in the CTO-15 Modification #1 Task Project Plan.

  17. MESA: Message-Based System Analysis Using Runtime Verification

    NASA Technical Reports Server (NTRS)

    Shafiei, Nastaran; Tkachuk, Oksana; Mehlitz, Peter

    2017-01-01

    In this paper, we present a novel approach and framework for runtime verification of large, safety-critical messaging systems. This work was motivated by verifying the System Wide Information Management (SWIM) project of the Federal Aviation Administration (FAA). SWIM provides live air traffic, site, and weather data streams for the whole National Airspace System (NAS), which can easily amount to several hundred messages per second. Such safety-critical systems cannot be instrumented; therefore, verification and monitoring have to happen using a nonintrusive approach, by connecting to a variety of network interfaces. Due to the large number of potential properties to check, the verification framework needs to support efficient formulation of properties with a suitable Domain Specific Language (DSL). Our approach is to utilize a distributed system that is geared towards connectivity and scalability and interface it at the message-queue level to a powerful verification engine. We implemented our approach in the tool MESA: Message-Based System Analysis, which leverages the open source projects RACE (Runtime for Airspace Concept Evaluation) and TraceContract. RACE is a platform for instantiating and running highly concurrent and distributed systems; it enables connectivity to SWIM and scalability. TraceContract is a runtime verification tool that allows for checking traces against properties specified in a powerful DSL. We applied our approach to verify a SWIM service against several requirements. We found errors such as duplicate and out-of-order messages.
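
    The two violated properties are simple to state over a message stream. A minimal sketch, assuming messages reduce to (track id, sequence number) pairs (MESA itself expresses such properties in TraceContract's Scala DSL):

        # Sketch: flag duplicate and out-of-order messages per track.
        def check_stream(messages):
            last, seen = {}, set()
            for track, seq in messages:
                if (track, seq) in seen:
                    yield ("duplicate", track, seq)
                elif seq < last.get(track, -1):
                    yield ("out-of-order", track, seq)
                seen.add((track, seq))
                last[track] = max(last.get(track, -1), seq)

        stream = [("AAL1", 1), ("AAL1", 2), ("AAL1", 2), ("AAL1", 0)]
        print(list(check_stream(stream)))  # one duplicate, one out-of-order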

  18. Sampling and Analysis Plan for Verification Sampling of LANL-Derived Residual Radionuclides in Soils within Tract A-18-2 for Land Conveyance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruedig, Elizabeth

    Public Law 105-119 directs the U.S. Department of Energy (DOE) to convey or transfer parcels of land to the Incorporated County of Los Alamos or their designees and to the Department of Interior, Bureau of Indian Affairs, in trust for the Pueblo de San Ildefonso. Los Alamos National Security is tasked to support DOE in conveyance and/or transfer of identified land parcels no later than September 2022. Under DOE Order 458.1, Radiation Protection of the Public and the Environment (O458.1, 2013) and Los Alamos National Laboratory (LANL or the Laboratory) implementing Policy 412 (P412, 2014), real property with the potential to contain residual radioactive material must meet the criteria for clearance and release to the public. This Sampling and Analysis Plan (SAP) is a second investigation of Tract A-18-2 for the purpose of verifying the previous sampling results (LANL 2017). This sample plan requires 18 project-specific soil samples for use in radiological clearance decisions consistent with LANL Procedure ENV-ES-TP-238 (2015a) and guidance in the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM, 2000). The sampling work will be conducted by LANL, and samples will be evaluated by a LANL-contracted independent lab. However, there will be federal review (verification) of all steps of the sampling process.

  19. 48 CFR 752.245-70 - Government property-USAID reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Clauses 752.245-70 Government property—USAID reporting requirements. In response to a GAO audit... USAID. Property Inventory Verifications I attest that (1) physical inventories of Government property... property in our possession are in agreement with such inventories; and (3) the total of the detailed...

  20. Symbolic Computation of Strongly Connected Components Using Saturation

    NASA Technical Reports Server (NTRS)

    Zhao, Yang; Ciardo, Gianfranco

    2010-01-01

    Finding strongly connected components (SCCs) in the state-space of discrete-state models is a critical task in formal verification of LTL and fair CTL properties, but the potentially huge number of reachable states and SCCs constitutes a formidable challenge. This paper is concerned with computing the sets of states in SCCs or terminal SCCs of asynchronous systems. Because of its advantages in many applications, we employ saturation on two previously proposed approaches: the Xie-Beerel algorithm and transitive closure. First, saturation speeds up state-space exploration when computing each SCC in the Xie-Beerel algorithm. Then, our main contribution is a novel algorithm to compute the transitive closure using saturation. Experimental results indicate that our improved algorithms achieve a clear speedup over previous algorithms in some cases. With the help of the new transitive closure computation algorithm, up to 10(exp 150) SCCs can be explored within a few seconds.
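
    The Xie-Beerel step the paper accelerates is easy to state in explicit-state form (the paper's contribution, saturation-based symbolic computation over decision diagrams, is not reproduced here): the SCC of a pivot state is the intersection of its forward and backward reachable sets.

        # Sketch: explicit-state Xie-Beerel-style SCC enumeration.
        def reach(graph, start, reverse=False):
            edges = graph if not reverse else {
                u: {v for v in graph if u in graph[v]} for u in graph}
            seen, stack = {start}, [start]
            while stack:
                for v in edges[stack.pop()]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            return seen

        def sccs(graph):
            pending = set(graph)
            while pending:
                pivot = next(iter(pending))
                scc = reach(graph, pivot) & reach(graph, pivot, reverse=True)
                yield scc
                pending -= scc   # the real algorithm recurses on partitions

        g = {1: {2}, 2: {3}, 3: {1, 4}, 4: set()}
        print(list(sccs(g)))     # [{1, 2, 3}, {4}] (order may vary)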

  1. A New Approach to Defining Human Touch Temperature Standards

    NASA Technical Reports Server (NTRS)

    Ungar, Eugene; Stroud, Kenneth

    2010-01-01

    Defining touch temperature limits for skin contact with both hot and cold objects is important to prevent pain and skin damage, which may affect task performance or become a safety concern. Pain and skin damage depend on the skin temperature during contact, which depends on the contact thermal conductance, the object's initial temperature, and its material properties. However, previous spacecraft standards have incorrectly defined touch temperature limits in terms of a single object temperature value for all materials, or have provided limited material-specific values which do not cover the gamut of likely designs. A new approach has been developed for updated NASA standards, which defines touch temperature limits in terms of skin temperature at pain onset for bare skin contact with hot and cold objects. The authors have developed an analytical verification method for safe hot and cold object temperatures for contact times from 1 second to infinity.
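
    A first-order model consistent with this approach (the standard contact of two semi-infinite solids, not necessarily the authors' exact verification method) weights the initial temperatures by the thermal effusivities of skin and object:

        \[
          T_{\mathrm{contact}} \;=\;
          \frac{e_{\mathrm{skin}} T_{\mathrm{skin}} + e_{\mathrm{obj}} T_{\mathrm{obj}}}
               {e_{\mathrm{skin}} + e_{\mathrm{obj}}},
          \qquad e \;=\; \sqrt{k \rho c}.
        \]

    This makes explicit why a single object-temperature limit cannot cover all materials: a high-effusivity metal drives the skin much closer to the object temperature than a low-effusivity plastic at the same temperature.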

  2. A New Approach to Defining Human Touch Temperature Standards

    NASA Technical Reports Server (NTRS)

    Ungar, Eugene; Stroud, Kenneth

    2009-01-01

    Defining touch temperature limits for skin contact with both hot and cold objects is important to prevent pain and skin damage, which may affect task performance or become a safety concern. Pain and skin damage depend on the resulting skin temperature during contact, which depends on the object's initial temperature, its material properties and its ability to transfer heat. However, previous spacecraft standards have incorrectly defined touch temperature limits in terms of a single object temperature value for all materials, or have provided limited material-specific values which do not cover the gamut of most designs. A new approach is being used in new NASA standards, which defines touch temperature limits in terms of skin temperature at pain onset for bare skin contact with hot and cold objects. The authors have developed an analytical verification method for safe hot and cold object temperatures for contact times from 1 second to infinity.

  3. Development of Lightweight Material Composites to Insulate Cryogenic Tanks for 30-Day Storage in Outer Space

    NASA Technical Reports Server (NTRS)

    Krause, D. R.

    1972-01-01

    A conceptual design was developed for an MLI system which will meet the design constraints of an ILRV used for 7- to 30-day missions. The ten tasks are briefly described: (1) material survey and procurement, material property tests, and selection of composites to be considered; (2) definition of environmental parameters and tooling requirements, and thermal and structural design verification test definition; (3) definition of tanks and associated hardware to be used, and definition of MLI concepts to be considered; (4) thermal analyses, including purge, evacuation, and reentry repressurization analyses; (5) structural analyses (6) thermal degradation tests of composite and structural tests of fastener; (7) selection of MLI materials and system; (8) definition of a conceptual MLI system design; (9) evaluation of nondestructive inspection techniques and definition of procedures for repair of damaged areas; and (10) preparation of preliminary specifications.

  4. A verification strategy for web services composition using enhanced stacked automata model.

    PubMed

    Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali

    2015-01-01

    Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique for its implementation is web services. An individual service offered by a service provider may represent limited business functionality; however, by composing individual services from different service providers, a composite service describing the entire business process of an enterprise can be built. Many standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides an initial framework for an XML-based specification language for defining and implementing business process workflows for web services. The main problem with most realistic approaches to service composition is the verification of the composed web services, which must rely on formal verification methods to ensure the correctness of the composed services. A few research works in the literature address the verification of web services for deterministic systems; moreover, the existing models do not address verification properties such as dead transitions, deadlock, reachability, and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) is proposed. The correctness properties of the non-deterministic system are evaluated based on properties such as dead transitions, deadlock, safety, liveness, and reachability. Initially, web services are composed using the Business Process Execution Language for Web Services (BPEL4WS); the composition is converted into an ESAM (a combination of a Muller Automaton (MA) and a Push-Down Automaton (PDA)) and then transformed into Promela, the input language of the Simple ProMeLa Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results revealed better performance in finding dead transitions and deadlocks compared to existing models.
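
    Two of the listed properties reduce to plain graph searches over the composed service's state space. A minimal explicit-state sketch (the paper's ESAM/SPIN toolchain is not reproduced here; state names are hypothetical BPEL-like activities, and a real tool would distinguish intended final states from deadlocks):

        # Sketch: reachability and deadlock checks on a transition system.
        from collections import deque

        def analyze(transitions, initial):
            reachable, queue = {initial}, deque([initial])
            while queue:
                for nxt in transitions.get(queue.popleft(), set()):
                    if nxt not in reachable:
                        reachable.add(nxt)
                        queue.append(nxt)
            dead = {s for s in reachable if not transitions.get(s)}
            unreachable = set(transitions) - reachable
            return reachable, dead, unreachable

        ts = {"start": {"invoke"}, "invoke": {"reply"},
              "reply": set(), "orphan": {"reply"}}
        print(analyze(ts, "start"))  # 'reply' has no exit; 'orphan' unreachable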

  5. Design for Verification: Using Design Patterns to Build Reliable Systems

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)

    2003-01-01

    Components so far have been mainly used in commercial software development to reduce time to market. While some effort has been spent on formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of a more rigid testing of pre-fabricated components. In contrast to this, Design for Verification (D4V) puts the focus on component specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles leading to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, T. B.; Bannochie, C. J.

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of verification of Macrobatch (Salt Batch) 11 for processing by the Interim Salt Disposition Program (ISDP). This document reports characterization data on the samples of Tank 21H and fulfills the requirements of Deliverable 3 of the Technical Task Request (TTR).

  7. 48 CFR 16.505 - Ordering.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... factors in the selection decision. (iii) Orders exceeding $5 million. For task or delivery orders in... procedures in 5.705. (11) When using the Governmentwide commercial purchase card as a method of payment, orders at or below the micro-purchase threshold are exempt from verification in the Central Contractor...

  8. Expert system verification and validation study. Phase 2: Requirements identification. Delivery 1: Updated survey report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose is to report the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of Expert Systems.

  9. Test and training simulator for ground-based teleoperated in-orbit servicing

    NASA Technical Reports Server (NTRS)

    Schaefer, Bernd E.

    1989-01-01

    For the post-IOC (In-Orbit Construction) phase of COLUMBUS, it is intended to use robotic devices for the routine operations of ground-based teleoperated in-orbit servicing. A hardware simulator for verification of the relevant in-orbit operations technologies, the Servicing Test Facility, is necessary; it will mainly support the Flight Control Center for the Manned Space Laboratories with operational tasks such as system simulation, training of teleoperators, parallel simulation run simultaneously with actual in-orbit activities, and verification of the ground operations segment for telerobotics. The present status of the definition of the facility's functional and operational concept is described.

  10. Classical verification of quantum circuits containing few basis changes

    NASA Astrophysics Data System (ADS)

    Demarie, Tommaso F.; Ouyang, Yingkai; Fitzsimons, Joseph F.

    2018-04-01

    We consider the task of verifying the correctness of quantum computation for a restricted class of circuits which contain at most two basis changes. This contains circuits giving rise to the second level of the Fourier hierarchy, the lowest level for which there is an established quantum advantage. We show that when the circuit has an outcome with probability at least the inverse of some polynomial in the circuit size, the outcome can be checked in polynomial time with bounded error by a completely classical verifier. This verification procedure is based on random sampling of computational paths and is only possible given knowledge of the likely outcome.

  11. Design of the software development and verification system (SWDVS) for shuttle NASA study task 35

    NASA Technical Reports Server (NTRS)

    Drane, L. W.; Mccoy, B. J.; Silver, L. W.

    1973-01-01

    An overview of the Software Development and Verification System (SWDVS) for the space shuttle is presented. The design considerations, goals, assumptions, and major features of the design are examined. A scenario that shows three persons involved in flight software development using the SWDVS in response to a program change request is developed. The SWDVS is described from the standpoint of different groups of people with different responsibilities in the shuttle program to show the functional requirements that influenced the SWDVS design. The software elements of the SWDVS that satisfy the requirements of the different groups are identified.

  12. Simulation of Laboratory Tests of Steel Arch Support

    NASA Astrophysics Data System (ADS)

    Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel; Pacześniowski, Krzysztof

    2017-03-01

    The total load-bearing capacity of steel arch yielding roadway supports is among their most important characteristics. These values can be obtained in two ways: experimental measurements in a specialized laboratory, or computer modelling by FEM. Experimental measurements are significantly more expensive and more time-consuming. A computer model, however, is very valuable only once properly tuned, and such tuning requires verification against experiment. In the cooperating workplaces of GIG Katowice, VSB-Technical University of Ostrava, and the Institute of Geonics ASCR, this verification was successful. The present article discusses the conditions and results of this verification for static problems. The output is a tuned computer model, which may be used for further calculations to obtain the load-bearing capacity of other types of steel arch supports. The effects of changes in other parameters, such as the material properties of the steel, torque magnitudes, and friction coefficient values, can then be determined relatively quickly by changing the corresponding properties of the modelled steel arch supports.

  13. An Italian battery for the assessment of semantic memory disorders.

    PubMed

    Catricalà, Eleonora; Della Rosa, Pasquale A; Ginex, Valeria; Mussetti, Zoe; Plebani, Valentina; Cappa, Stefano F

    2013-06-01

    We report the construction and standardization of a new comprehensive battery of tests for the assessment of semantic memory disorders. The battery is constructed on a common set of 48 stimuli, belonging to both living and non-living categories, rigidly controlled for several confounding variables, and is based on an empirically derived corpus of semantic features. It includes six tasks, in order to assess semantic memory through different modalities of input and output: two naming tasks, one with colored pictures and the other in response to an oral description, a word-picture matching task, a picture sorting task, a free generation of features task and a sentence verification task. Normative data on 106 Italian subjects pooled across homogenous subgroups for age, sex and education are reported. The new battery allows an in-depth investigation of category-specific disorders and of progressive semantic memory deficits at features level, overcoming some of the limitations of existing tests.

  14. An evaluation of reading comprehension of expository text in adults with traumatic brain injury.

    PubMed

    Sohlberg, McKay Moore; Griffiths, Gina G; Fickas, Stephen

    2014-05-01

    This project was conducted to obtain information about reading problems of adults with traumatic brain injury (TBI) with mild-to-moderate cognitive impairments and to investigate how these readers respond to reading comprehension strategy prompts integrated into digital versions of text. Participants from 2 groups, adults with TBI (n = 15) and matched controls (n = 15), read 4 different 500-word expository science passages linked to either a strategy prompt condition or a no-strategy prompt condition. The participants' reading comprehension was evaluated using sentence verification and free recall tasks. The TBI and control groups exhibited significant differences on 2 of the 5 reading comprehension measures: paraphrase statements on a sentence verification task and communication units on a free recall task. Unexpected group differences were noted on the participants' prerequisite reading skills. For the within-group comparison, participants showed significantly higher reading comprehension scores on 2 free recall measures: words per communication unit and type-token ratio. There were no significant interactions. The results help to elucidate the nature of reading comprehension in adults with TBI with mild-to-moderate cognitive impairments and endorse further evaluation of reading comprehension strategies as a potential intervention option for these individuals. Future research is needed to better understand how individual differences influence a person's reading and response to intervention.

  15. The role of visual and spatial working memory in forming mental models derived from survey and route descriptions.

    PubMed

    Meneghetti, Chiara; Labate, Enia; Pazzaglia, Francesca; Hamilton, Colin; Gyselinck, Valérie

    2017-05-01

    This study examines the involvement of spatial and visual working memory (WM) in the construction of flexible spatial models derived from survey and route descriptions. Sixty young adults listened to environment descriptions, 30 from a survey perspective and the other 30 from a route perspective, while they performed spatial (spatial tapping [ST]) and visual (dynamic visual noise [DVN]) secondary tasks - believed to overload the spatial and visual working memory (WM) components, respectively - or no secondary task (control, C). Their mental representations of the environment were tested by free recall and a verification test with both route and survey statements. Results showed that, for both recall tasks, accuracy was worse in the ST than in the C or DVN conditions. In the verification test, the effect of both ST and DVN was a decreasing accuracy for sentences testing spatial relations from the opposite perspective to the one learnt than if the perspective was the same; only ST had a stronger interference effect than the C condition for sentences from the opposite perspective from the one learnt. Overall, these findings indicate that both visual and spatial WM, and especially the latter, are involved in the construction of perspective-flexible spatial models. © 2016 The British Psychological Society.

  16. Dental Laboratory Technology. Project Report Phase I with Research Findings.

    ERIC Educational Resources Information Center

    Sappe', Hoyt; Smith, Debra S.

    This report provides results of Phase I of a project that researched the occupational area of dental laboratory technology, established appropriate committees, and conducted task verification. These results are intended to guide development of a program designed to train dental laboratory technicians. Section 1 contains general information:…

  17. UAS Integration in the NAS Project: Part Task 6 V & V Simulation: Primary Results

    NASA Technical Reports Server (NTRS)

    Rorie, Conrad; Fern, Lisa; Shively, Jay; Santiago, Confesor

    2016-01-01

    This presentation gives preliminary results from the final V&V (Verification and Validation) activity on the RTCA (Radio Technical Commission for Aeronautics) SC (Special Committee)-228 DAA (Detect and Avoid) HMI (Human-Machine Interface) requirements for display alerting and guidance.

  18. Phase 2 of the Array Automated Assembly Task for the Low Cost Silicon Solar Array Project

    NASA Technical Reports Server (NTRS)

    Wihl, M.; Torro, J.; Scheinine, A.; Anderson, J.

    1978-01-01

    An automated process sequence to manufacture photovoltaic modules at a capacity of approximately 500 MW per year and a cost of approximately $0.50 per peak watt is described. Verification tests were performed and are reported along with cost predictions.

  19. Environmental Horticulture. Project Report Phase I with Research Findings.

    ERIC Educational Resources Information Center

    Bachler, Mike; Sappe', Hoyt

    This report provides results of Phase I of a project that researched the occupational area of environmental horticulture, established appropriate committees, and conducted task verification. These results are intended to guide development of a program designed to address the needs of the horticulture field. Section 1 contains general information:…

  20. Abstract for 1999 Rational Software User Conference

    NASA Technical Reports Server (NTRS)

    Dunphy, Julia; Rouquette, Nicolas; Feather, Martin; Tung, Yu-Wen

    1999-01-01

    We develop spacecraft fault-protection software at NASA/JPL. Challenges exemplified by our task: (1) high-quality systems: need for extensive validation and verification; (2) multi-disciplinary context: involves experts from diverse areas; (3) embedded systems: must adapt to external practices, notations, etc.; and (4) development pressures: NASA's mandate of "better, faster, cheaper".

  1. 76 FR 34713 - Proposed Establishment of a Federally Funded Research and Development Center-Third Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ... (FFRDC) to facilitate the modernization of business processes and supporting systems and their operations... processes and supporting systems and their operations. Some of the broad task areas that will be utilized..., organizational planning, research and development, continuous process improvement, Independent Verification and...

  2. Commercial Art. Project Report Phase I with Research Findings.

    ERIC Educational Resources Information Center

    Brown, Ted; Sappe', Hoyt

    This report provides results of Phase I of a project that researched the occupational area of commercial art, established appropriate committees, and conducted task verification. These results are intended to guide development of a program designed to train commercial artists. Section 1 contains general information: purpose of Phase I; description…

  3. Instrumentation Technology. Project Report Phase I with Research Findings.

    ERIC Educational Resources Information Center

    Sappe', Hoyt; Squires, Sheila S.

    This report provides results of Phase I of a project that researched the occupational area of instrumentation technology, established appropriate committees, and conducted task verification. These results are intended to guide development of a program designed to train instrumentation technicians. Section 1 contains general information: purpose of…

  4. Advanced software techniques for data management systems. Volume 1: Study of software aspects of the phase B space shuttle avionics system

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1972-01-01

    An overview of the executive system design task is presented. The flight software executive system, software verification, phase B baseline avionics system review, higher order languages and compilers, and computer hardware features are also discussed.

  5. Action-based verification of RTCP-nets with CADP

    NASA Astrophysics Data System (ADS)

    Biernacki, Jerzy; Biernacka, Agnieszka; Szpyrka, Marcin

    2015-12-01

    The paper presents an algorithm for translating coverability graphs of RTCP-nets (real-time coloured Petri nets) into the Aldebaran format. The approach provides the possibility of automatic RTCP-net verification using the model checking techniques provided by the CADP toolbox. An actual fire alarm control panel system has been modelled and several of its crucial properties have been verified to demonstrate the usability of the approach.
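
    The Aldebaran (.aut) format targeted by such a translation is a simple textual listing of labelled transitions. As a hedged illustration, the sketch below writes a small hypothetical state graph (standing in for a coverability graph whose timing and colour information has already been flattened into labels) in that format; the paper's actual translation algorithm is not reproduced here.

```python
# Minimal sketch: emitting a state graph in CADP's Aldebaran (.aut) format.
# The graph below is a hypothetical stand-in for an RTCP-net coverability graph.

def write_aut(path, initial, transitions, n_states):
    """transitions: list of (src, label, dst) triples with integer states."""
    with open(path, "w") as f:
        # Header: des (initial-state, number-of-transitions, number-of-states)
        f.write(f"des ({initial}, {len(transitions)}, {n_states})\n")
        for src, label, dst in transitions:
            f.write(f'({src}, "{label}", {dst})\n')

# Toy fire-alarm-like graph: 0 = idle, 1 = alarm raised, 2 = panel notified.
edges = [(0, "detect", 1), (1, "notify", 2), (2, "reset", 0)]
write_aut("model.aut", initial=0, transitions=edges, n_states=3)
```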

  6. Revisiting Training and Verification Process Implementation for Risk Reduction on New Missions at NASA Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Bryant, Larry W.; Fragoso, Ruth S.

    2007-01-01

    In 2003 we proposed an effort to develop a core program of standardized training and verification practices and standards against which the implementation of these practices could be measured. The purpose was to provide another means of risk reduction for deep space missions, to preclude a repeat of the tragedies of the 1998 Mars missions. We identified six areas where the application of standards and standardization would benefit the overall readiness process for flight projects at JPL. These are Individual Training, Team Training, Interface and Procedure Development, Personnel Certification, Interface and Procedure Verification, and Operations Readiness Testing. In this paper we discuss the progress that has been made in developing the proposed infrastructure in each of these areas. Specifically, we address the Position Training and Certification Standards that are now available for each operational position found on our Flight Operations Teams (FOT). We also discuss the MGSS Baseline Flight Operations Team Training Plan, which can be tailored for each new flight project at JPL. As these tasks have progressed, the climate and emphasis for Training and for V and V at JPL have changed, and we have learned about the expansion, growth, and limitations in the roles of traditional positions at JPL such as the Project's Training Engineer, V and V Engineer, and Operations Engineer. The need to keep a tight rein on budgets has led to a merging and/or reduction of these positions, which poses challenges to individual capacities and capabilities. We examine the evolution of these processes and the roles involved while taking a look at the impact, or potential impact, of our proposed training-related infrastructure tasks. As we conclude our examination of the changes taking place for new flight projects, we see that proceeding with our proposed tasks and adapting them to the changing climate remains an important element in reducing the risk in the challenging business of space exploration.

  7. Real-Time Kennedy Space Center and Cape Canaveral Air Force Station High-Resolution Model Implementation and Verification

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn; Watson, Leela R.

    2015-01-01

    NASA's Launch Services Program, Ground Systems Development and Operations, Space Launch System and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). Examples include determining if they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 km Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting that the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the high-resolution WRF Environmental Modeling System (EMS) model configured by the AMU (Watson 2013) in real time. Implementing a real-time version of the ER WRF-EMS would generate a larger database of model output than in the previous AMU task for determining model performance, and allow the AMU more control over, and access to, the model output archive. The tasking group agreed to this proposal; the AMU therefore implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The AMU also calculated verification statistics to determine model performance compared to observational data. Finally, the AMU made the model output available on the AMU Advanced Weather Interactive Processing System II (AWIPS II) servers, which allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations (RWO) AWIPS II client computers and conduct real-time subjective analyses.
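
    Model verification of this kind typically reduces to computing point statistics between forecast and observed values. The sketch below shows two of the most common ones (bias and RMSE) on hypothetical station data; it is illustrative only and does not reproduce the AMU's actual verification suite.

```python
# Illustrative forecast-verification statistics: bias and RMSE.
import numpy as np

def verify(forecast, observed):
    forecast, observed = np.asarray(forecast), np.asarray(observed)
    error = forecast - observed
    return {"bias": error.mean(),                  # mean error
            "rmse": np.sqrt((error ** 2).mean())}  # root-mean-square error

# Hypothetical 2-m temperatures (deg C) at one station: model vs. observation.
print(verify([28.1, 29.4, 30.2], [27.5, 29.9, 30.0]))
```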

  8. Virtual Platform for SEE Robustness Verification of Bootloader Embedded Software on Board Solar Orbiter's Energetic Particle Detector

    NASA Astrophysics Data System (ADS)

    Da Silva, A.; Sánchez Prieto, S.; Polo, O.; Parra Espada, P.

    2013-05-01

    Because of the tough robustness requirements in space software development, it is imperative to carry out verification tasks at a very early development stage to ensure that the implemented exception mechanisms work properly. All this should be done long before the real hardware is available. But even if real hardware is available, the verification of software fault tolerance mechanisms can be difficult, since real faulty situations must be systematically and artificially brought about, which can be impossible on real hardware. To solve this problem the Alcala Space Research Group (SRG) has developed a LEON2 virtual platform (Leon2ViP) with fault injection capabilities. This way it is possible to run the exact same target binary software as runs on the physical system, in a more controlled and deterministic environment, allowing stricter requirements verification. Leon2ViP enables unmanned and tightly focused fault injection campaigns, not possible otherwise, in order to expose and diagnose flaws in the software implementation early. Furthermore, the use of a virtual hardware-in-the-loop approach makes it possible to carry out preliminary integration tests with the spacecraft emulator or the sensors. The use of Leon2ViP has meant a significant improvement, in both time and cost, in the development and verification processes of the Instrument Control Unit boot software on board Solar Orbiter's Energetic Particle Detector.
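
    To make the idea of such a campaign concrete, here is a minimal sketch of single-bit-flip fault injection against an emulated memory image. Everything in it (the integrity check, the memory model, the campaign driver) is a hypothetical stand-in, not Leon2ViP's actual interface.

```python
# Conceptual single-event-upset campaign: flip one bit per run and observe
# whether the software's own check notices. A real campaign would execute
# the target binary on the emulator instead of this toy check.
import random

def boot_software(memory, expected_sum):
    """Hypothetical stand-in for the boot code's integrity check."""
    return "detected" if sum(memory) != expected_sum else "silent"

def campaign(n_runs, mem_size=1024, seed=0):
    rng = random.Random(seed)
    results = {"detected": 0, "silent": 0}
    for _ in range(n_runs):
        memory = bytearray(rng.randrange(256) for _ in range(mem_size))
        expected = sum(memory)
        addr, bit = rng.randrange(mem_size), rng.randrange(8)
        memory[addr] ^= 1 << bit      # inject exactly one bit flip
        results[boot_software(memory, expected)] += 1
    return results

# Every flip changes the checksum, so this toy check detects all faults.
print(campaign(100))
```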

  9. The American College of Surgeons Children's Surgery Verification and Quality Improvement Program: implications for anesthesiologists.

    PubMed

    Houck, Constance S; Deshpande, Jayant K; Flick, Randall P

    2017-06-01

    The Task Force for Children's Surgical Care, an ad-hoc multidisciplinary group of invited leaders in pediatric perioperative medicine, was assembled in May 2012 to consider approaches to optimize delivery of children's surgical care in today's competitive national healthcare environment. Over the subsequent 3 years, with support from the American College of Surgeons (ACS) and Children's Hospital Association (CHA), the group established principles regarding perioperative resource standards, quality improvement and safety processes, data collection, and verification that were used to develop an ACS-sponsored Children's Surgery Verification and Quality Improvement Program (ACS CSV). The voluntary ACS CSV was officially launched in January 2017 and more than 125 pediatric surgical programs have expressed interest in verification. ACS CSV-verified programs have specific requirements for pediatric anesthesia leadership, resources, and the availability of pediatric anesthesiologists or anesthesiologists with pediatric expertise to care for infants and young children. The present review outlines the history of the ACS CSV, key elements of the program, and the standards specific to pediatric anesthesiology. As with the pediatric trauma programs initiated more than 40 years ago, this program has the potential to significantly improve surgical care for infants and children in the United States and Canada.

  10. Trajectory Based Behavior Analysis for User Verification

    NASA Astrophysics Data System (ADS)

    Pao, Hsing-Kuo; Lin, Hong-Yi; Chen, Kuan-Ta; Fadlil, Junaidillah

    Many of our activities on computers need a verification step for authorized access. The goal of verification is to distinguish the true account owner from intruders. We propose a general approach for user verification based on user trajectory inputs. The approach is labor-free for users and is likely to avoid possible copying or simulation by other non-authorized users or even automatic programs like bots. Our study focuses on finding the hidden patterns embedded in the trajectories produced by account users. We employ a Markov chain model with Gaussian distributions in its transitions to describe the behavior in the trajectory. To distinguish between two trajectories, we propose a novel dissimilarity measure combined with a manifold-learnt tuning for capturing the pairwise relationship. Based on the pairwise relationship, we can plug in any effective classification or clustering method for the detection of unauthorized access. The method can also be applied to the task of recognition, predicting the trajectory type without a pre-defined identity. Given a trajectory input, the results show that the proposed method can accurately verify the user identity, or suggest who owns the trajectory if the input identity is not provided.
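
    As a hedged illustration of the modelling idea (not the authors' exact formulation), the sketch below fits a single Gaussian over trajectory displacement steps and verifies a new trajectory by thresholding its average log-likelihood; the threshold value is hypothetical and would be tuned per user.

```python
# Toy Gaussian-step trajectory model for user verification.
import numpy as np

def fit(trajectories):
    """Fit a Gaussian over (dx, dy) displacement steps of enrolment data."""
    steps = np.vstack([np.diff(t, axis=0) for t in trajectories])
    return steps.mean(axis=0), np.cov(steps.T)

def avg_loglik(trajectory, mean, cov):
    steps = np.diff(trajectory, axis=0)
    inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
    d = steps - mean
    maha = np.einsum("ij,jk,ik->i", d, inv, d)       # Mahalanobis terms
    return (-0.5 * (maha + logdet + 2 * np.log(2 * np.pi))).mean()

def verify(trajectory, mean, cov, threshold=-8.0):   # threshold: tuned per user
    return avg_loglik(trajectory, mean, cov) > threshold

# Synthetic demo: enrol on four random walks, verify a fifth from the same user.
rng = np.random.default_rng(0)
walks = [np.cumsum(rng.normal([0.5, 0.1], 0.2, (200, 2)), axis=0) for _ in range(5)]
mean, cov = fit(walks[:4])
print(verify(walks[4], mean, cov))   # True: consistent with the enrolled model
```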

  11. Assessment of Galileo modal test results for mathematical model verification

    NASA Technical Reports Server (NTRS)

    Trubert, M.

    1984-01-01

    The modal test program for the Galileo Spacecraft was completed at the Jet Propulsion Laboratory in the summer of 1983. The multiple sine dwell method was used for the baseline test. The Galileo Spacecraft is a rather complex 2433 kg structure made of a central core on which seven major appendages representing 30 percent of the total mass are attached, resulting in a high modal density structure. The test revealed a strong nonlinearity in several major modes. This nonlinearity, discovered in the course of the test, necessitated running additional tests at unusually high response levels of up to about 21 g. The high levels of response were required to obtain a model verification valid at the level of loads for which the spacecraft was designed. Because of the high modal density and the nonlinearity, correlation between the dynamic mathematical model and the test results becomes a difficult task. Significant changes in the pre-test analytical model are necessary to establish confidence in the upgraded analytical model used for the final load verification. This verification, using a test-verified model, is required by NASA to fly the Galileo Spacecraft on the Shuttle/Centaur launch vehicle in 1986.

  12. Enrichment Assay Methods Development for the Integrated Cylinder Verification System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Leon E.; Misner, Alex C.; Hatchell, Brian K.

    2009-10-22

    International Atomic Energy Agency (IAEA) inspectors currently perform periodic inspections at uranium enrichment plants to verify UF6 cylinder enrichment declarations. Measurements are typically performed with handheld high-resolution sensors on a sampling of cylinders taken to be representative of the facility's entire product-cylinder inventory. Pacific Northwest National Laboratory (PNNL) is developing a concept to automate the verification of enrichment plant cylinders to enable 100 percent product-cylinder verification and, potentially, mass-balance calculations on the facility as a whole (by also measuring feed and tails cylinders). The Integrated Cylinder Verification System (ICVS) could be located at key measurement points to positively identify each cylinder, measure its mass and enrichment, store the collected data in a secure database, and maintain continuity of knowledge on measured cylinders until IAEA inspector arrival. The three main objectives of this FY09 project are summarized here and described in more detail in the report: (1) Develop a preliminary design for a prototype NDA system, (2) Refine PNNL's MCNP models of the NDA system, and (3) Procure and test key pulse-processing components. Progress against these tasks to date, and next steps, are discussed.

  13. Design and Mechanical Evaluation of a Capacitive Sensor-Based Indexed Platform for Verification of Portable Coordinate Measuring Instruments

    PubMed Central

    Avila, Agustín Brau; Mazo, Jorge Santolaria; Martín, Juan José Aguilar

    2014-01-01

    During the last years, the use of Portable Coordinate Measuring Machines (PCMMs) in industry has increased considerably, mostly due to their flexibility for accomplishing in-line measuring tasks as well as their reduced costs and operational advantages as compared to traditional coordinate measuring machines (CMMs). However, their operation has a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification procedures. In this work the mechanical design of an indexed metrology platform (IMP) is presented. The aim of the IMP is to increase the final accuracy and to radically simplify the calibration, identification and verification of geometrical parameter procedures of PCMMs. The IMP allows us to fix the calibrated gauge object and move the measuring instrument in such a way that it is possible to cover most of the instrument working volume, reducing the time and operator fatigue to carry out these types of procedures. PMID:24451458

  14. Design and mechanical evaluation of a capacitive sensor-based indexed platform for verification of portable coordinate measuring instruments.

    PubMed

    Avila, Agustín Brau; Mazo, Jorge Santolaria; Martín, Juan José Aguilar

    2014-01-02

    During the last years, the use of Portable Coordinate Measuring Machines (PCMMs) in industry has increased considerably, mostly due to their flexibility for accomplishing in-line measuring tasks as well as their reduced costs and operational advantages as compared to traditional coordinate measuring machines (CMMs). However, their operation has a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification procedures. In this work the mechanical design of an indexed metrology platform (IMP) is presented. The aim of the IMP is to increase the final accuracy and to radically simplify the calibration, identification and verification of geometrical parameter procedures of PCMMs. The IMP allows us to fix the calibrated gauge object and move the measuring instrument in such a way that it is possible to cover most of the instrument working volume, reducing the time and operator fatigue to carry out these types of procedures.

  15. Automated Array Assembly, Phase 2

    NASA Technical Reports Server (NTRS)

    Carbajal, B. G.

    1979-01-01

    The Automated Array Assembly Task, Phase 2 of the Low Cost Silicon Solar Array Project, is a process development task. The contract provides for the fabrication of modules from large area tandem junction cells (TJC). During this quarter, effort was focused on the design of a large area (approximately 36 sq cm) TJC and on process verification runs. The large area TJC design was optimized for minimum I squared R power losses. In the tandem junction module (TJM) activity, the cell-module interfaces were defined, module substrates were formed and heat treated, and clad metal interconnect strips were fabricated.

  16. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persist. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.
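
    The optimization step described here is, at heart, a non-linear least-squares fit of geometric parameters to capacitive-sensor readings. The sketch below shows the shape of such a fit with a deliberately toy forward model; the parameter set, the model, and the data are all hypothetical, not the paper's actual formulation.

```python
# Toy least-squares identification of per-sensor parameters from gap readings.
import numpy as np
from scipy.optimize import least_squares

def predicted_gaps(params, positions):
    """Hypothetical forward model: each of 6 sensors has an offset and a gain
    applied to the nominal gap at each indexed platform position."""
    offsets, gains = params[:6], params[6:]
    return offsets[None, :] + gains[None, :] * positions

def residuals(params, positions, measured):
    return (predicted_gaps(params, positions) - measured).ravel()

positions = np.linspace(0.9, 1.1, 12)[:, None] * np.ones((12, 6))  # 12 indexings
true = np.concatenate([np.full(6, 0.02), np.full(6, 1.01)])        # synthetic truth
measured = predicted_gaps(true, positions)
fit = least_squares(residuals, x0=np.ones(12), args=(positions, measured))
print(np.round(fit.x - true, 6))   # parameter errors ~ 0: truth recovered
```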

  17. A Preliminary Experimental Examination of Worldview Verification, Perceived Racism, and Stress Reactivity in African Americans

    PubMed Central

    Lucas, Todd; Lumley, Mark A.; Flack, John M.; Wegner, Rhiana; Pierce, Jennifer; Goetz, Stefan

    2016-01-01

    Objective According to worldview verification theory, inconsistencies between lived experiences and worldviews are psychologically threatening. These inconsistencies may be key determinants of stress processes that influence cardiovascular health disparities. This preliminary examination considers how experiencing injustice can affect perceived racism and biological stress reactivity among African Americans. Guided by worldview verification theory, it was hypothesized that responses to receiving an unfair outcome would be moderated by fairness of the accompanying decision process, and that this effect would further depend on the consistency of the decision process with preexisting justice beliefs. Method A sample of 118 healthy African American adults completed baseline measures of justice beliefs, followed by a laboratory-based social-evaluative stressor task. Two randomized fairness manipulations were implemented during the task: participants were given either high or low levels of distributive (outcome) and procedural (decision process) justice. Glucocorticoid (cortisol) and inflammatory (C-reactive protein) biological responses were measured in oral fluids, and attributions of racism were also measured. Results The hypothesized 3-way interaction was generally obtained. Among African Americans with a strong belief in justice, perceived racism, cortisol and C-reactive protein responses to low distributive justice were higher when procedural justice was low. Among African Americans with a weak belief in justice however, these responses were higher when a low level of distributive justice was coupled with high procedural justice. Conclusions Biological and psychological processes that contribute to cardiovascular health disparities are affected by consistency between individual-level and contextual justice factors. PMID:27018728

  18. Aircraft electromagnetic compatibility

    NASA Technical Reports Server (NTRS)

    Clarke, Clifton A.; Larsen, William E.

    1987-01-01

    Illustrated are aircraft architecture, electromagnetic interference environments, electromagnetic compatibility protection techniques, program specifications, tasks, and verification and validation procedures. The environments of 400 Hz power, electrical transients, and radio frequency fields are portrayed and related to thresholds of avionics electronics. Five layers of protection for avionics are defined. Recognition is given to some present-day electromagnetic compatibility weaknesses and issues, which serve to reemphasize the importance of EMC verification of equipment and parts, and their ultimate EMC validation on the aircraft. Proven standards of grounding, bonding, shielding, wiring, and packaging are laid out to help provide a foundation for a comprehensive approach to successful future aircraft design and an understanding of cost-effective EMC in an aircraft setting.

  19. Towards Formal Verification of a Separation Microkernel

    NASA Astrophysics Data System (ADS)

    Butterfield, Andrew; Sanan, David; Hinchey, Mike

    2013-08-01

    The best approach to verifying an IMA separation kernel is to use a (fixed) time-space partitioning kernel with a multiple independent levels of separation (MILS) architecture. We describe an activity that explores the cost and feasibility of doing a formal verification of such a kernel to the Common Criteria (CC) levels mandated by the Separation Kernel Protection Profile (SKPP). We are developing a Reference Specification of such a kernel, and are using higher-order logic (HOL) to construct formal models of this specification and key separation properties. We then plan to do a dry run of part of a formal proof of those properties using the Isabelle/HOL theorem prover.

  20. SU-E-T-406: Use of TrueBeam Developer Mode and API to Increase the Efficiency and Accuracy of Commissioning Measurements for the Varian EDGE Stereotactic Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, S; Gulam, M; Song, K

    2014-06-01

    Purpose: The Varian EDGE machine is a new stereotactic platform, combining Calypso and VisionRT localization systems with a stereotactic linac. The system includes TrueBeam Developer Mode, making possible the use of XML scripting for automation of linac-related tasks. This study details the use of Developer Mode to automate commissioning tasks for the Varian EDGE, thereby improving efficiency and measurement consistency. Methods: XML scripting was used for various commissioning tasks, including couch model verification, beam scanning, and isocenter verification. For couch measurements, point measurements were acquired for several field sizes (2×2, 4×4, 10×10 cm²) at 42 gantry angles for two couch models. Measurements were acquired with variations in couch position (rails in/out, couch shifted in each of the motion axes) and compared to treatment planning system (TPS)-calculated values, which were logged automatically through advanced planning interface (API) scripting functionality. For beam scanning, XML scripts were used to create custom MLC apertures. For isocenter verification, XML scripts were used to automate various Winston-Lutz-type tests. Results: For couch measurements, the time required for each set of angles was approximately 9 minutes. Without scripting, each set required approximately 12 minutes. Automated measurements required only one physicist, while manual measurements required at least two physicists to handle linac positions/beams and data recording. MLC apertures were generated outside of the TPS, and with the .xml file format, double-checking without use of the TPS/operator console was possible. Similar time efficiency gains were found for isocenter verification measurements. Conclusion: The use of XML scripting in TrueBeam Developer Mode allows for efficient and accurate data acquisition during commissioning. The efficiency improvement is most pronounced for iterative measurements, exemplified by the time savings for couch modeling measurements (approximately 10 hours). The scripting also allowed for creation of the files in advance without requiring access to the TPS. The API scripting functionality enabled efficient creation/mining of TPS data. Finally, automation reduces the potential for human error in entering linac values at the machine console, and the script provides a log of measurements acquired for each session. This research was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.
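
    The gain from scripting comes from generating long, repetitive acquisition sequences offline. The sketch below generates such a sequence as XML from a loop; note that the element and attribute names are invented for illustration and are not Varian's actual TrueBeam XML schema.

```python
# Hypothetical sketch: programmatically generating a repetitive acquisition
# sequence (e.g., one control point per gantry angle) as an XML file.
# The tag names below are placeholders, NOT the real TrueBeam schema.
import xml.etree.ElementTree as ET

root = ET.Element("BeamSequence")
for angle in range(0, 360, 30):               # one point every 30 degrees
    pt = ET.SubElement(root, "ControlPoint")
    ET.SubElement(pt, "GantryAngle").text = str(angle)
    ET.SubElement(pt, "FieldSize").text = "10x10"
    ET.SubElement(pt, "MU").text = "100"

ET.ElementTree(root).write("couch_scan.xml", xml_declaration=True)
```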

  1. ICAN/PART: Particulate composite analyzer, user's manual and verification studies

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Murthy, Pappu L. N.; Mital, Subodh K.

    1996-01-01

    A methodology for predicting the equivalent properties and constituent microstresses of particulate matrix composites, based on the micromechanics approach, is developed. These equations are integrated into a computer code, developed to predict the equivalent properties and microstresses of fiber reinforced polymer matrix composites, to form a new computer code, ICAN/PART. Details of the flowchart, input and output for ICAN/PART are described, along with examples of the input and output. Only the differences between ICAN/PART and the original ICAN code are described in detail, and the user is assumed to be familiar with the structure and usage of the original ICAN code. Detailed verification studies, utilizing finite element and boundary element analyses, are conducted in order to verify that the micromechanics methodology accurately models the mechanics of particulate matrix composites. The equivalent properties computed by ICAN/PART fall within bounds established by the finite element and boundary element results. Furthermore, constituent microstresses computed by ICAN/PART agree in an average sense with results computed using the finite element method. The verification studies indicate that the micromechanics programmed into ICAN/PART do indeed accurately model the mechanics of particulate matrix composites.
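
    The bounding behaviour noted above is easy to see with the classical Voigt (uniform-strain) and Reuss (uniform-stress) estimates, shown below for an assumed particle/matrix pair; these textbook bounds merely illustrate the kind of equivalent-property prediction involved and are not ICAN/PART's actual equations.

```python
# Voigt/Reuss bounds on the equivalent modulus of a particulate composite.
def voigt(E_p, E_m, v_p):          # uniform-strain (upper) bound
    return v_p * E_p + (1 - v_p) * E_m

def reuss(E_p, E_m, v_p):          # uniform-stress (lower) bound
    return 1.0 / (v_p / E_p + (1 - v_p) / E_m)

# Assumed properties: stiff particles in a polymer matrix, 30% volume fraction.
E_particle, E_matrix, vol_frac = 70e9, 3e9, 0.3   # Pa, Pa, fraction
print(reuss(E_particle, E_matrix, vol_frac),      # the equivalent modulus
      voigt(E_particle, E_matrix, vol_frac))      # lies between these bounds
```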

  2. Determining the mechanical properties of a radiochromic silicone-based 3D dosimeter

    NASA Astrophysics Data System (ADS)

    Kaplan, L. P.; Høye, E. M.; Balling, P.; Muren, L. P.; Petersen, J. B. B.; Poulsen, P. R.; Yates, E. S.; Skyt, P. S.

    2017-07-01

    New treatment modalities in radiotherapy (RT) enable delivery of highly conformal dose distributions in patients. This creates a need for precise dose verification in three dimensions (3D). A radiochromic silicone-based 3D dosimetry system has recently been developed. Such a dosimeter can be used for dose verification in deformed geometries, which requires knowledge of the dosimeter’s mechanical properties. In this study we have characterized the dosimeter’s elastic behaviour under tensile and compressive stress. In addition, the dose response under strain was determined. It was found that the dosimeter behaved as an incompressible hyperelastic material with a non-linear stress/strain curve and with no observable hysteresis or plastic deformation even at high strains. The volume was found to be constant within a 2% margin at deformations up to 60%. Furthermore, it was observed that the dosimeter returned to its original geometry within a 2% margin when irradiated under stress, and that the change in optical density per centimeter was constant regardless of the strain during irradiation. In conclusion, we have shown that this radiochromic silicone-based dosimeter’s mechanical properties make it a viable candidate for dose verification in deformable 3D geometries.
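
    For dose verification in deformed geometries, an incompressible hyperelastic constitutive law is the natural description of the behaviour reported. As a hedged illustration, the sketch below evaluates the standard incompressible neo-Hookean uniaxial stress response; the shear modulus value is assumed, and the paper's own material fit is not reproduced.

```python
# Incompressible neo-Hookean uniaxial response: sigma = mu * (lambda^2 - 1/lambda),
# one standard model consistent with non-linear, hysteresis-free elasticity.
import numpy as np

def uniaxial_stress(stretch, mu):
    """Cauchy stress for incompressible neo-Hookean uniaxial loading."""
    lam = np.asarray(stretch)
    return mu * (lam**2 - 1.0 / lam)

strains = np.linspace(-0.4, 0.6, 6)               # compression through tension
print(uniaxial_stress(1.0 + strains, mu=0.5e6))   # mu assumed, in Pa
```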

  3. Avionics Technology Contract Project Report Phase I with Research Findings.

    ERIC Educational Resources Information Center

    Sappe', Hoyt; Squires, Shiela S.

    This document reports on Phase I of a project that examined the occupation of avionics technician, established appropriate committees, and conducted task verification. Results of this phase provide the basic information required to develop the program standards and to guide and set up the committee structure to guide the project. Section 1…

  4. Executive Function and Children's Understanding of False Belief: How Specific Is the Relation?

    ERIC Educational Resources Information Center

    Muller, U.; Zelazo, P.D.; Imrisek, S.

    2005-01-01

    The present study examined developmental relations among understanding false belief, understanding "false" photographs, performance on the Dimensional Change Card Sort (DCCS), and performance on a picture-sentence verification task in 69 3-5-year-old children. Results showed that performance on the DCCS predicted performance on false belief…

  5. Interpersonal Congruence, Transactive Memory, and Feedback Processes: An Integrative Model of Group Learning

    ERIC Educational Resources Information Center

    London, Manuel; Polzer, Jeffrey T.; Omoregie, Heather

    2005-01-01

    This article presents a multilevel model of group learning that focuses on antecedents and consequences of interpersonal congruence, transactive memory, and feedback processes. The model holds that members' self-verification motives and situational conditions (e.g., member diversity and task demands) give rise to identity negotiation behaviors…

  6. 76 FR 64859 - Pilot Loading of Navigation and Terrain Awareness Database Updates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-19

    ... category the task of updating databases used in self-contained, front-panel or pedestal-mounted navigation... Rule This rulemaking would allow pilots of all certificated aircraft equipped with self-contained... verification, or by errors in ATC assignments which may occur during redirection of the flight. Both types of...

  7. Sheet Metal Contract. Project Report Phase I with Research Findings.

    ERIC Educational Resources Information Center

    Kirkpatrick, Thomas; Sappe', Hoyt

    This report provides results of Phase I of a project that researched the occupational area of sheet metal, established appropriate committees, and conducted task verification. These results are intended to guide development of a program designed to train sheet metal workers. Section 1 contains general information: purpose of Phase I; description…

  8. Stimulus Type, Level of Categorization, and Spatial-Frequencies Utilization: Implications for Perceptual Categorization Hierarchies

    ERIC Educational Resources Information Center

    Harel, Assaf; Bentin, Shlomo

    2009-01-01

    The type of visual information needed for categorizing faces and nonface objects was investigated by manipulating spatial frequency scales available in the image during a category verification task addressing basic and subordinate levels. Spatial filtering had opposite effects on faces and airplanes that were modulated by categorization level. The…

  9. Process development for automated solar cell and module production. Task 4: Automated array assembly

    NASA Technical Reports Server (NTRS)

    Hagerty, J. J.

    1981-01-01

    The cell preparation station was installed in its new enclosure. Operation verification tests were performed. The detailed layout drawings of the automated lamination station were produced and construction began. All major and most minor components were delivered by vendors. The station framework was built and assembly of components begun.

  10. Independent verification of plutonium decontamination on Johnston Atoll (1992--1996)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson-Nichols, M.J.; Wilson, J.E.; McDowell-Boyer, L.M.

    1998-05-01

    The Field Command, Defense Special Weapons Agency (FCDSWA) (formerly FCDNA) contracted Oak Ridge National Laboratory (ORNL) Environmental Technology Section (ETS) to conduct an independent verification (IV) of the Johnston Atoll (JA) Plutonium Decontamination Project by an interagency agreement with the US Department of Energy in 1992. The main island is contaminated with the transuranic elements plutonium and americium, and soil decontamination activities have been ongoing since 1984. FCDSWA has selected a remedy that employs a system of sorting contaminated particles from the coral/soil matrix, allowing uncontaminated soil to be reused. The objective of IV is to evaluate the effectiveness of remedial action. The IV contractor's task is to determine whether the remedial action contractor has effectively reduced contamination to levels within established criteria and whether the supporting documentation describing the remedial action is adequate. ORNL conducted four interrelated tasks from 1992 through 1996 to accomplish the IV mission. This document is a compilation and summary of those activities, in addition to a comprehensive review of the history of the project.

  11. Inverse dynamics of underactuated mechanical systems: A simple case study and experimental verification

    NASA Astrophysics Data System (ADS)

    Blajer, W.; Dziewiecki, K.; Kołodziejczyk, K.; Mazur, Z.

    2011-05-01

    Underactuated systems have fewer control inputs than degrees of freedom, m < n. Determining an input control strategy that forces such a system to complete a set of m specified motion tasks is challenging, and the existence of an explicit solution is conditional on the differential flatness of the problem. The flatness-based solution means that all 2n states and m control inputs can be algebraically expressed in terms of the m specified outputs and their time derivatives up to a certain order, which is in practice attainable only for simple systems. In this contribution the problem is posed in a more practical way as a set of index-three differential-algebraic equations, and the solution is obtained numerically. The formulation is then illustrated by a two-degree-of-freedom underactuated system composed of two rotating discs connected by a torsional spring, in which the pre-specified motion of one of the discs is actuated by the torque applied to the other disc, n = 2 and m = 1. Experimental verification of the inverse simulation control methodology is reported.
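
    For this particular two-disc system the flat, closed-form solution is in fact available, which makes it a useful sanity check for the numerical DAE approach. A minimal sketch, assuming inertias J1, J2 and spring stiffness k (values invented here) and the equations of motion J1*q1'' = k*(q2 - q1) and J2*q2'' = tau - k*(q2 - q1):

```python
# Flatness-style inverse dynamics for the two-disc/torsional-spring example:
# prescribe q1(t), recover q2(t) and the required input torque tau(t).
import sympy as sp

t = sp.symbols("t")
J1, J2, k, A, w = 0.1, 0.2, 50.0, 0.5, 4.0    # assumed values, SI units
q1 = A * sp.sin(w * t)                        # prescribed motion of disc 1
q2 = q1 + J1 * sp.diff(q1, t, 2) / k          # from J1*q1'' = k*(q2 - q1)
tau = J2 * sp.diff(q2, t, 2) + k * (q2 - q1)  # equals J2*q2'' + J1*q1''

print(sp.simplify(tau))   # torque expressed through q1 and its derivatives
```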

  12. An investigation of the effects of relevant samples and a comparison of verification versus discovery based lab design

    NASA Astrophysics Data System (ADS)

    Rieben, James C., Jr.

    This study focuses on the effects of relevance and lab design on student learning within the chemistry laboratory environment. A general chemistry conductivity of solutions experiment and an upper level organic chemistry cellulose regeneration experiment were employed. In the conductivity experiment, the two main variables studied were the effect of relevant (or "real world") samples on student learning and a verification-based lab design versus a discovery-based lab design. With the cellulose regeneration experiment, the effect of a discovery-based lab design vs. a verification-based lab design was the sole focus. Evaluation surveys consisting of six questions were used at three different times to assess student knowledge of experimental concepts. In the general chemistry laboratory portion of this study, four experimental variants were employed to investigate the effect of relevance and lab design on student learning. These variants consisted of a traditional (or verification) lab design, a traditional lab design using "real world" samples, a new lab design employing real world samples/situations using unknown samples, and the new lab design using real world samples/situations that were known to the student. Data used in this analysis were collected during the Fall 08, Winter 09, and Fall 09 terms. For the second part of this study a cellulose regeneration experiment was employed to investigate the effects of lab design. A demonstration creating regenerated cellulose "rayon" was modified and converted to an efficient and low-waste experiment. In the first variant students tested their products and verified a list of physical properties. In the second variant, students filled in a blank physical property chart with their own experimental results for the physical properties. Results from the conductivity experiment show significant student learning of the effects of concentration on conductivity and how to use conductivity to differentiate solution types with the use of real world samples. In the organic chemistry experiment, results suggest that the discovery-based design improved student retention of the chain length differentiation by physical properties relative to the verification-based design.

  13. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Results of a Space Station Data System Analysis/Architecture Study for the Goddard Space Flight Center are presented. This study, which emphasized a system engineering design for a complete, end-to-end data system, was divided into six tasks: (1) functional requirements definition; (2) options development; (3) trade studies; (4) system definitions; (5) program plan; and (6) study maintenance. The task interrelationships and documentation flow are described. Information in volume 2 is devoted to Task 3: trade studies. Trade studies have been carried out in the following areas: (1) software development test and integration capability; (2) fault tolerant computing; (3) space qualified computers; (4) distributed data base management system; (5) system integration test and verification; (6) crew workstations; (7) mass storage; (8) command and resource management; and (9) space communications. Results are presented for each task.

  14. Automatic building of a web-like structure based on thermoplastic adhesive.

    PubMed

    Leach, Derek; Wang, Liyu; Reusser, Dorothea; Iida, Fumiya

    2014-09-01

    Animals build structures to extend their control over certain aspects of the environment; e.g., orb-weaver spiders build webs to capture prey. Inspired by this behaviour, we attempt to develop robotics technology that allows a robot to automatically build structures that help it accomplish certain tasks. In this paper we show automatic building of a web-like structure with a robot arm based on thermoplastic adhesive (TPA) material. The material properties of TPA, such as elasticity, adhesiveness, and low melting temperature, make it possible for a robot to form threads across an open space by an extrusion-drawing process and then combine several of these threads into a web-like structure. The problems addressed here are discovering which parameters determine the thickness of a thread and determining how web-like structures may be used for certain tasks. We first present a model for the extrusion and drawing of TPA threads which also includes the temperature-dependent material properties. The model verification results show that the increasing relative surface area of the TPA thread, as it is drawn thinner, increases the heat loss of the thread, and that by controlling how quickly the thread is drawn, a range of diameters from 0.2-0.75 mm can be achieved. We then present a method based on a generalized nonlinear finite element truss model. The model was validated and could predict the deformation of various web-like structures when payloads are added. At the end, we demonstrate automatic building of a web-like structure for payload bearing.
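
    The first-order relation between draw speed and thread diameter follows from mass conservation alone, before any thermal effect enters. A hedged sketch with invented extrusion values (the paper's full model additionally couples in the temperature-dependent properties):

```python
# Mass-conservation draw-down of an extruded thread: at steady state
# A_out * v_draw = A_in * v_extrude, so the diameter scales as sqrt(v ratio).
import math

def drawn_diameter(d_extruded_mm, v_extrude, v_draw):
    return d_extruded_mm * math.sqrt(v_extrude / v_draw)

# Faster drawing relative to extrusion yields progressively thinner threads.
for speed_ratio in (2, 10, 50):
    print(speed_ratio, round(drawn_diameter(1.5, 1.0, speed_ratio), 2), "mm")
```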

  15. Towards Verification of Operational Procedures Using Auto-Generated Diagnostic Trees

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Lutz, Robyn; Patterson-Hine, Ann

    2009-01-01

    The design, development, and operation of complex space, lunar and planetary exploration systems require the development of general procedures that describe a detailed set of instructions capturing how mission tasks are performed. For both crewed and uncrewed NASA systems, mission safety and the accomplishment of the scientific mission objectives are highly dependent on the correctness of procedures. In this paper, we describe how to use the auto-generated diagnostic trees from existing diagnostic models to improve the verification of standard operating procedures. Specifically, we introduce a systematic method, namely the Diagnostic Tree for Verification (DTV), developed with the goal of leveraging the information contained within auto-generated diagnostic trees in order to check the correctness of procedures, to streamline the procedures in terms of reducing the number of steps or use of resources in them, and to propose alternative procedural steps adaptive to changing operational conditions. The application of the DTV method to a spacecraft electrical power system shows the feasibility of the approach and its range of capabilities.

  16. Z-2 Architecture Description and Requirements Verification Results

    NASA Technical Reports Server (NTRS)

    Graziosi, Dave; Jones, Bobby; Ferl, Jinny; Scarborough, Steve; Hewes, Linda; Ross, Amy; Rhodes, Richard

    2016-01-01

    The Z-2 Prototype Planetary Extravehicular Space Suit Assembly is a continuation of NASA's Z series of spacesuits. The Z-2 is another step in NASA's technology development roadmap leading to human exploration of the Martian surface. The suit was designed for maximum mobility at 8.3 psid, reduced mass, and to have high fidelity life support interfaces. As Z-2 will be man-tested at full vacuum in NASA JSC's Chamber B, it was manufactured as Class II, making it the most flight-like planetary walking suit produced to date. The Z-2 suit architecture is an evolution of previous EVA suits, namely the ISS EMU, Mark III, Rear Entry I-Suit and Z-1 spacesuits. The suit is a hybrid hard and soft multi-bearing, rear entry spacesuit. The hard upper torso (HUT) is an all-composite structure and includes a 2-bearing rolling convolute shoulder with Vernier sizing mechanism, removable suit port interface plate (SIP), elliptical hemispherical helmet and self-don/doff shoulder harness. The hatch is a hybrid aluminum and composite construction with Apollo style gas connectors, custom water pass-thru, removable hatch cage and interfaces to primary and auxiliary life support feed water bags. The suit includes Z-1 style lower arms with cam brackets for Vernier sizing and government furnished equipment (GFE) Phase VI gloves. The lower torso includes a telescopic waist sizing system, waist bearing, rolling convolute waist joint, hard brief, 2 bearing soft hip thigh, Z-1 style legs with ISS EMU style cam brackets for sizing, and conformal walking boots with ankle bearings. The Z-2 Requirements Verification Plan includes the verification of more than 200 individual requirements. The verification methods include test, analysis, inspection, demonstration or a combination of methods. Examples of unmanned requirements include suit leakage, proof pressure testing, operational life, mass, isometric man-loads, sizing adjustment ranges, internal and external interfaces such as in-suit drink bag, partial pressure relief valve, purge valve, donning stand and ISS Body Restraint Tether (BRT). Examples of manned requirements include verification of anthropometric range, suit self-don/doff, secondary suit exit method, donning stand self-ingress/egress and manned mobility covering eight functional tasks. The eight functional tasks include kneeling with object pick-up, standing toe touch, cross-body reach, walking, reach to the SIP and helmet visor. This paper will provide an overview of the Z-2 design. Z-2 requirements verification testing was performed with NASA at the ILC Houston test facility. This paper will also discuss pre-delivery manned and unmanned test results as well as analysis performed in support of requirements verification.

  17. Hardware proofs using EHDM and the RSRE verification methodology

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Sjogren, Jon A.

    1988-01-01

    Examined is a methodology for hardware verification developed by the Royal Signals and Radar Establishment (RSRE) in the context of SRI International's Enhanced Hierarchical Design Methodology (EHDM) specification/verification system. The methodology utilizes a four-level specification hierarchy with the following levels: functional level, finite automata model, block model, and circuit level. The properties of a level are proved as theorems in the level below it. This methodology is applied to a 6-bit counter problem and is critically examined. The specifications are written in EHDM's specification language, Extended Special, and the proof exercise yields suggestions for improving both the RSRE methodology and the EHDM system.

  18. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and programs that use features (such as pointers, structures, and object-orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  19. Deductive Evaluation: Formal Code Analysis With Low User Burden

    NASA Technical Reports Server (NTRS)

    Di Vito, Ben L.

    2016-01-01

    We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.
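
    A loop invariant of the kind such a framework synthesizes can be made concrete with a toy summation loop. The sketch below merely checks (rather than proves) the invariant and postcondition at runtime; the function and the invariant are illustrative, not drawn from the paper.

```python
# Floyd-Hoare view of a summation loop: the invariant s == i*(i-1)//2 holds
# on entry to every iteration and, combined with the exit condition i >= n,
# implies the postcondition.
def sum_below(n):
    s, i = 0, 0
    while i < n:
        assert s == i * (i - 1) // 2           # loop invariant (checked)
        s += i
        i += 1
    assert s == max(n, 0) * (max(n, 0) - 1) // 2   # postcondition
    return s

print(sum_below(10))   # 45
```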

  20. Scanning electron microscope fractography in failure analysis of steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wouters, R.; Froyen, L.

    1996-04-01

    For many failure cases, macroscopic examination of the fracture surface permits discrimination of fatigue fractures from overload fractures. For clarifying fatigue fractures, the practical significance of microfractography is limited to an investigation of the crack initiation areas. Scanning electron microscopy is successfully used in tracing local material abnormalities that act as fatigue crack initiators. The task for the scanning electron microscope, however, is much more substantial in failure analysis of overload fractures, especially for steels. By revealing specific fractographic characteristics, complemented by information about the material and the loading conditions, scanning electron microscopy provides a strong indication of the probable cause of failure. A complete dimple fracture is indicative of acceptable bulk material properties; overloading, by subdimensioning or excessive external loading, has to be verified. The presence of cleavage fracture makes the material properties questionable if external conditions causing embrittlement are absent. Intergranular brittle fracture requires verification of grain-boundary weakening conditions: a sensitized structure, whether or not combined with a local stress state or a specific environment. The role of scanning electron microscopy in failure analysis is illustrated by case histories of the aforementioned fracture types.

  1. Learning Assumptions for Compositional Verification

    NASA Technical Reports Server (NTRS)

    Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.
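
    The rule being automated can be made concrete with explicit-state components. The sketch below, a hedged stand-in for the paper's LTSA setting, checks the two premises (M1 composed with A satisfies P, and M2 refines A) over tiny request/grant processes; the assumption A is written by hand here, whereas the paper learns it automatically with the L* algorithm.

```python
# Assume-guarantee rule: if M1 || A satisfies P and M2 refines A,
# then M1 || M2 satisfies P. LTS = (initial, alphabet, {(state, action): next}).
from collections import deque

def compose(m1, m2):
    """Synchronous product of two LTSs, synchronizing on shared actions."""
    (i1, a1, t1), (i2, a2, t2) = m1, m2
    alph, trans, seen = a1 | a2, {}, {(i1, i2)}
    queue = deque(seen)
    while queue:
        s1, s2 = queue.popleft()
        for a in alph:
            n1 = t1.get((s1, a), s1 if a not in a1 else None)
            n2 = t2.get((s2, a), s2 if a not in a2 else None)
            if n1 is not None and n2 is not None:
                trans[((s1, s2), a)] = (n1, n2)
                if (n1, n2) not in seen:
                    seen.add((n1, n2)); queue.append((n1, n2))
    return (i1, i2), alph, trans

def refines(impl, spec):
    """True iff impl's traces, projected onto spec's alphabet, are allowed
    by the deterministic safety automaton spec."""
    (ii, ia, it), (si, sa, st) = impl, spec
    seen, queue = {(ii, si)}, deque([(ii, si)])
    while queue:
        s, q = queue.popleft()
        for (state, a), nxt in it.items():
            if state != s:
                continue
            nq = st.get((q, a)) if a in sa else q
            if a in sa and nq is None:
                return False                  # spec forbids this action here
            if (nxt, nq) not in seen:
                seen.add((nxt, nq)); queue.append((nxt, nq))
    return True

# M1 issues requests, M2 grants them; assumption A: "grant only after request".
M1 = (0, {"req", "grant"}, {(0, "req"): 1, (1, "grant"): 0})
M2 = (0, {"req", "grant"}, {(0, "req"): 1, (1, "req"): 1, (1, "grant"): 0})
A  = (0, {"req", "grant"}, {(0, "req"): 1, (1, "req"): 1, (1, "grant"): 0})
P  = A   # property coincides with A here, which keeps the example small

print(refines(compose(M1, A), P) and refines(M2, A))   # True: both premises hold
```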

  2. Evaluation and field verification of strength and structural improvement of chemically stabilized subgrade soil.

    DOT National Transportation Integrated Search

    2008-07-01

    Often subgrade soils exhibit properties, particularly strength and/or volume-change properties, that limit their performance as a support element for pavements. Typical problems include shrink-swell, settlement, collapse, erosion or simply insuffici...

  3. Navigating the Requirements Jungle

    NASA Astrophysics Data System (ADS)

    Langer, Boris; Tautschnig, Michael

    Research on validation and verification of requirements specifications has thus far focused on functional properties. Yet, in embedded systems, functional requirements constitute only a small fraction of the properties that must hold to guarantee proper and safe operation of the system under design.

  4. Fourth NASA Langley Formal Methods Workshop

    NASA Technical Reports Server (NTRS)

    Holloway, C. Michael (Compiler); Hayhurst, Kelly J. (Compiler)

    1997-01-01

    This publication consists of papers presented at NASA Langley Research Center's fourth workshop on the application of formal methods to the design and verification of life-critical systems. Topics considered include: proving properties of accidents; modeling and validating SAFER in VDM-SL; requirements analysis of real-time control systems using PVS; a tabular language for system design; and automated deductive verification of parallel systems. Also included is a fundamental hardware design in PVS.

  5. 12 CFR 614.4266 - Personal and intangible property evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... 614.4266 Section 614.4266 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM LOAN POLICIES AND OPERATIONS Collateral Evaluation Requirements § 614.4266 Personal and intangible property... shall include provisions for periodic collateral inspections and verification by the institution's...

  6. RF verification tasks underway at the Harris Corporation for multiple aperture reflector system

    NASA Technical Reports Server (NTRS)

    Gutwein, T. A.

    1982-01-01

    Mesh effects on gain and patterns, and adjacent-aperture coupling effects for "pie" and circular apertures, are discussed. Wire effects for the Harris model, with Langley scale-model results included for assessing D/lambda effects, and wire effects with adjacent-aperture coupling were determined. Reflector surface distortion effects (pillows and manufacturing roughness) were studied.

  7. Using Statechart Assertion for the Formal Validation and Verification of a Real-Time Software System: A Case Study

    DTIC Science & Technology

    2011-03-01

    could be an entry point into a repeated task (or thread). The following example uses binary semaphores. The VxWorks operating system utilizes binary semaphores via system calls: SemTake and SemGive. These semaphores are used primarily for mutual exclusion to protect resources from being accessed

  8. Parsing the Passive: Comparing Children with Specific Language Impairment to Sequential Bilingual Children

    ERIC Educational Resources Information Center

    Marinis, Theodoros; Saddy, Douglas

    2013-01-01

    Twenty-five monolingual (L1) children with specific language impairment (SLI), 32 sequential bilingual (L2) children, and 29 L1 controls completed the Test of Active & Passive Sentences-Revised (van der Lely 1996) and the Self-Paced Listening Task with Picture Verification for actives and passives (Marinis 2007). These revealed important…

  9. A Physics Laboratory Course Designed Using Problem-Based Learning for Prospective Physics Teachers

    ERIC Educational Resources Information Center

    Ünal, Cezmi; Özdemir, Ömer Faruk

    2013-01-01

    In general, laboratories are exercises with a primary focus on the verification of established laws and principles, or on the discovery of objectively knowable facts. In laboratories, students gather data without comprehending the meaning of their actions. The cognitive demand of laboratory tasks is reduced to a minimal level. To prevent these…

  10. Representation, Classification and Information Fusion for Robust and Efficient Multimodal Human States Recognition

    ERIC Educational Resources Information Center

    Li, Ming

    2013-01-01

    The goal of this work is to enhance the robustness and efficiency of the multimodal human states recognition task. Human states recognition can be considered as a joint term for identifying/verifying various kinds of human related states, such as biometric identity, language spoken, age, gender, emotion, intoxication level, physical activity, vocal…

  11. Multitask visual learning using genetic programming.

    PubMed

    Jaśkowski, Wojciech; Krawiec, Krzysztof; Wieloch, Bartosz

    2008-01-01

    We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.
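
    The individual structure described above can be sketched as follows (a hand-written stand-in in Python; real GP evolves tree expressions over visual primitives, and the function bodies here are invented): two task trees call one shared subfunction, so improving the shared piece can improve both tasks at once.

    ```python
    # Schematic GP individual: two task trees plus one shared subfunction
    # tree. All expressions are hand-written stand-ins for evolved trees.
    import math

    def shared_subfn(x):
        """A subfunction both task trees call, i.e. shared knowledge."""
        return x * x - 1.0

    def task1_tree(x):
        return math.tanh(shared_subfn(x) + 0.5)

    def task2_tree(x):
        return abs(shared_subfn(x)) ** 0.5

    def fitness(tree, samples):
        """Ability to reproduce the target outputs, as negative squared error."""
        return -sum((tree(x) - y) ** 2 for x, y in samples)

    task1_data = [(0.0, math.tanh(-0.5)), (1.0, math.tanh(0.5))]
    task2_data = [(0.0, 1.0), (2.0, math.sqrt(3.0))]
    print(fitness(task1_tree, task1_data), fitness(task2_tree, task2_data))
    ```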

  12. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.

  13. Array automated assembly task, phase 2. Low cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Rhee, S. S.; Jones, G. T.; Allison, K. T.

    1978-01-01

    Several modifications instituted in the wafer surface preparation process served to significantly reduce the process cost to 1.55 cents per peak watt in 1975 cents. Performance verification tests of a laser scanning system showed a limited capability to detect hidden cracks or defects, but with potential equipment modifications this cost effective system could be rendered suitable for applications. Installation of an electroless nickel plating system was completed along with an optimization of the wafer plating process. The solder coating and flux removal process verification test was completed. An optimum temperature range of 500-550 C was found to produce uniform solder coating with the restriction that a modified dipping procedure is utilized. Finally, the construction of the spray-on dopant equipment was completed.

  14. An assessment of space shuttle flight software development processes

    NASA Technical Reports Server (NTRS)

    1993-01-01

    In early 1991, the National Aeronautics and Space Administration's (NASA's) Office of Space Flight commissioned the Aeronautics and Space Engineering Board (ASEB) of the National Research Council (NRC) to investigate the adequacy of the current process by which NASA develops and verifies changes and updates to the Space Shuttle flight software. The Committee for Review of Oversight Mechanisms for Space Shuttle Flight Software Processes was convened in Jan. 1992 to accomplish the following tasks: (1) review the entire flight software development process from the initial requirements definition phase to final implementation, including object code build and final machine loading; (2) review and critique NASA's independent verification and validation process and mechanisms, including NASA's established software development and testing standards; (3) determine the acceptability and adequacy of the complete flight software development process, including the embedded validation and verification processes through comparison with (1) generally accepted industry practices, and (2) generally accepted Department of Defense and/or other government practices (comparing NASA's program with organizations and projects having similar volumes of software development, software maturity, complexity, criticality, lines of code, and national standards); (4) consider whether independent verification and validation should continue. An overview of the study, independent verification and validation of critical software, and the Space Shuttle flight software development process are addressed. Findings and recommendations are presented.

  15. Towards Trustable Digital Evidence with PKIDEV: PKI Based Digital Evidence Verification Model

    NASA Astrophysics Data System (ADS)

    Uzunay, Yusuf; Incebacak, Davut; Bicakci, Kemal

    How to Capture and Preserve Digital Evidence Securely? For the investigation and prosecution of criminal activities that involve computers, digital evidence collected in the crime scene has a vital importance. On one side, it is a very challenging task for forensics professionals to collect them without any loss or damage. On the other, there is the second problem of providing the integrity and authenticity in order to achieve legal acceptance in a court of law. By conceiving digital evidence simply as one instance of digital data, it is evident that modern cryptography offers elegant solutions for this second problem. However, to our knowledge, there is not any previous work proposing a systematic model having a holistic view to address all the related security problems in this particular case of digital evidence verification. In this paper, we present PKIDEV (Public Key Infrastructure based Digital Evidence Verification model) as an integrated solution to provide security for the process of capturing and preserving digital evidence. PKIDEV employs, inter alia, cryptographic techniques like digital signatures and secure time-stamping as well as latest technologies such as GPS and EDGE. In our study, we also identify the problems public-key cryptography brings when it is applied to the verification of digital evidence.
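
    The cryptographic core of such a scheme, binding an evidence image to a key pair and a capture time, can be sketched in a few lines (assuming the third-party pyca/cryptography package; PKIDEV itself additionally involves certificates, GPS metadata, and a trusted time-stamping authority, none of which are modeled here).

    ```python
    # Minimal sketch: hash-and-sign a piece of digital evidence with an
    # Ed25519 key and verify it later (a stand-in for the full PKI).
    import hashlib
    import time

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    evidence = b"...raw bytes of the acquired disk image..."  # placeholder
    record = hashlib.sha256(evidence).hexdigest().encode() + b"|" + \
             str(int(time.time())).encode()   # digest + capture time

    signature = private_key.sign(record)      # done at the crime scene

    # Later, in court: any modification of `record` invalidates the signature.
    try:
        public_key.verify(signature, record)
        print("evidence record intact")
    except InvalidSignature:
        print("evidence record tampered with")
    ```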

  16. Design and verification of distributed logic controllers with application of Petri nets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał

    2015-12-31

    The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified and, if the validation is successful, the system can finally be implemented.

  17. Design and verification of distributed logic controllers with application of Petri nets

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika

    2015-12-01

    The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified and, if the validation is successful, the system can finally be implemented.
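
    To make the model-checking step concrete, the sketch below enumerates the reachability graph of a toy Petri net and checks a simple safety property on every reachable marking (the net and the property are invented for illustration; the authors verify temporal-logic properties of the actual controller model).

    ```python
    # Reachability analysis of a toy Petri net: breadth-first exploration of
    # all markings, then a safety check over the reachable set.
    from collections import deque

    # Each transition: (tokens consumed per place, tokens produced per place).
    TRANSITIONS = {
        "start":  ({"idle": 1}, {"busy": 1}),
        "finish": ({"busy": 1}, {"done": 1}),
        "reset":  ({"done": 1}, {"idle": 1}),
    }

    def enabled(marking, pre):
        return all(marking.get(p, 0) >= n for p, n in pre.items())

    def fire(marking, pre, post):
        m = dict(marking)
        for p, n in pre.items():
            m[p] -= n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
        return m

    def reachable(initial):
        """All markings reachable from `initial`, as sorted item tuples."""
        seen, queue = set(), deque([initial])
        while queue:
            m = queue.popleft()
            key = tuple(sorted(m.items()))
            if key in seen:
                continue
            seen.add(key)
            for pre, post in TRANSITIONS.values():
                if enabled(m, pre):
                    queue.append(fire(m, pre, post))
        return seen

    markings = reachable({"idle": 1, "busy": 0, "done": 0})
    # Safety property: "busy" and "done" are never marked simultaneously.
    assert all(dict(m).get("busy", 0) == 0 or dict(m).get("done", 0) == 0
               for m in markings)
    print(len(markings), "reachable markings, safety property holds")
    ```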

  18. Group discriminatory power of handwritten characters

    NASA Astrophysics Data System (ADS)

    Tomai, Catalin I.; Kshirsagar, Devika M.; Srihari, Sargur N.

    2003-12-01

    Using handwritten characters we address two questions: (i) what is the group identification performance of different alphabets (upper and lower case) and (ii) what are the best characters for the verification task (same writer/different writer discrimination), knowing demographic information about the writer such as ethnicity, age or sex. The Bhattacharyya distance is used to rank different characters by their group discriminatory power and the k-nn classifier to measure the individual performance of characters for group identification. Given the tasks of identifying the correct gender/age/ethnicity or handedness, the accumulated performance of characters varies between 65% and 85%.
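
    For a single character feature modeled as a univariate Gaussian per group (an assumption made for this sketch; the paper's feature model is richer), the Bhattacharyya distance has a closed form and yields the ranking directly:

    ```python
    # Rank character features by group discriminatory power using the
    # Bhattacharyya distance between two univariate Gaussians:
    # D_B = (mu1-mu2)^2 / (4(v1+v2)) + 0.5*ln((v1+v2) / (2*sqrt(v1*v2)))
    # Larger D_B means the feature separates the two groups better.
    import math

    def bhattacharyya_gauss(mu1, var1, mu2, var2):
        return ((mu1 - mu2) ** 2) / (4.0 * (var1 + var2)) \
             + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))

    # Toy per-character statistics (mu1, var1, mu2, var2) for two groups.
    features = {"a": (0.0, 1.0, 0.4, 1.2), "t": (0.0, 1.0, 1.5, 0.9)}
    ranking = sorted(features, key=lambda c: bhattacharyya_gauss(*features[c]),
                     reverse=True)
    print(ranking)   # most group-discriminative characters first
    ```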

  19. A preliminary experimental examination of worldview verification, perceived racism, and stress reactivity in African Americans.

    PubMed

    Lucas, Todd; Lumley, Mark A; Flack, John M; Wegner, Rhiana; Pierce, Jennifer; Goetz, Stefan

    2016-04-01

    According to worldview verification theory, inconsistencies between lived experiences and worldviews are psychologically threatening. These inconsistencies may be key determinants of stress processes that influence cardiovascular health disparities. This preliminary examination considers how experiencing injustice can affect perceived racism and biological stress reactivity among African Americans. Guided by worldview verification theory, it was hypothesized that responses to receiving an unfair outcome would be moderated by fairness of the accompanying decision process, and that this effect would further depend on the consistency of the decision process with preexisting justice beliefs. A sample of 118 healthy African American adults completed baseline measures of justice beliefs, followed by a laboratory-based social-evaluative stressor task. Two randomized fairness manipulations were implemented during the task: participants were given either high or low levels of distributive (outcome) and procedural (decision process) justice. Glucocorticoid (cortisol) and inflammatory (C-reactive protein) biological responses were measured in oral fluids, and attributions of racism were also measured. The hypothesized 3-way interaction was generally obtained. Among African Americans with a strong belief in justice, perceived racism, cortisol, and C-reactive protein responses to low distributive justice were higher when procedural justice was low. Among African Americans with a weak belief in justice however, these responses were higher when a low level of distributive justice was coupled with high procedural justice. Biological and psychological processes that contribute to cardiovascular health disparities are affected by consistency between individual-level and contextual justice factors. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. The calculating brain: an fMRI study.

    PubMed

    Rickard, T C; Romero, S G; Basso, G; Wharton, C; Flitman, S; Grafman, J

    2000-01-01

    To explore brain areas involved in basic numerical computation, functional magnetic resonance imaging (fMRI) scanning was performed on college students during performance of three tasks: simple arithmetic, numerical magnitude judgment, and a perceptual-motor control task. For the arithmetic relative to the other tasks, results for all eight subjects revealed bilateral activation in Brodmann's area 44, in dorsolateral prefrontal cortex (areas 9 and 10), in inferior and superior parietal areas, and in lingual and fusiform gyri. Activation was stronger on the left for all subjects, but only at Brodmann's area 44 and the parietal cortices. No activation was observed in the arithmetic task in several other areas previously implicated for arithmetic, including the angular and supramarginal gyri and the basal ganglia. In fact, angular and supramarginal gyri were significantly deactivated by the verification task relative to both the magnitude judgment and control tasks for every subject. Areas activated by the magnitude task relative to the control were more variable, but in five subjects included bilateral inferior parietal cortex. These results confirm some existing hypotheses regarding the neural basis of numerical processes, invite revision of others, and suggest productive lines for future investigation.

  1. Generating Customized Verifiers for Automatically Generated Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2008-01-01

    Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.
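
    The flavor of a declarative annotation schema, a code pattern paired with a meta-program fragment that constructs the annotation, can be caricatured in a few lines (matching source text with a regex purely for illustration; the real compiler matches patterns over the structure of the generated code):

    ```python
    # Caricature of an annotation schema: a pattern for a code idiom plus a
    # fragment that builds the safety annotation. Names, the idiom, and the
    # regex approach are all illustrative, not the actual system.
    import re

    SCHEMAS = [
        # Array-zeroing idiom -> loop-invariant annotation template.
        (re.compile(r"for (\w+) in range\((\w+)\): (\w+)\[\1\] = 0"),
         lambda i, n, a: f"invariant: forall k. 0 <= k < {i} ==> {a}[k] == 0"),
    ]

    def annotate(line):
        """Return the woven-in annotation for a recognized idiom, else None."""
        for pattern, build in SCHEMAS:
            match = pattern.match(line)
            if match:
                return build(*match.groups())
        return None

    print(annotate("for i in range(n): buf[i] = 0"))
    ```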

  2. [Methods, challenges and opportunities for big data analyses of microbiome].

    PubMed

    Sheng, Hua-Fang; Zhou, Hong-Wei

    2015-07-01

    Microbiome is a novel research field related to a variety of chronic inflammatory diseases. Technically, there are two major approaches to analysis of microbiome: metataxonome by sequencing the 16S rRNA variable tags, and metagenome by shotgun sequencing of the total microbial (mainly bacterial) genome mixture. The 16S rRNA sequencing analysis pipeline includes sequence quality control, diversity analyses, taxonomy and statistics; metagenome analysis further includes gene annotation and functional analyses. With the development of the sequencing techniques, the cost of sequencing will decrease, and big data analyses will become the central task. Data standardization, accumulation, modeling and disease prediction are crucial for future exploitation of these data. Meanwhile, the information property in these data, and the functional verification with culture-dependent and culture-independent experiments, remain the focus in future research. Studies of the human microbiome will bring a better understanding of the relations between the human body and the microbiome, especially in the context of disease diagnosis and therapy, which promise rich research opportunities.

  3. Shadow-Based Vehicle Detection in Urban Traffic

    PubMed Central

    Ibarra-Arenado, Manuel; Tjahjadi, Tardi; Pérez-Oria, Juan; Robla-Gómez, Sandra; Jiménez-Avello, Agustín

    2017-01-01

    Vehicle detection is a fundamental task in Forward Collision Avoiding Systems (FACS). Generally, vision-based vehicle detection methods consist of two stages: hypotheses generation and hypotheses verification. In this paper, we focus on the former, presenting a feature-based method for on-road vehicle detection in urban traffic. Hypotheses for vehicle candidates are generated according to the shadow under the vehicles by comparing pixel properties across the vertical intensity gradients caused by shadows on the road, followed by intensity thresholding and morphological discrimination. Unlike methods that identify the shadow under a vehicle as a road region with intensity smaller than a coarse lower bound of the intensity for road, the thresholding strategy we propose determines a coarse upper bound of the intensity for shadow, which reduces false-positive rates. The experimental results are promising in terms of detection performance and robustness in daytime under different weather conditions and cluttered scenarios to enable validation for the first stage of a complete FACS. PMID:28448465
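
    The thresholding idea reads naturally in numpy (a sketch under stated assumptions: a grayscale frame, a sample region known to be bare road, and a three-standard-deviation offset, which is my choice rather than the paper's calibration):

    ```python
    # Sketch of hypothesis generation by under-vehicle shadow thresholding:
    # derive an upper bound for shadow intensity from road statistics, then
    # keep wide, flat dark blobs. The synthetic frame is purely illustrative.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    frame = np.full((120, 160), 140.0) + rng.normal(0, 5, (120, 160))  # road
    frame[80:86, 60:100] = 40.0                                        # shadow

    road_sample = frame[100:120, :]            # region assumed to be bare road
    shadow_upper = road_sample.mean() - 3.0 * road_sample.std()

    mask = frame < shadow_upper                # darker than any road pixel
    mask = ndimage.binary_opening(mask, np.ones((3, 3)))   # morphological cleanup

    labels, count = ndimage.label(mask)
    for region in ndimage.find_objects(labels):
        h, w = (s.stop - s.start for s in region)
        if w > 2 * h and w > 20:               # wide, flat blob: vehicle shadow
            print("vehicle hypothesis at rows/cols:", region)
    ```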

  4. Spacecraft Environmental Testing SMAP (Soil, Moisture, Active, Passive)

    NASA Technical Reports Server (NTRS)

    Fields, Keith

    2014-01-01

    Testing a complete full-up spacecraft to verify it will survive the environment to which it will be exposed during its mission is a formidable task in itself. However, the "test like you fly" philosophy sometimes gets compromised because of cost, design and/or time. This paper describes the thermal-vacuum and mass properties testing of the Soil Moisture Active Passive (SMAP) earth orbiting satellite. SMAP will provide global observations of soil moisture and freeze/thaw state (the hydrosphere state). SMAP hydrosphere state measurements will be used to enhance understanding of processes that link the water, energy, and carbon cycles, and to extend the capabilities of weather and climate prediction models. It will explain the problems encountered, and the solutions developed, which minimized the risk typically associated with such an arduous process. Also discussed is the future of testing on expensive long lead-time spacecraft. Will we ever reach the "build and shoot" scenario with minimal or no verification testing?

  5. Challenges of designing and testing a highly stable sensor platform: Cesic solves MTG star sensor bracket thermoelastic requirements

    NASA Astrophysics Data System (ADS)

    Kroedel, Matthias; Zauner, Christoph

    2017-09-01

    The Meteosat Third Generation's extreme pointing requirements call for a highly stable bracket for mounting the Star Trackers. HB-Cesic®, a chopped fibre reinforced silicon carbide, was selected as a base material for the sensor bracket. The high thermal conductivity and low thermal expansion of HB-Cesic® were the key properties to fulfil the demanding thermo-elastic pointing requirements of below 1 μrad/K for the Star Tracker mounting interfaces. Dominated by thermoelastic stability requirements, the design and analysis of the bracket required a multidisciplinary approach with the focus on thermal and thermo-elastic analyses. Dedicated modal and thermal post-processing strategies have been applied in the scope of the light-weighting process. The experimental verification of this thermo-elastic stable system has been a challenging task of its own. A thermo-elastic distortion measurement rig was developed with a stability of < 0.1 μrad/K in all three rotational degrees of freedom.

  6. Western Shallow Oil Zone, Elk Hills Field, Kern County, California: General reservoir study, Appendix 4, Fourth Wilhelm sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carey, K.B.

    1987-09-01

    The general Reservoir Study of the Western Shallow Oil Zone was prepared by Evans, Carey and Crozier as Task Assignment 009 with the United States Department of Energy. This study, Appendix IV, addresses the Fourth Wilhelm Sand and its sub units and pools. Basic pressure, production and assorted technical data were provided by the US Department of Energy staff at Elk Hills. These data were accepted as furnished with no attempt being made by Evans, Carey and Crozier for independent verification. This study has identified the petrophysical properties and the past productive performance of the reservoir. Primary reserves have been determined and general means of enhancing future recovery have been suggested. It is hoped that this volume can now additionally serve as a take-off point for exploitation engineers to develop specific programs toward these ends. 12 figs., 9 tabs.

  7. Western Shallow Oil Zone, Elk Hills Field, Kern County, California: General reservoir study, Appendix 3, Second Wilhelm Sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carey, K.B.

    1987-09-01

    The general Reservoir Study of the Western Shallow Oil Zone was prepared by Evans, Carey and Crozier as Task Assignment 009 under Contract No. DE-ACO1-85FE60600 with the United States Department of Energy. This study, Appendix III, addresses the Second Wilhelm Sand and its sub units and pools. Basic pressure, production and assorted technical data were provided by the U.S. Department of Energy staff at Elk Hills. These data were accepted as furnished with no attempt being made by Evans, Carey and Crozier for independent verification. This study has identified the petrophysical properties and the past productive performance of the reservoir. Primary reserves have been determined and general means of enhancing future recovery have been suggested. It is hoped that this volume can now additionally serve as a take-off point for exploitation engineers to develop specific programs towards these ends. 15 figs., 9 tabs.

  8. Real-Time Kennedy Space Center and Cape Canaveral Air Force Station High-Resolution Model Implementation and Verification

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn A.; Watson, Leela R.

    2015-01-01

    Customer: NASA's Launch Services Program (LSP), Ground Systems Development and Operations (GSDO), and Space Launch System (SLS) programs. NASA's LSP, GSDO, SLS and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). For example, forecasts are used to determine if they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 kilometer Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the AMU high-resolution WRF Environmental Modeling System (EMS) model (Watson 2013) in real-time. The tasking group agreed to this proposal; therefore the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The model was set up with a triple-nested grid configuration over KSC/CCAFS based on previous AMU work (Watson 2013). The outer domain (D01) has 12-kilometer grid spacing, the middle domain (D02) has 4-kilometer grid spacing, and the inner domain (D03) has 1.33-kilometer grid spacing. The model runs a 12-hour forecast every hour; D01 and D02 domain outputs are available once an hour, and D03 output every 15 minutes during the forecast period. The AMU assessed the WRF-EMS 1.33-kilometer domain model performance for the 2014 warm season (May-September). Verification statistics were computed using the Model Evaluation Tools, which compared the model forecasts to observations. The mean error values were close to 0 and the root mean square error values were less than 1.8 for mean sea-level pressure (millibars), temperature (Kelvin), dewpoint temperature (Kelvin), and wind speed (meters per second), all very small differences between the forecast and observations considering the normal magnitudes of the parameters. The precipitation forecast verification results showed consistent under-forecasting of the precipitation object size. This could be an artifact of calculating the statistics for each hour rather than for the entire 12-hour period. The AMU will continue to generate verification statistics for the 1.33-kilometer WRF-EMS domain as data become available in future cool and warm seasons. More data will produce more robust statistics and reveal a more accurate assessment of model performance. Once the formal task was complete, the AMU conducted additional work to better understand the wind direction results. The results were stratified diurnally and by wind speed to determine what effects the stratifications would have on the model wind direction verification statistics. The results are summarized in the addendum at the end of this report.
In addition to verifying the model's performance, the AMU also made the output available in the Advanced Weather Interactive Processing System II (AWIPS II). This allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations AWIPS II client computers and conduct real-time subjective analyses. In the future, the AMU will implement an updated version of the WRF-EMS model that incorporates local data assimilation. This model will also run in real-time and be made available in AWIPS II.
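
    The headline statistics in the comparison above are straightforward to reproduce: given paired forecasts and observations of a surface parameter, the mean error (bias) and root mean square error follow directly. A minimal sketch with synthetic numbers (the Model Evaluation Tools compute these and many more):

    ```python
    # Minimal forecast-verification sketch: mean error and RMSE of model
    # output against point observations. All data here are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    obs = 25.0 + rng.normal(0, 2.0, 500)            # observed temperature, C
    forecast = obs + 0.3 + rng.normal(0, 1.5, 500)  # model with +0.3 C bias

    error = forecast - obs
    mean_error = error.mean()                       # bias; ideally near 0
    rmse = np.sqrt((error ** 2).mean())             # total error magnitude
    print(f"ME = {mean_error:+.2f} C, RMSE = {rmse:.2f} C")
    ```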

  9. Abstract Model of the SATS Concept of Operations: Initial Results and Recommendations

    NASA Technical Reports Server (NTRS)

    Dowek, Gilles; Munoz, Cesar; Carreno, Victor A.

    2004-01-01

    An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The Concept of Operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).

  10. Instrumentation: Nondestructive Examination for Verification of Canister and Cladding Integrity. FY2014 Status Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Ryan M.; Suter, Jonathan D.; Jones, Anthony M.

    2014-09-12

    This report documents FY14 efforts for two instrumentation subtasks under storage and transportation. These instrumentation tasks relate to developing effective nondestructive evaluation (NDE) methods and techniques to (1) verify the integrity of metal canisters for the storage of used nuclear fuel (UNF) and to (2) verify the integrity of dry storage cask internals.

  11. Possibly All of that and Then Some: Scalar Implicatures Are Understood in Two Steps

    ERIC Educational Resources Information Center

    Tomlinson, John M., Jr.; Bailey, Todd M.; Bott, Lewis

    2013-01-01

    Scalar implicatures often incur a processing cost in sentence comprehension tasks. We used a novel mouse-tracking technique in a sentence verification paradigm to test different accounts of this effect. We compared a two-step account, in which people access a basic meaning and then enrich the basic meaning to form the scalar implicature, against a…

  12. Finding Needles in Haystacks: Identity Mismatch Frequency and Facial Identity Verification

    ERIC Educational Resources Information Center

    Bindemann, Markus; Avetisyan, Meri; Blackwell, Kristy-Ann

    2010-01-01

    Accurate person identification is central to all security, police, and judicial systems. A commonplace method to achieve this is to compare a photo-ID and the face of its purported owner. The critical aspect of this task is to spot cases in which these two instances of a face do not match. Studies of person identification show that these instances…

  13. When Law Students Read Multiple Documents about Global Warming: Examining the Role of Topic-Specific Beliefs about the Nature of Knowledge and Knowing

    ERIC Educational Resources Information Center

    Braten, Ivar; Stromso, Helge I.

    2010-01-01

    In this study, law students (n = 49) read multiple authentic documents presenting conflicting information on the topic of climate change and responded to verification tasks assessing their superficial as well as their deeper-level within- and across-documents comprehension. Hierarchical multiple regression analyses showed that even after variance…

  14. Development of Product Relatedness and Distance Effects in Typical Achievers and in Children with Mathematics Learning Disabilities

    ERIC Educational Resources Information Center

    Rotem, Avital; Henik, Avishai

    2015-01-01

    The current study examined the development of two effects that have been found in single-digit multiplication errors: relatedness and distance. Typically achieving (TA) second, fourth, and sixth graders and adults, and sixth and eighth graders with a mathematics learning disability (MLD) performed a verification task. Relatedness was defined by a…

  15. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  16. Conditional High-Order Boltzmann Machines for Supervised Relation Learning.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu

    2017-09-01

    Relation learning is a fundamental problem in many vision tasks. Recently, the high-order Boltzmann machine and its variants have shown great potential in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which are not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal of performing supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann Machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relation, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification, by using a set of latent variables to block the pathway from pairwise input samples to output relation labels and 2) gated CHBM, which untangles factors of variation in data relation, by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. Then, we develop efficient supervised learning algorithms, by first pretraining the models using joint likelihood to provide good parameter initialization, and then finetuning them using conditional likelihood to enhance the discriminant ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve the performance.
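
    The factorization trick mentioned above replaces a full three-way parameter tensor with three matrices that meet in a shared factor space. Schematically, with illustrative shapes and random values:

    ```python
    # Sketch of a factorized three-way interaction between two inputs and a
    # relation label: the full dx*dy*dr tensor is replaced by three matrices
    # meeting in F shared factors. All shapes and values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    dx, dy, dr, F = 20, 20, 2, 8
    Wx, Wy, Wr = (0.1 * rng.standard_normal(s)
                  for s in ((dx, F), (dy, F), (dr, F)))

    def energy(x, y, r_onehot):
        # E(x, y, r) = sum_f (x . Wx[:, f]) * (y . Wy[:, f]) * (r . Wr[:, f])
        return float(np.sum((x @ Wx) * (y @ Wy) * (r_onehot @ Wr)))

    x, y = rng.standard_normal(dx), rng.standard_normal(dy)
    scores = [energy(x, y, np.eye(dr)[r]) for r in range(dr)]
    print("predicted relation label:", int(np.argmax(scores)))
    ```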

  17. Inverse probability weighting estimation of the volume under the ROC surface in the presence of verification bias.

    PubMed

    Zhang, Ying; Alonzo, Todd A

    2016-11-01

    In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of subjects under study since the verification procedure is invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can cause bias in estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three-way ROC analysis focuses on ordinal tests. We propose verification bias-correction methods to construct ROC surface and estimate the VUS for a continuous diagnostic test, based on inverse probability weighting. By applying U-statistics theory, we develop asymptotic properties for the estimator. A Jackknife estimator of variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
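
    The inverse-probability-weighted estimator has a compact empirical form: each verified subject is weighted by the inverse of its verification probability, and correctly ordered triples across the three classes are counted. A schematic version follows (verification probabilities are supplied directly here, whereas the paper estimates them from the test results and covariates):

    ```python
    # Schematic IPW estimate of the volume under the ROC surface (VUS) under
    # verification bias: weight verified subjects by 1/P(verified) and count
    # correctly ordered triples across the three disease classes.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    disease = rng.integers(1, 4, n)                 # true class 1 < 2 < 3
    test = disease + rng.normal(0, 0.8, n)          # continuous test result
    pi = np.clip(0.2 + 0.2 * test / test.max(), 0.05, 1.0)  # P(verified)
    verified = rng.random(n) < pi
    w = verified / pi                               # IPW weights (0 if unverified)

    idx1 = np.where(verified & (disease == 1))[0]
    idx2 = np.where(verified & (disease == 2))[0]
    idx3 = np.where(verified & (disease == 3))[0]

    num = den = 0.0
    for i in idx1:
        for j in idx2:
            for k in idx3:
                wt = w[i] * w[j] * w[k]
                den += wt
                num += wt * (test[i] < test[j] < test[k])
    print(f"IPW VUS estimate: {num / den:.3f}   (chance level: 1/6)")
    ```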

  18. Verification of cardiac mechanics software: benchmark problems and solutions for testing active and passive material behaviour.

    PubMed

    Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A

    2015-12-08

    Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions at approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.

  19. Software Tool Integrating Data Flow Diagrams and Petri Nets

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Tavana, Madjid

    2010-01-01

    Data Flow Diagram - Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.) DFPN assists a software analyst in drawing and specifying a data-flow diagram, then translates the diagram into a Petri net, then enables graphical tracing of execution paths through the Petri net for verification, by the end user, of the properties of the software to be developed. In comparison with prior means of verifying the properties of software to be developed, DFPN makes verification by the end user more nearly certain, thereby making it easier to identify and correct misconceptions earlier in the development process, when correction is less expensive. After the verification by the end user, DFPN generates a printable system specification in the form of descriptions of processes and data.

  20. The mental representation of living and nonliving things: differential weighting and interactivity of sensorial and non-sensorial features.

    PubMed

    Ventura, Paulo; Morais, José; Brito-Mendes, Carlos; Kolinsky, Régine

    2005-02-01

    Warrington and colleagues (Warrington & McCarthy, 1983, 1987; Warrington & Shallice, 1984) claimed that sensorial and functional-associative (FA) features are differentially important in determining the meaning of living things (LT) and nonliving things (NLT). The first aim of the present study was to evaluate this hypothesis through two different access tasks: feature generation (Experiment 1) and cued recall (Experiment 2). The results of both experiments provided consistent empirical support for Warrington and colleagues' assumption. The second aim of the present study was to test a new differential interactivity hypothesis that combines Warrington and colleagues' assumption with the notion of a higher number of intercorrelations and hence of a stronger connectivity between sensorial and non-sensorial features for LTs than for NLTs. This hypothesis was motivated by previous reports of an uncrossed interaction between domain (LTs vs NLTs) and attribute type (sensorial vs FA) in, for example, a feature verification task (Laws, Humber, Ramsey, & McCarthy, 1995): while FA attributes are verified faster than sensorial attributes for NLTs, no difference is observed for LTs. We replicated and generalised this finding using several feature verification tasks on both written words and pictures (Experiment 3), including in conditions aimed at minimising the intervention of priming biases and strategic or mnemonic processes (Experiment 4). The whole set of results suggests that both privileged relations between features and categories, and the differential importance of intercorrelations between features as a function of category, modulate access to semantic features.

  1. The influence of cardiorespiratory fitness on strategic, behavioral, and electrophysiological indices of arithmetic cognition in preadolescent children

    PubMed Central

    Moore, R. Davis; Drollette, Eric S.; Scudder, Mark R.; Bharij, Aashiv; Hillman, Charles H.

    2014-01-01

    The current study investigated the influence of cardiorespiratory fitness on arithmetic cognition in forty 9–10 year old children. Measures included a standardized mathematics achievement test to assess conceptual and computational knowledge, self-reported strategy selection, and an experimental arithmetic verification task (including small and large addition problems), which afforded the measurement of event-related brain potentials (ERPs). No differences in math achievement were observed as a function of fitness level, but all children performed better on math concepts relative to math computation. Higher fit children reported using retrieval more often to solve large arithmetic problems, relative to lower fit children. During the arithmetic verification task, higher fit children exhibited superior performance for large problems, as evidenced by greater d' scores, while all children exhibited decreased accuracy and longer reaction time for large relative to small problems, and incorrect relative to correct solutions. On the electrophysiological level, modulations of early (P1, N170) and late ERP components (P3, N400) were observed as a function of problem size and solution correctness. Higher fit children exhibited selective modulations for N170, P3, and N400 amplitude relative to lower fit children, suggesting that fitness influences symbolic encoding, attentional resource allocation and semantic processing during arithmetic tasks. The current study contributes to the fitness-cognition literature by demonstrating that the benefits of cardiorespiratory fitness extend to arithmetic cognition, which has important implications for the educational environment and the context of learning. PMID:24829556

  2. Memory and comprehension deficits in spatial descriptions of children with non-verbal and reading disabilities.

    PubMed

    Mammarella, Irene C; Meneghetti, Chiara; Pazzaglia, Francesca; Cornoldi, Cesare

    2014-01-01

    The present study investigated the difficulties encountered by children with non-verbal learning disability (NLD) and reading disability (RD) when processing spatial information derived from descriptions, based on the assumption that both groups should find it more difficult than matched controls, but for different reasons, i.e., due to a memory encoding difficulty in cases of RD and to spatial information comprehension problems in cases of NLD. Spatial descriptions from both survey and route perspectives were presented to 9-12-year-old children divided into three groups: NLD (N = 12); RD (N = 12), and typically developing controls (TD; N = 15); then participants completed a sentence verification task and a memory for locations task. The sentence verification task was presented in two conditions: in one the children could refer to the text while answering the questions (i.e., text present condition), and in the other the text was withdrawn (i.e., text absent condition). Results showed that the RD group benefited from the text present condition, but was impaired to the same extent as the NLD group in the text absent condition, suggesting that the NLD children's difficulty is due mainly to their poor comprehension of spatial descriptions, while the RD children's difficulty is due more to a memory encoding problem. These results are discussed in terms of their implications in the neuropsychological profiles of children with NLD or RD, and the processes involved in spatial descriptions.

  3. SENSOR++: Simulation of Remote Sensing Systems from Visible to Thermal Infrared

    NASA Astrophysics Data System (ADS)

    Paproth, C.; Schlüßler, E.; Scherbaum, P.; Börner, A.

    2012-07-01

    During the development process of a remote sensing system, the optimization and the verification of the sensor system are important tasks. To support these tasks, the simulation of the sensor and its output is valuable. This enables the developers to test algorithms, estimate errors, and evaluate the capabilities of the whole sensor system before the final remote sensing system is available and produces real data. The presented simulation concept, SENSOR++, consists of three parts. The first part is the geometric simulation, which calculates where the sensor looks using a ray-tracing algorithm. This also determines whether the observed part of the scene is shadowed or not. The second part describes the radiometry and results in the spectral at-sensor radiance from the visible spectrum to the thermal infrared according to the simulated sensor type. In the case of earth remote sensing, it also includes a model of the radiative transfer through the atmosphere. The final part uses the at-sensor radiance to generate digital images by using an optical and an electronic sensor model. Using SENSOR++ for an optimization requires the additional application of task-specific data processing algorithms. The principle of the simulation approach is explained, all relevant concepts of SENSOR++ are discussed, and first examples of its use are given, such as a camera simulation for a moon lander. Finally, the verification of SENSOR++ is demonstrated.
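
    The three-stage structure can be caricatured end to end (deliberately minimal pseudophysics; every constant and the synthetic scene are invented for this sketch):

    ```python
    # Toy end-to-end illustration of the three stages described above:
    # geometry -> at-sensor radiance -> quantized digital image.
    import numpy as np

    def geometric_stage(rows, cols):
        """Stage 1: per-pixel cosine of the sun angle (a stand-in for ray
        tracing a scene; here a synthetic tilted plane)."""
        _, x = np.mgrid[0:rows, 0:cols]
        return np.clip(0.3 + 0.7 * x / max(cols - 1, 1), 0.0, 1.0)

    def radiometric_stage(cos_sun, albedo=0.25, irradiance=1000.0):
        """Stage 2: Lambertian at-sensor radiance [W m^-2 sr^-1], ignoring
        the atmosphere for brevity."""
        return albedo * irradiance * cos_sun / np.pi

    def sensor_stage(radiance, gain=2.0, bits=8, noise_sigma=0.5, rng=None):
        """Stage 3: convert radiance to digital numbers with read noise."""
        rng = rng or np.random.default_rng(0)
        dn = gain * radiance + rng.normal(0.0, noise_sigma, radiance.shape)
        return np.clip(np.round(dn), 0, 2 ** bits - 1).astype(np.uint8)

    image = sensor_stage(radiometric_stage(geometric_stage(64, 64)))
    print(image.shape, image.min(), image.max())
    ```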

  4. Consolidated View on Space Software Engineering Problems - An Empirical Study

    NASA Astrophysics Data System (ADS)

    Silva, N.; Vieira, M.; Ricci, D.; Cotroneo, D.

    2015-09-01

    Independent software verification and validation (ISVV) has been a key process for engineering quality assessment for decades, and is considered in several international standards. The “European Space Agency (ESA) ISVV Guide” is used for the European Space market to drive the ISVV tasks and plans, and to select applicable tasks and techniques. Software artefacts have room for improvement, given the number of issues found during ISVV tasks. This article presents the analysis of the results of a large set of ISVV issues originating from three different ESA missions, amounting to more than 1000 issues. The study presents the main types, triggers and impacts related to the ISVV issues found and sets the path for a global software engineering improvement based on the most common deficiencies identified for space projects.

  5. Design, analysis, and test verification of advanced encapsulation systems

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Minning, C.

    1982-01-01

    Design sensitivities are established for the development of photovoltaic module criteria and the definition of needed research tasks. The program consists of three phases. In Phase I, analytical models were developed to perform optical, thermal, electrical, and structural analyses on candidate encapsulation systems. From these analyses several candidate systems will be selected for qualification testing during Phase II. Additionally, during Phase II, test specimens of various types will be constructed and tested to determine the validity of the analysis methodology developed in Phase I. In Phase III, a finalized optimum design based on knowledge gained in Phases I and II will be developed. All verification testing was completed during this period. Preliminary results and observations are discussed. Descriptions of the thermal, thermal structural, and structural deflection test setups are included.

  6. Shuttle-tethered satellite system definition study extension

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A system requirements definition and configuration study (Phase B) of the Tethered Satellite System (TSS) was conducted during the period 14 November 1977 to 27 February 1979. Subsequently a study extension was conducted during the period 13 June 1979 to 30 June 1980, for the purpose of refining the requirements identified during the main phase of the study, and studying in some detail the implications of accommodating various types of scientific experiments on the initial verification flight mission. An executive overview is given of the Tethered Satellite System definition developed during the study. The results of specific study tasks undertaken in the extension phase of the study are reported. Feasibility of the Tethered Satellite System has been established with reasonable confidence and the groundwork laid for proceeding with hardware design for the verification mission.

  7. Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.

    1992-01-01

    Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the location of various objects in the task space conforms to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM) developed to provide taskspace database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser radar based range imaging. Through the fusion of taskspace database information and image sensor data, a verifiable taskspace model is generated providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.

  8. A FUNCTIONAL NEUROIMAGING INVESTIGATION OF THE ROLES OF STRUCTURAL COMPLEXITY AND TASK-DEMAND DURING AUDITORY SENTENCE PROCESSING

    PubMed Central

    Love, Tracy; Haist, Frank; Nicol, Janet; Swinney, David

    2009-01-01

    Using functional magnetic resonance imaging (fMRI), this study directly examined an issue that bridges the potential language processing and multi-modal views of the role of Broca’s area: the effects of task-demands in language comprehension studies. We presented syntactically simple and complex sentences for auditory comprehension under three different (differentially complex) task-demand conditions: passive listening, probe verification, and theme judgment. Contrary to many language imaging findings, we found that both simple and complex syntactic structures activated left inferior frontal cortex (L-IFC). Critically, we found activation in these frontal regions increased together with increased task-demands. Specifically, tasks that required greater manipulation and comparison of linguistic material recruited L-IFC more strongly; independent of syntactic structure complexity. We argue that much of the presumed syntactic effects previously found in sentence imaging studies of L-IFC may, among other things, reflect the tasks employed in these studies and that L-IFC is a region underlying mnemonic and other integrative functions, on which much language processing may rely. PMID:16881268

  9. Exploring implementation practices in results-based financing: the case of the verification in Benin.

    PubMed

    Antony, Matthieu; Bertone, Maria Paola; Barthes, Olivier

    2017-03-14

    Results-based financing (RBF) has been introduced in many countries across Africa and a growing literature is building around the assessment of their impact. These studies are usually quantitative and often silent on the paths and processes through which results are achieved and on the wider health system effects of RBF. To address this gap, our study aims at exploring the implementation of an RBF pilot in Benin, focusing on the verification of results. The study is based on action research carried out by authors involved in the pilot as part of the agency supporting the RBF implementation in Benin. While our participant observation and operational collaboration with project's stakeholders informed the study, the analysis is mostly based on quantitative and qualitative secondary data, collected throughout the project's implementation and documentation processes. Data include project documents, reports and budgets, RBF data on service outputs and on the outcome of the verification, daily activity timesheets of the technical assistants in the districts, as well as focus groups with Community-based Organizations and informal interviews with technical assistants and district medical officers. Our analysis focuses on the actual practices of quantitative, qualitative and community verification. Results show that the verification processes are complex, costly and time-consuming, and in practice they end up differing from what designed originally. We explore the consequences of this on the operation of the scheme, on its potential to generate the envisaged change. We find, for example, that the time taken up by verification procedures limits the time available for data analysis and feedback to facility staff, thus limiting the potential to improve service delivery. Verification challenges also result in delays in bonus payment, which delink effort and reward. Additionally, the limited integration of the verification activities of district teams with their routine tasks causes a further verticalization of the health system. Our results highlight the potential disconnect between the theory of change behind RBF and the actual scheme's implementation. The implications are relevant at methodological level, stressing the importance of analyzing implementation processes to fully understand results, as well as at operational level, pointing to the need to carefully adapt the design of RBF schemes (including verification and other key functions) to the context and to allow room to iteratively modify it during implementation. They also question whether the rationale for thorough and costly verification is justified, or rather adaptations are possible.

  10. Space station data system analysis/architecture study. Task 2: Options development, DR-5. Volume 3: Programmatic options

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Task 2 in the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make design/programmatic decisions. This volume identifies the preferred options in the programmatic category and characterizes these options with respect to performance attributes, constraints, costs, and risks. The programmatic category includes methods used to administrate/manage the development, operation and maintenance of the SSDS. The specific areas discussed include standardization/commonality; systems management; and systems development, including hardware procurement, software development and system integration, test and verification.

  11. The integration of a mesh reflector to a 15-foot box truss structure. Task 3: Box truss analysis and technology development

    NASA Technical Reports Server (NTRS)

    Bachtell, E. E.; Thiemet, W. F.; Morosow, G.

    1987-01-01

    To demonstrate the design and integration of a reflective mesh surface to a deployable truss structure, a mesh reflector was installed on a 15-foot box truss cube. The specific features demonstrated include: (1) sewing seams in reflective mesh; (2) stretching mesh to the desired preload; (3) installation of surface tie cords; (4) installation of the reflective surface on the truss; (5) setting of the reflective surface; (6) verification of surface shape/accuracy; (7) storage and deployment; (8) repeatability of the reflector surface; and (9) comparison of the surface with the shape predicted using analytical methods developed under a previous task.

  12. The Stanford how things work project

    NASA Technical Reports Server (NTRS)

    Fikes, Richard; Gruber, Tom; Iwasaki, Yumi

    1994-01-01

    We provide an overview of the Stanford How Things Work (HTW) project, an ongoing integrated collection of research activities in the Knowledge Systems Laboratory at Stanford University. The project is developing technology for representing knowledge about engineered devices in a form that enables the knowledge to be used in multiple systems for multiple reasoning tasks, and reasoning methods that enable the represented knowledge to be effectively applied to the core engineering task of simulating and analyzing device behavior. The central new capabilities currently being developed in the project are automated assistance with model formulation and with verification that a design for an electro-mechanical device satisfies its functional specification.

  13. Verifying the Modal Logic Cube Is an Easy Task (For Higher-Order Automated Reasoners)

    NASA Astrophysics Data System (ADS)

    Benzmüller, Christoph

    Prominent logics, including quantified multimodal logics, can be elegantly embedded in simple type theory (classical higher-order logic). Furthermore, off-the-shelf reasoning systems for simple type theory exist that can be uniformly employed for reasoning within and about embedded logics. In this paper we focus on reasoning about modal logics and exploit our framework for the automated verification of inclusion and equivalence relations between them. Related work has applied first-order automated theorem provers to the task. Our solution achieves significant improvements, most notably with respect to the elegance and simplicity of the problem encodings as well as with respect to automation performance.
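
    To make the embedding idea concrete, the minimal Python sketch below mirrors its semantic core: modal formulas become predicates over the worlds of a Kripke frame, the box operator quantifies over accessible worlds, and a claim about a logic (here, that axiom T is valid on reflexive frames) becomes a directly checkable statement about frames. This encoding is a toy illustration under those assumptions, not Benzmüller's higher-order embedding.

        from itertools import product

        N = 3  # worlds per finite frame; small enough to enumerate exhaustively

        def box(R, p):
            # Kripke semantics of the modal box: p holds in every R-successor.
            return lambda w: all(p(v) for v in range(N) if R[w][v])

        def implies(a, b):
            return lambda w: (not a(w)) or b(w)

        def valid_on(R, schema):
            # Valid on a frame: true at every world under every valuation of p.
            for bits in product([False, True], repeat=N):
                p = lambda w, bits=bits: bits[w]
                f = schema(R, p)
                if not all(f(w) for w in range(N)):
                    return False
            return True

        axiom_T = lambda R, p: implies(box(R, p), p)  # T: box p -> p

        # Modal logic T extends K with reflexivity; axiom T holds on
        # every reflexive frame.
        for flat in product([False, True], repeat=N * N):
            R = [list(flat[i * N:(i + 1) * N]) for i in range(N)]
            if all(R[w][w] for w in range(N)):
                assert valid_on(R, axiom_T)
        print("axiom T valid on all reflexive 3-world frames")

    In the higher-order setting, this quantification over valuations and frames is expressed inside the logic itself, which is what lets off-the-shelf provers settle such inclusion and equivalence questions automatically.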

  14. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering, and examine their effects on performance metrics including processing time and scalability. Safety-critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety-critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
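
    As a minimal illustration of the bounded view of LTL that such reductions rest on, the sketch below evaluates an LTL formula over an ultimately periodic ("lasso") trace; bounded satisfiability and model checking search over exactly such lassos. The tuple encoding of formulas is an assumption of this sketch, not the paper's symbolic representation.

        def holds(f, trace, loop, i):
            # Evaluate an LTL formula at position i of a lasso trace:
            # positions loop..len(trace)-1 repeat forever.
            op = f[0]
            if op == 'true':
                return True
            if op == 'ap':      # atomic proposition
                return f[1] in trace[i]
            if op == 'not':
                return not holds(f[1], trace, loop, i)
            if op == 'and':
                return holds(f[1], trace, loop, i) and holds(f[2], trace, loop, i)
            if op == 'X':       # next
                return holds(f[1], trace, loop, i + 1 if i + 1 < len(trace) else loop)
            if op == 'U':       # until: each reachable position is visited once
                path, j = [], i
                while j not in path:
                    path.append(j)
                    j = j + 1 if j + 1 < len(trace) else loop
                for k, pos in enumerate(path):
                    if holds(f[2], trace, loop, pos):
                        return all(holds(f[1], trace, loop, q) for q in path[:k])
                return False
            raise ValueError(op)

        # 'p holds infinitely often', G F p, written as not F not F p:
        Fp = ('U', ('true',), ('ap', 'p'))
        GFp = ('not', ('U', ('true',), ('not', Fp)))
        print(holds(GFp, [set(), {'p'}, set(), {'p'}], 2, 0))  # True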

  15. An Analysis of USSPACECOM’s Space Surveillance Network (SSN) Sensor Tasking Methodology

    DTIC Science & Technology

    1992-12-01

    [Extracted table-of-contents fragments only: collateral sensors; contributing sensors; Space Surveillance Network; the state solution; the state-transition matrix; model verification; differential corrector.]

  16. Gains to L2 Listeners from Reading while Listening vs. Listening Only in Comprehending Short Stories

    ERIC Educational Resources Information Center

    Chang, Anna C.-S.

    2009-01-01

    This study builds on the concept that aural-written verification helps L2 learners develop auditory discrimination skills, refine word recognition and gain awareness of form-meaning relationships, by comparing two modes of aural input: reading while listening (R/L) vs. listening only (L/O). Two test tasks (sequencing and gap filling) of 95 items,…

  17. Quantified Event Automata: Towards Expressive and Efficient Runtime Monitors

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Falcone, Ylies; Havelund, Klaus; Reger, Giles; Rydeheard, David

    2012-01-01

    Runtime verification is the process of checking a property on a trace of events produced by the execution of a computational system. Runtime verification techniques have recently focused on parametric specifications, where events take data values as parameters. These techniques exist on a spectrum inhabited by both efficient and expressive techniques; these characteristics are usually shown to be conflicting, in that state-of-the-art solutions obtain efficiency at the cost of expressiveness and vice versa. To seek a solution to this conflict, we explore a new point on the spectrum by defining an alternative runtime verification approach. We introduce a new formalism for concisely capturing expressive specifications with parameters. Our technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.
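
    A minimal sketch of the parametric-monitoring idea, one automaton copy per data value (the "slicing" that parametric runtime verification performs), is shown below; it is illustrative only and does not implement the Quantified Event Automata formalism itself.

        from collections import defaultdict

        def monitor(events):
            # Property: every open(f) is eventually followed by close(f),
            # checked independently for each binding of the parameter f.
            state = defaultdict(lambda: 'closed')
            for name, f in events:
                if name == 'open':
                    if state[f] == 'open':
                        return f'violation: {f} opened twice'
                    state[f] = 'open'
                elif name == 'close':
                    if state[f] != 'open':
                        return f'violation: {f} closed while not open'
                    state[f] = 'closed'
            dangling = [f for f, s in state.items() if s == 'open']
            return f'violation: never closed {dangling}' if dangling else 'ok'

        print(monitor([('open', 'a'), ('open', 'b'), ('close', 'a'), ('close', 'b')]))
        print(monitor([('open', 'a'), ('close', 'b')]))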

  18. Species verification of Dalbergia nigra and Dalbergia spruceana samples in the wood collection of the Forest Products Laboratory

    Treesearch

    Michael C. Wiemann; Edgard O. Espinoza

    2017-01-01

    To evade endangered timber species laws, unscrupulous importers sometimes attempt to pass protected Dalbergia nigra off as the look-alike but unprotected Dalbergia spruceana. Wood density and fluorescence properties are sometimes used to identify the species. Although these properties are useful and do not require special equipment,...

  19. 41 CFR 102-118.90 - If my agency orders transportation and/or transportation services with a Government contractor...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    [Regulation excerpt, Federal Management Regulation (41 CFR, Public Contracts and Property Management): where an agency orders transportation or transportation services with a Government contractor, a prepayment audit, including the prepayment audit verification process, can be used.]

  20. Satellite Calibration and Verification of Remotely Sensed Cloud and Radiation Properties Using ARM UAV Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Charlock, Thomas P.

    1998-01-01

    The work proposed under this agreement was designed to validate and improve remote sensing of cloud and radiation properties in the atmosphere for climate studies with special emphasis on the use of satellites for monitoring these parameters to further the goals of the Atmospheric Radiation Measurement (ARM) Program.

  1. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-controlled parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance discipline integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-controlled parameters noted were shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures that is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  2. 48 CFR 245.603-70 - Contractor performance of plant clearance duties.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    [Regulation excerpt (48 CFR 245.603-70, contractor performance of plant clearance duties): duties include verifying Government-furnished property against inventory schedules and evaluating the physical, quantitative, and technical allocability of contractor inventory prior to disposal using Standard Form 1423, Inventory Verification Survey.]

  3. Control system of the inspection robots group applying auctions and multi-criteria analysis for task allocation

    NASA Astrophysics Data System (ADS)

    Panfil, Wawrzyniec; Moczulski, Wojciech

    2017-10-01

    Presented is a control system for a group of mobile robots intended to carry out inspection missions. The main research problem was to define a control system that facilitates cooperation among the robots in carrying out the committed inspection tasks. Many well-known control systems use auctions for task allocation, where the subject of an auction is a task to be allocated. In the case of missions characterized by a much larger number of tasks than of robots, however, it seems better if the robots (instead of the tasks) are the subjects of the auctions. The second identified problem concerns one-sided evaluation of robot-to-task fitness. Assessing robot-to-task fitness and task attractiveness simultaneously should positively affect the overall effectiveness of the multi-robot system. The elaborated system allows tasks to be assigned to robots using various methods for evaluating the fitness between robots and tasks, and using several task allocation methods. A multi-criteria analysis method is proposed that is composed of two assessments: a robot's competitive position for a task among the other robots, and a task's attractiveness for a robot among the other tasks. Furthermore, task allocation methods applying this multi-criteria analysis are proposed. Both the elaborated system and the proposed task allocation methods were verified in simulated experiments. The object under test was a group of inspection mobile robots constituting a virtual counterpart of a real mobile-robot group.
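
    The toy sketch below illustrates the flavor of such a scheme: robots are the subjects of the auction, and each remaining task is scored by a weighted combination of robot-to-task fitness and task-to-robot attractiveness. The scoring functions, data, and equal weights are assumptions of this sketch, not the authors' method.

        def allocate(robots, tasks, fitness, attractiveness):
            # Greedy auction: each robot in turn is "sold" to the task
            # maximizing the combined two-sided score.
            assignment, unassigned = {}, set(tasks)
            for robot in robots:
                if not unassigned:
                    break
                best = max(unassigned,
                           key=lambda t: 0.5 * fitness(robot, t)
                                       + 0.5 * attractiveness(t, robot))
                assignment[robot] = best
                unassigned.remove(best)
            return assignment

        # Hypothetical example: fitness favors nearby tasks, attractiveness urgency.
        robots = ['r1', 'r2']
        tasks = [('inspect_valve', 2.0), ('read_gauge', 5.0)]  # (name, urgency)
        dist = {('r1', 'inspect_valve'): 1.0, ('r1', 'read_gauge'): 4.0,
                ('r2', 'inspect_valve'): 3.0, ('r2', 'read_gauge'): 1.0}
        fit = lambda r, t: 1.0 / (1.0 + dist[(r, t[0])])
        attr = lambda t, r: t[1] / 5.0
        print(allocate(robots, tasks, fit, attr))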

  4. Dosimetric characterization and output verification for conical brachytherapy surface applicators. Part I. Electronic brachytherapy source

    PubMed Central

    Fulkerson, Regina K.; Micka, John A.; DeWerd, Larry A.

    2014-01-01

    Purpose: Historically, treatment of malignant surface lesions has been achieved with linear accelerator based electron beams or superficial x-ray beams. Recent developments in the field of brachytherapy now allow for the treatment of surface lesions with specialized conical applicators placed directly on the lesion. Applicators are available for use with high dose rate (HDR) 192Ir sources, as well as electronic brachytherapy sources. Part I of this paper will discuss the applicators used with electronic brachytherapy sources; Part II will discuss those used with HDR 192Ir sources. Although the use of these applicators has gained in popularity, the dosimetric characteristics including depth dose and surface dose distributions have not been independently verified. Additionally, there is no recognized method of output verification for quality assurance procedures with applicators like these. Existing dosimetry protocols available from the AAPM bookend the cross-over characteristics of a traditional brachytherapy source (as described by Task Group 43) being implemented as a low-energy superficial x-ray beam (as described by Task Group 61) as observed with the surface applicators of interest. Methods: This work aims to create a cohesive method of output verification that can be used to determine the dose at the treatment surface as part of a quality assurance/commissioning process for surface applicators used with HDR electronic brachytherapy sources (Part I) and 192Ir sources (Part II). Air-kerma rate measurements for the electronic brachytherapy sources were completed with an Attix Free-Air Chamber, as well as several models of small-volume ionization chambers to obtain an air-kerma rate at the treatment surface for each applicator. Correction factors were calculated using MCNP5 and EGSnrc Monte Carlo codes in order to determine an applicator-specific absorbed dose to water at the treatment surface from the measured air-kerma rate. Additionally, relative dose measurements of the surface dose distributions and characteristic depth dose curves were completed in-phantom. Results: Theoretical dose distributions and depth dose curves were generated for each applicator and agreed well with the measured values. A method of output verification was created that allows users to determine the applicator-specific dose to water at the treatment surface based on a measured air-kerma rate. Conclusions: The novel output verification methods described in this work will reduce uncertainties in dose delivery for treatments with these kinds of surface applicators, ultimately improving patient care. PMID:24506635
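
    In symbols, the proposed output check reduces to a single conversion (generic notation assumed here, not the paper's):

        \dot{D}_{w,\mathrm{surf}} = \dot{K}_{\mathrm{air}} \cdot C_{\mathrm{appl}}

    where \dot{K}_{air} is the air-kerma rate measured at the treatment surface and C_appl is the Monte Carlo-derived, applicator-specific factor converting measured air kerma to absorbed dose to water at that surface.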

  5. Psychometric properties of startle and corrugator response in NPU, Affective Picture Viewing, and Resting State tasks

    PubMed Central

    Kaye, Jesse T.; Bradford, Daniel E.; Curtin, John J.

    2016-01-01

    The current study provides a comprehensive evaluation of critical psychometric properties of commonly used psychophysiology laboratory tasks/measures within the NIMH RDoC. Participants (N = 128) completed the No Shock, Predictable Shock, Unpredictable Shock (NPU) task, Affective Picture Viewing task, and Resting State task at two study visits separated by one week. We examined potentiation/modulation scores in NPU (predictable or unpredictable shock vs. no shock) and Affective Picture Viewing tasks (pleasant or unpleasant vs. neutral pictures) for startle and corrugator responses with two commonly used quantification methods. We quantified startle potentiation/modulation scores with raw and standardized responses. We quantified corrugator potentiation/modulation in the time and frequency domains. We quantified general startle reactivity in the Resting State Task as the mean raw startle response during the task. For these three tasks, two measures, and two quantification methods we evaluated effect size robustness and stability, internal consistency (i.e., split-half reliability), and one-week temporal stability. The psychometric properties of startle potentiation in the NPU task were good but concerns were noted for corrugator potentiation in this task. Some concerns also were noted for the psychometric properties of both startle and corrugator modulation in the Affective Picture Viewing task, in particular for pleasant picture modulation. Psychometric properties of general startle reactivity in the Resting State task were good. Some salient differences in the psychometric properties of the NPU and Affective Picture Viewing tasks were observed within and across quantification methods. PMID:27167717

  6. Task-dependent and task-independent neurovascular responses to syntactic processing

    PubMed Central

    Caplan, David; Chen, Evan; Waters, Gloria

    2008-01-01

    The neural basis for syntactic processing was studied using event-related fMRI to determine the locations of BOLD signal increases in the contrast of syntactically complex sentences with center-embedded, object-extracted relative clauses and syntactically simple sentences with right-branching, subject-extracted relative clauses in a group of 15 participants in three tasks. In a sentence verification task, participants saw a target sentence in one of these two syntactic forms, followed by a probe in a simple active form, and determined whether the probe expressed a proposition in the target. In a plausibility judgment task, participants determined whether a sentence in one of these two syntactic forms was plausible or implausible. Finally, in a non-word detection task, participants determined whether a sentence in one of these two syntactic forms contained only real words or a non-word. BOLD signal associated with the syntactic contrast increased in the left posterior inferior frontal gyrus in non-word detection and in a widespread set of areas in the other two tasks. We conclude that the BOLD activity in the left posterior inferior frontal gyrus reflects syntactic processing independent of concurrent cognitive operations and the more widespread areas of activation reflect the use of strategies and the use of the products of syntactic processing to accomplish tasks. PMID:18387556

  7. Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks

    NASA Technical Reports Server (NTRS)

    Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia

    2017-01-01

    Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft at a large distance from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. It is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.

  8. Level-2 perspectives computed quickly and spontaneously: Evidence from eight- to 9.5-year-old children.

    PubMed

    Elekes, Fruzsina; Varga, Máté; Király, Ildikó

    2017-11-01

    It has been widely assumed that computing how a scene looks from another perspective (level-2 perspective taking, PT) is an effortful process, as opposed to the automatic capacity of tracking visual access to objects (level-1 PT). Recently, adults have been found to compute both forms of visual perspective in a quick but context-sensitive way, indicating that the two functions share more features than previously assumed. However, the developmental literature still shows a dissociation between automatic level-1 and effortful level-2 PT. In the current paper, we report an experiment showing that in a minimally social situation, participating in a number verification task with an adult confederate, eight- to 9.5-year-old children demonstrate online level-2 PT capacities similar to those of adults. Future studies need to address whether online PT shows selectivity in children as well and develop paradigms adequate to test preschoolers' online level-2 PT abilities. Statement of Contribution. What is already known on this subject? Adults can access how objects appear to others (level-2 perspective) spontaneously and online. Online level-1, but not level-2, perspective taking (PT) has been documented in school-aged children. What does the present study add? Eight- to 9.5-year-olds performed a number verification task with a confederate who had the same task. Children showed perspective interference similar to adults, indicating spontaneous level-2 PT. Not only agent-object relations but also object appearances are computed online by eight- to 9.5-year-olds.

  9. Verification of Spatial Forecasts of Continuous Meteorological Variables Using Categorical and Object-Based Methods

    DTIC Science & Technology

    2016-08-01

    John W Raby and Huaqing Cai, Computational and Information Sciences Directorate, ARL. Approved for public release. [Abstract not included in the extracted record.]

  10. Inertial Upper Stage (IUS) software analysis

    NASA Technical Reports Server (NTRS)

    Grayson, W. L.; Nickel, C. E.; Rose, P. L.; Singh, R. P.

    1979-01-01

    The Inertial Upper Stage (IUS) System, an extension of the Space Transportation System (STS) operating regime to include higher orbits, orbital plane changes, geosynchronous orbits, and interplanetary trajectories, is presented. The IUS software design, the IUS software interfaces with other systems, and the cost effectiveness of software verification are described. Tasks of the IUS discussed include: (1) design analysis; (2) validation requirements analysis; (3) interface analysis; and (4) requirements analysis.

  11. Orbital transfer vehicle engine technology: Baffled injector design, fabrication, and verification

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1991-01-01

    New technologies for space-based, reusable, throttleable, cryogenic orbit transfer propulsion are being evaluated. Supporting tasks for the design of a dual expander cycle engine thrust chamber are documented. The purpose of the studies was to research the materials used in the thrust chamber design, the supporting fabrication methods necessary to complete the design, and the modification of the injector element for optimum injector/chamber compatibility.

  12. Formal Assurance for Cognitive Architecture Based Autonomous Agent

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, Siddhartha; Eskridge, Thomas; Neogi, Natasha; Carvalho, Marco

    2017-01-01

    Autonomous systems are designed and deployed in different modeling paradigms, each of which focuses on specific concepts in designing the system. We focus our effort on the use of cognitive architectures to design autonomous agents that collaborate with humans to accomplish tasks in a mission. Our research centers on introducing formal assurance methods to verify the behavior of agents designed in Soar by translating the agent to the formal verification environment Uppaal.

  13. Experimental verification of a new laminar airfoil: A project for the graduate program in aeronautics

    NASA Technical Reports Server (NTRS)

    Nicks, Oran W.; Korkan, Kenneth D.

    1991-01-01

    Two reports on student activities to determine the properties of a new laminar airfoil, delivered at a conference on soaring technology, are presented. The papers discuss a wind tunnel investigation and analysis of the SM701 airfoil and verification of the SM701 airfoil's aerodynamic characteristics using theoretical techniques. The papers are based on a combination of analytical design, hands-on model fabrication, wind tunnel calibration and testing, data acquisition and analysis, and comparison of test results with theory.

  14. Bounded Parametric Model Checking for Elementary Net Systems

    NASA Astrophysics Data System (ADS)

    Knapik, Michał; Szreter, Maciej; Penczek, Wojciech

    Bounded Model Checking (BMC) is an efficient verification method for reactive systems. So far, BMC has been applied to the verification of properties expressed in (timed) modal logics, but never to their parametric extensions. In this paper we show, for the first time, that BMC can be extended to PRTECTL, a parametric extension of the existential version of CTL. To this end we define a bounded semantics and a translation from PRTECTL to SAT. The implementation of the algorithm for Elementary Net Systems is presented, together with some experimental results.
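
    A minimal sketch of the bounded-semantics idea behind such translations: reachability of a bad state is decided by unrolling the transition relation a fixed number of steps and searching for a satisfying path, which is exactly the query a BMC engine encodes as a SAT formula. Here brute-force enumeration stands in for the SAT solver, and the toy counter system is an assumption of the sketch (PRTECTL adds parameters on top of such a bounded semantics).

        from itertools import product

        def bmc_reach(n_bits, init, trans, bad, bound):
            # Search all unrollings of length 0..bound for a path
            # init -> ... -> bad; a real engine emits this as CNF.
            states = list(product([0, 1], repeat=n_bits))
            for k in range(bound + 1):
                for path in product(states, repeat=k + 1):
                    if (init(path[0])
                            and all(trans(path[i], path[i + 1]) for i in range(k))
                            and bad(path[k])):
                        return path  # counterexample of length k
            return None

        # Toy 2-bit counter: low bit toggles, high bit flips on carry.
        init = lambda s: s == (0, 0)
        trans = lambda s, t: t == (s[0] ^ s[1], 1 - s[1])
        bad = lambda s: s == (1, 1)
        print(bmc_reach(2, init, trans, bad, bound=4))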

  15. A High-Level Language for Modeling Algorithms and Their Properties

    NASA Astrophysics Data System (ADS)

    Akhtar, Sabina; Merz, Stephan; Quinson, Martin

    Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically oriented formalisms such as state transition systems. This conceptual gap contributes to hindering the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the model checker TLC.
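
    For contrast with symbolic methods, the sketch below shows the explicit-state exploration that a checker in the style of TLC performs: enumerate all reachable states breadth-first and report a shortest trace to any state violating an invariant. A toy illustration, not TLC itself.

        from collections import deque

        def check_invariant(initial, successors, invariant):
            # BFS over the reachable state space; on a violation, walk
            # parent links back to produce a counterexample trace.
            seen, queue, parent = {initial}, deque([initial]), {initial: None}
            while queue:
                s = queue.popleft()
                if not invariant(s):
                    trace = []
                    while s is not None:
                        trace.append(s)
                        s = parent[s]
                    return list(reversed(trace))
                for t in successors(s):
                    if t not in seen:
                        seen.add(t)
                        parent[t] = s
                        queue.append(t)
            return None  # invariant holds in every reachable state

        succ = lambda s: [(s + 1) % 6, (2 * s) % 6]
        print(check_invariant(0, succ, lambda s: s != 4))  # [0, 1, 2, 4]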

  16. Comparative study of the swabbing properties of seven commercially available swab materials for cleaning verification.

    PubMed

    Corrigan, Damion K; Piletsky, Sergey; McCrossen, Sean

    2009-01-01

    This article compares the technical performance of several commercially available swabbing materials for the purpose of cleaning verification. A steel surface was soiled with solutions of acetaminophen, nicotinic acid, diclofenac, and benzamidine and wiped with each swabbing material. The compounds were extracted with water or ethanol (depending on the polarity of the analyte) and their concentrations in the extracts were quantified spectrophotometrically. The study also investigated swab debris left on the wiped surface. The swab performances were compared and the best swab material was identified.
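
    The figure of merit behind such comparisons is the recovery: the fraction of analyte spiked onto the surface that the swab-and-extraction step returns. A toy calculation (all amounts and the acceptance threshold are hypothetical) might look like:

        def recovery_percent(spiked_ug, extracted_ug):
            # Cleaning-verification recovery for one swab material.
            return 100.0 * extracted_ug / spiked_ug

        for swab, extracted in [('polyester', 8.7), ('cotton', 7.9), ('foam', 9.4)]:
            r = recovery_percent(10.0, extracted)
            print(f'{swab}: {r:.0f}%', 'pass' if r >= 80 else 'fail')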

  17. On supertaskers and the neural basis of efficient multitasking.

    PubMed

    Medeiros-Ward, Nathan; Watson, Jason M; Strayer, David L

    2015-06-01

    The present study used brain imaging to determine the neural basis of individual differences in multitasking, the ability to successfully perform at least two attention-demanding tasks at once. Multitasking is mentally taxing and, therefore, should recruit the prefrontal cortex to maintain task goals when coordinating attentional control and managing the cognitive load. To investigate this possibility, we used functional neuroimaging to assess neural activity in both extraordinary multitaskers (Supertaskers) and control subjects who were matched on working memory capacity. Participants performed a challenging dual N-back task in which auditory and visual stimuli were presented simultaneously, requiring independent and continuous maintenance, updating, and verification of the contents of verbal and spatial working memory. As task requirements and cognitive load grew with increasing N-back level, the multitasking of Supertaskers, relative to the controls, was characterized by more efficient recruitment of the anterior cingulate and posterior frontopolar prefrontal cortices. Results are interpreted using neuropsychological and evolutionary perspectives on individual differences in multitasking ability and the neural correlates of attentional control.

  18. Importance of education and competence maintenance in metrology field (measurement science)

    NASA Astrophysics Data System (ADS)

    Dobiliene, J.; Meskuotiene, A.

    2015-02-01

    Certain tasks in the field of metrology require trained staff who fulfill specific requirements. Metrologists are responsible for the proper functioning of devices spanning a wide spectrum of measurements, and people who perform measurements related to safety, security, or everyday life must be able to rely on the trueness of their results and conclusions. Ensuring the competence of the specialists responsible for reliable measuring instruments is therefore essential. Usually these specialists hold a university degree and perform highly specialized, rather narrow tasks in science, industry, or laboratories: for example, type approval of measuring instruments, or calibration and verification. Because the number of such employees and tasks in legal metrology is relatively small, this paper focuses on the significance of training and qualification of legal metrology officers.

  19. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience, partly because the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity between the left and right views on a flat screen instead of by a real object at the corresponding location. Thus, it may be more tiring to watch 3D than 2D. This question is investigated in this contribution by a subjective experiment. A search task experiment was conducted and the behavior of the participants recorded with an eye tracker. Several indicators, both of low-level perception and of task performance itself, are evaluated. In addition, two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment.

  20. Discrete Abstractions of Hybrid Systems: Verification of Safety and Application to User-Interface Design

    NASA Technical Reports Server (NTRS)

    Oishi, Meeko; Tomlin, Claire; Degani, Asaf

    2003-01-01

    Human interaction with complex hybrid systems involves the user, the automation's discrete mode logic, and the underlying continuous dynamics of the physical system. Often the user-interface of such systems displays a reduced set of information about the entire system. In safety-critical systems, how can we identify user-interface designs which do not have adequate information, or which may confuse the user? Here we describe a methodology, based on hybrid system analysis, to verify that a user-interface contains information necessary to safely complete a desired procedure or task. Verification within a hybrid framework allows us to account for the continuous dynamics underlying the simple, discrete representations displayed to the user. We provide two examples: a car traveling through a yellow light at an intersection and an aircraft autopilot in a landing/go-around maneuver. The examples demonstrate the general nature of this methodology, which is applicable to hybrid systems (not fully automated) which have operational constraints we can pose in terms of safety. This methodology differs from existing work in hybrid system verification in that we directly account for the user's interactions with the system.

  1. Fingerprint Identification Using SIFT-Based Minutia Descriptors and Improved All Descriptor-Pair Matching

    PubMed Central

    Zhou, Ru; Zhong, Dexing; Han, Jiuqiang

    2013-01-01

    The performance of conventional minutiae-based fingerprint authentication algorithms degrades significantly when dealing with low-quality fingerprints containing many cuts or scratches. A similar degradation of the minutiae-based algorithms is observed when small overlapping areas appear because of the quite narrow width of the sensors. Based on the detection of minutiae, Scale Invariant Feature Transformation (SIFT) descriptors are employed to fulfill verification tasks in the above difficult scenarios. However, the original SIFT algorithm is not suitable for fingerprints because of: (1) the similar patterns of parallel ridges; and (2) high computational resource consumption. To enhance the efficiency and effectiveness of the algorithm for fingerprint verification, we propose a SIFT-based Minutia Descriptor (SMD) that improves the SIFT algorithm through image processing, descriptor extraction and matching. A two-step fast matcher, named improved All Descriptor-Pair Matching (iADM), is also proposed to implement 1:N verifications in real time. Fingerprint Identification using SMD and iADM (FISiA) achieved a significant improvement in accuracy on representative databases compared with the conventional minutiae-based method. The speed of FISiA also meets real-time requirements. PMID:23467056
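
    For orientation, the generic SIFT matching pipeline that the paper builds on is sketched below using OpenCV's standard API with Lowe's ratio test; the SMD descriptor and iADM matcher themselves are not reproduced, and the image file names are placeholders.

        import cv2

        def match_score(img_a, img_b, ratio=0.75):
            # Detect SIFT keypoints/descriptors and count ratio-test
            # survivors; a higher score suggests the same finger.
            sift = cv2.SIFT_create()
            _, des_a = sift.detectAndCompute(img_a, None)
            _, des_b = sift.detectAndCompute(img_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
                    if m.distance < ratio * n.distance]
            return len(good)

        # probe = cv2.imread('probe.png', cv2.IMREAD_GRAYSCALE)
        # gallery = cv2.imread('gallery.png', cv2.IMREAD_GRAYSCALE)
        # print(match_score(probe, gallery))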

  2. A model of the human supervisor

    NASA Technical Reports Server (NTRS)

    Kok, J. J.; Vanwijk, R. A.

    1977-01-01

    A general model of the human supervisor's behavior is given. Submechanisms of the model include the observer/reconstructor, decision making, and the controller. A set of hypotheses is postulated for the relations between the task variables and the parameters of the different submechanisms of the model. Verification of the model hypotheses is considered using variations in the task variables. An approach is suggested for the identification of the model parameters which makes use of a multidimensional error criterion. Each element of this multidimensional criterion corresponds to a certain aspect of the supervisor's behavior and is directly related to a particular part of the model and its parameters. This approach offers good possibilities for an efficient parameter adjustment procedure.

  3. Close to real-time robust pedestrian detection and tracking

    NASA Astrophysics Data System (ADS)

    Lipetski, Y.; Loibner, G.; Sidla, O.

    2015-03-01

    Fully automated video-based pedestrian detection and tracking is a challenging task with many practical and important applications. We present our work aimed at robust and simultaneously close to real-time tracking of pedestrians. The presented approach is robust to occlusions and lighting conditions and is general enough to be applied to arbitrary video data. The core tracking approach is built upon the tracking-by-detection principle. We describe our cascaded HOG detector with successive CNN verification in detail. For the tracking and re-identification tasks, we performed an extensive analysis of appearance-based features as well as their combinations. The tracker was tested on many hours of video data for different scenarios; the results are presented and discussed.

  4. Engineering support activities for the Apollo 17 Surface Electrical Properties Experiment.

    NASA Technical Reports Server (NTRS)

    Cubley, H. D.

    1972-01-01

    Description of the engineering support activities which were required to ensure fulfillment of objectives specified for the Apollo 17 SEP (Surface Electrical Properties) Experiment. Attention is given to procedural steps involving verification of hardware acceptability to the astronauts, computer simulation of the experiment hardware, field trials, receiver antenna pattern measurements, and the qualification test program.

  5. A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events

    NASA Astrophysics Data System (ADS)

    Kholodovsky, V.

    2017-12-01

    Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often independently used to model those phenomena. To better assess the performance of the climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification for benchmarking extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: (1) dimension reduction; (2) geometric domain mapping; and (3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.

  6. European Train Control System: A Case Study in Formal Verification

    NASA Astrophysics Data System (ADS)

    Platzer, André; Quesel, Jan-David

    Complex physical systems have several degrees of freedom. They only work correctly when their control parameters obey corresponding constraints. Based on the informal specification of the European Train Control System (ETCS), we design a controller for its cooperation protocol. For its free parameters, we successively identify constraints that are required to ensure collision freedom. We formally prove the parameter constraints to be sharp by characterizing them equivalently in terms of reachability properties of the hybrid system dynamics. Using our deductive verification tool KeYmaera, we formally verify controllability, safety, liveness, and reactivity properties of the ETCS protocol that entail collision freedom. We prove that the ETCS protocol remains correct even in the presence of perturbation by disturbances in the dynamics. We verify that safety is preserved when a PI controlled speed supervision is used.

  7. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for the widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
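
    A minimal NumPy sketch of the analyzed construction follows: an i.i.d. Gaussian random matrix, seeded by a revocable key, projects the biometric feature vector to a template. The same key with a slightly perturbed feature yields a similar template (similarity preservation), while a new key yields an uncorrelated one (changeability). Dimensions, seeds, and noise level are illustrative assumptions.

        import numpy as np

        def make_template(feature, key_seed, out_dim=64):
            # Project with a Gaussian matrix derived from a user/application key.
            d = feature.shape[0]
            R = np.random.default_rng(key_seed).normal(
                0.0, 1.0 / np.sqrt(out_dim), size=(out_dim, d))
            return R @ feature

        rng = np.random.default_rng(0)
        x = rng.normal(size=256)                      # stand-in feature vector
        t1 = make_template(x, key_seed=42)
        t2 = make_template(x + 0.05 * rng.normal(size=256), key_seed=42)
        t3 = make_template(x, key_seed=99)            # revoked, re-issued key

        cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        print(cos(t1, t2))  # high: same key, similar feature
        print(cos(t1, t3))  # near zero: new key makes the old template useless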

  8. Assessment of Thermal Control and Protective Coatings

    NASA Technical Reports Server (NTRS)

    Mell, Richard J.

    2000-01-01

    This final report covers the tasks performed during the contract period, which included spacecraft coating development, testing, and applications. Five coatings, comprising a bright yellow handrail marker coating, a protective overcoat for ceramic coatings, and specialized primers for composite (polymer) surfaces, were developed and commercialized by AZ Technology during this program. Most of the coatings have passed space environmental stability requirements via ground tests and/or flight verification. Marker coatings and protective overcoats were successfully flown on the Passive Optical Sample Assembly (POSA) and the Optical Properties Monitor (OPM) experiments on the Russian space station MIR. To date, most of the coatings developed and/or modified during this program have been utilized on the International Space Station (ISS) and other spacecraft. For ISS, AZ Technology manufactured the UNITY emblem now flown on the NASA UNITY node (Node 1), which is docked to the Russian Zarya (FGB), utilizing the colored marker coatings (white, blue, red) developed by AZ Technology. The UNITY emblem included the US flag, the Unity logo, and the NASA logo on a white background, applied to a Beta cloth substrate.

  9. Analysis of film cooling in rocket nozzles

    NASA Technical Reports Server (NTRS)

    Woodbury, Keith A.

    1993-01-01

    This report summarizes the findings on the NASA contract NAG8-212, Task No. 3. The overall project consists of three tasks, all of which have been successfully completed. In addition, some supporting supplemental work, not required by the contract, has been performed and is documented herein. Task 1 involved the modification of the wall functions in the code FDNS (Finite Difference Navier-Stokes) to use a Reynolds Analogy-based method. This task was completed in August, 1992. Task 2 involved the verification of the code against experimentally available data. The data chosen for comparison was from an experiment involving the injection of helium from a wall jet. Results obtained in completing this task also show the sensitivity of the FDNS code to unknown conditions at the injection slot. This task was completed in September, 1992. Task 3 required the computation of the flow of hot exhaust gases through the P&W 40K subscale nozzle. Computations were performed both with and without film coolant injection. This task was completed in July, 1993. The FDNS program tends to overpredict heat fluxes, but, with suitable modeling of backside cooling, may give reasonable wall temperature predictions. For film cooling in the P&W 40K calorimeter subscale nozzle, the average wall temperature is reduced from 1750R to about 1050R by the film cooling. The average wall heat flux is reduced by a factor of 3.

  10. GlassForm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-09-16

    GlassForm is a software tool for generating preliminary waste glass formulas for a given waste stream. The software is useful because it reduces the number of verification melts required to develop a suitable additive composition. The software includes property models that calculate glass properties of interest from the chemical composition of the waste glass. The software includes property models for glass viscosity, electrical conductivity, glass transition temperature, and leach resistance as measured by the 7-day product consistency test (PCT).

  11. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of the fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
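
    A sketch of the smoothing step of the second method, assuming an isotropic power-law kernel with illustrative constants (the paper's calibrated ETAS parameters and fault geometry are not reproduced):

        import numpy as np

        def rate_map(epicenters, xs, ys, d0=5.0, q=1.5):
            # Spread each simulated event over the grid with a kernel
            # decaying as a power law of epicentral distance.
            gx, gy = np.meshgrid(xs, ys)
            rates = np.zeros_like(gx)
            for ex, ey in epicenters:
                r2 = (gx - ex) ** 2 + (gy - ey) ** 2
                rates += (1.0 + r2 / d0 ** 2) ** (-q)  # finite at r = 0
            return rates / rates.sum()                 # normalized rate map

        events = [(10.0, 20.0), (12.0, 21.0), (40.0, 5.0)]  # simulated epicenters (km)
        xs, ys = np.linspace(0.0, 50.0, 101), np.linspace(0.0, 30.0, 61)
        print(rate_map(events, xs, ys).shape)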

  12. Arithmetic and algebraic problem solving and resource allocation: the distinct impact of fluid and numerical intelligence.

    PubMed

    Dix, Annika; van der Meer, Elke

    2015-04-01

    This study investigates cognitive resource allocation dependent on fluid and numerical intelligence in arithmetic/algebraic tasks varying in difficulty. Sixty-six 11th grade students participated in a mathematical verification paradigm, while pupil dilation as a measure of resource allocation was collected. Students with high fluid intelligence solved the tasks faster and more accurately than those with average fluid intelligence, as did students with high compared to average numerical intelligence. However, fluid intelligence sped up response times only in students with average but not high numerical intelligence. Further, high fluid but not numerical intelligence led to greater task-related pupil dilation. We assume that fluid intelligence serves as a domain-general resource that helps to tackle problems for which domain-specific knowledge (numerical intelligence) is missing. The allocation of this resource can be measured by pupil dilation. Copyright © 2014 Society for Psychophysiological Research.

  13. JPL space robotics: Present accomplishments and future thrusts

    NASA Astrophysics Data System (ADS)

    Weisbin, C. R.; Hayati, S. A.; Rodriguez, G.

    1994-10-01

    Complex missions require routine and unscheduled inspection for safe operation. The purpose of research in this task is to facilitate structural inspection of the planned Space Station while mitigating the need for extravehicular activity (EVA), and giving the operator supervisory control over detailed and somewhat mundane, but important tasks. The telerobotic system enables inspection relative to a given reference (e.g., the status of the facility at the time of the last inspection) and alerts the operator to potential anomalies for verification and action. There are two primary objectives of this project: (1) To develop technologies that enable well-integrated NASA ground-to-orbit telerobotics operations, and (2) to develop a prototype common architecture workstation which implements these capabilities for other NASA technology projects and planned NASA flight applications. This task develops and supports three telerobot control modes which are applicable to time delay operation: Preview teleoperation, teleprogramming, and supervised autonomy.

  14. JPL space robotics: Present accomplishments and future thrusts

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Hayati, S. A.; Rodriguez, G.

    1994-01-01

    Complex missions require routine and unscheduled inspection for safe operation. The purpose of research in this task is to facilitate structural inspection of the planned Space Station while mitigating the need for extravehicular activity (EVA), and giving the operator supervisory control over detailed and somewhat mundane, but important tasks. The telerobotic system enables inspection relative to a given reference (e.g., the status of the facility at the time of the last inspection) and alerts the operator to potential anomalies for verification and action. There are two primary objectives of this project: (1) To develop technologies that enable well-integrated NASA ground-to-orbit telerobotics operations, and (2) to develop a prototype common architecture workstation which implements these capabilities for other NASA technology projects and planned NASA flight applications. This task develops and supports three telerobot control modes which are applicable to time delay operation: Preview teleoperation, teleprogramming, and supervised autonomy.

  15. A verification procedure for MSC/NASTRAN Finite Element Models

    NASA Technical Reports Server (NTRS)

    Stockwell, Alan E.

    1995-01-01

    Finite Element Models (FEM's) are used in the design and analysis of aircraft to mathematically describe the airframe structure for such diverse tasks as flutter analysis and actively controlled landing gear design. FEM's are used to model the entire airplane as well as airframe components. The purpose of this document is to describe recommended methods for verifying the quality of the FEM's and to specify a step-by-step procedure for implementing the methods.

  16. Space station System Engineering and Integration (SE and I). Volume 2: Study results

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A summary of significant study results that are products of the Phase B conceptual design task is presented. Major elements are addressed, and study results applicable to each major element or area of design are summarized and included where appropriate. Areas addressed include: system engineering and integration; customer accommodations; test and program verification; product assurance; conceptual design; operations and planning; the technical and management information system (TMIS); and advanced development.

  17. Thermal design and test verification of GALAXY evolution explorer (GALEX)

    NASA Technical Reports Server (NTRS)

    Wu, P. S.; Lee, S. -C.

    2002-01-01

    This paper describes the thermal control design of GALEX, an ultraviolet telescope that investigates the UV properties of local galaxies, history of star formation, and global causes of star formation and evolution.

  18. On the use of advanced numerical models for the evaluation of dosimetric parameters and the verification of exposure limits at workplaces.

    PubMed

    Catarinucci, L; Tarricone, L

    2009-12-01

    With the forthcoming transposition of the 2004/40/EC Directive, employers will become responsible for the electromagnetic field levels at the workplace. To make this task easier, the scientific community is compiling practical guidelines to be followed. This work aims at enriching such guidelines, especially on dosimetric issues. More specifically, some critical aspects of applying numerical dosimetric techniques to verify compliance with safety limits are highlighted. Three different aspects are considered: the dependence of the dosimetric parameters on the shape and inner characterisation of the exposed subject, their dependence on the numerical algorithm used, and the correlation between reference limits and basic restrictions. Results and discussions demonstrate how, even when sophisticated numerical techniques are used, a complex interpretation of the results is in some cases mandatory.

  19. On Biometrics With Eye Movements.

    PubMed

    Zhang, Youming; Juhola, Martti

    2017-09-01

    Eye movements are a relatively novel data source for biometric identification. As the video cameras used for eye tracking become smaller and more efficient, this data source could offer interesting opportunities for the development of eye movement biometrics. In this paper, we study primarily biometric identification, seen as a multi-class classification task, and secondarily biometric verification, considered as binary classification. Our research is based on saccadic eye movement measurements from 109 young subjects. To test the measured data, we use a procedure of biometric identification according to the one-versus-one (subject) principle. In a development from our previous research, which also involved biometric verification based on saccadic eye movements, we now apply an eye movement tracker with a higher sampling frequency of 250 Hz. The results obtained are good, with correct identification rates of 80-90% at their best.
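
    As a sketch of that classification setup, the snippet below trains a one-versus-one multi-class classifier on synthetic stand-in features using scikit-learn; the extraction of saccade features from 250 Hz tracker signals and the paper's actual classifiers are not reproduced.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.multiclass import OneVsOneClassifier
        from sklearn.svm import LinearSVC

        # Hypothetical per-saccade feature vectors (amplitude, peak
        # velocity, duration, ...) for a handful of subjects.
        rng = np.random.default_rng(0)
        n_subj, n_per, n_feat = 5, 40, 8
        X = np.vstack([rng.normal(loc=s, scale=1.0, size=(n_per, n_feat))
                       for s in range(n_subj)])
        y = np.repeat(np.arange(n_subj), n_per)

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = OneVsOneClassifier(LinearSVC(dual=False)).fit(Xtr, ytr)
        print('identification accuracy:', clf.score(Xte, yte))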

  20. Low cost solar array project silicon materials task. Development of a process for high capacity arc heater production of silicon for solar arrays

    NASA Technical Reports Server (NTRS)

    Fey, M. G.

    1981-01-01

    The experimental verification system for the production of silicon via arc heater-sodium reduction of SiCl4 was designed, fabricated, installed, and operated. Each of the attendant subsystems was checked out and operated to ensure performance requirements. These subsystems included: the arc heaters/reactor, the cooling water system, the gas system, the power system, the control and instrumentation system, the Na injection system, the SiCl4 injection system, the effluent disposal system, and the gas burnoff system. Prior to introducing the reactants (Na and SiCl4) to the arc heater/reactor, a series of gas-only power tests was conducted to establish the operating parameters of the system's three arc heaters. Following the successful completion of the gas-only power tests and the readiness tests of the sodium and SiCl4 injection systems, a shakedown test of the complete experimental verification system was conducted.

  1. [Verification of ballast water exchange for international ships anchored in Xiamen Port by CDOM tracer].

    PubMed

    Li, Chao; Zhang, Yan-po; Guo, Wei-dong; Zhu, Yue; Xu, Jing; Deng, Xun

    2010-09-01

    Fluorescence excitation-emission matrix (EEM) and absorption spectroscopy were applied to study the optical properties of 29 CDOM samples collected from different ballast tanks of nine international-route vessels anchored in Xiamen Port between October 2007 and April 2008. The purpose was to examine the feasibility of these spectral properties as tracers for verifying whether these vessels followed the mid-ocean ballast water exchange (BWE) regulation. Using parallel factor analysis, four fluorescent components were identified, including two humic-like components (C1: 245, 300/386 nm; C2: 250, 345/458 nm) and two protein-like components (C3: 220, 275/306 nm; C4: 235, 290/345 nm), of which component C2 was a suitable fluorescence verification indicator. The vertical distribution of all fluorescent components in the ballast tanks was nearly uniform, indicating that profile-mixed sampling is preferable. Combined use of component C2, the spectral slope ratio (SR) of absorption spectroscopy, and salinity may provide reasonable verification of whether BWE was carried out by these nine ships. The results suggest that the combined use of multiple parameters (fluorescence, absorption and salinity) is much more reliable for determining the origin of ballast water, and provides a technical basis for fast examination of ballast water exchange in Chinese ports.
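
    As a toy illustration of the suggested multi-parameter verification, the rule below combines the three indicators (salinity, the humic-like component C2, and the absorption slope ratio SR); every threshold here is an illustrative assumption, not a value from the study.

        def bwe_verified(salinity, c2_intensity, slope_ratio):
            # Majority vote over three independent indicators of
            # open-ocean (exchanged) versus coastal/port water.
            checks = [salinity >= 30.0,      # oceanic water is saline
                      c2_intensity <= 0.05,  # little terrestrial humic CDOM
                      slope_ratio >= 1.5]    # offshore-like SR values
            return sum(checks) >= 2

        print(bwe_verified(salinity=33.1, c2_intensity=0.02, slope_ratio=2.1))  # True
        print(bwe_verified(salinity=12.0, c2_intensity=0.30, slope_ratio=0.9))  # False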

  2. Draft Plan for Characterizing Commercial Data Products in Support of Earth Science Research

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Terrie, Greg; Berglund, Judith

    2006-01-01

    This presentation introduces a draft plan for characterizing commercial data products for Earth science research. The general approach to commercial product verification and validation includes the focused selection of readily available commercial remote sensing products that support Earth science research. Ongoing product verification and characterization will question whether a product meets its specifications and will examine its fundamental properties, potential, and limitations. Validation will encourage product evaluation for specific science research and applications. Specific commercial products included in the characterization plan are high-spatial-resolution multispectral (HSMS) imagery and LIDAR data products. Future efforts in this process will include briefing NASA Headquarters and modifying plans based on feedback, increased engagement with the science community and refinement of details, coordination with commercial vendors and the Joint Agency Commercial Imagery Evaluation (JACIE) for HSMS satellite acquisitions, acquiring waveform LIDAR data, and performing verification and validation.

  3. Stratway: A Modular Approach to Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Hagen, George E.; Butler, Ricky W.; Maddalon, Jeffrey M.

    2011-01-01

    In this paper we introduce Stratway, a modular approach to finding long-term strategic resolutions to conflicts between aircraft. The modular approach provides both advantages and disadvantages. Our primary concern is to investigate the implications on the verification of safety-critical properties of a strategic resolution algorithm. By partitioning the problem into verifiable modules, much stronger verification claims can be established. Since strategic resolution involves searching for solutions over an enormous state space, Stratway, like most similar algorithms, searches these spaces by applying heuristics, which present especially difficult verification challenges. An advantage of a modular approach is that it makes a clear distinction between the resolution function and the trajectory generation function, which allows the resolution computation to be independent of any particular vehicle. The Stratway algorithm was developed in both Java and C++ and is available through an open-source license. Additionally, there is a visualization application that is helpful for analyzing and quickly creating conflict scenarios.

  4. Automated Analysis of Stateflow Models

    NASA Technical Reports Server (NTRS)

    Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsai, Temesghen; Thirioux, Xavier

    2017-01-01

    Stateflow is a widely used modeling framework for embedded and cyber-physical systems in which control software interacts with physical processes. In this work, we present a fully automated safety verification framework for Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing-style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.

  5. Social Cognition Psychometric Evaluation: Results of the Initial Psychometric Study

    PubMed Central

    Pinkham, Amy E.; Penn, David L.; Green, Michael F.; Harvey, Philip D.

    2016-01-01

    Measurement of social cognition in treatment trials remains problematic due to poor and limited psychometric data for many tasks. As part of the Social Cognition Psychometric Evaluation (SCOPE) study, the psychometric properties of 8 tasks were assessed. One hundred and seventy-nine stable outpatients with schizophrenia and 104 healthy controls completed the battery at baseline and after a 2–4-week retest interval at 2 sites. Tasks included the Ambiguous Intentions Hostility Questionnaire (AIHQ), Bell Lysaker Emotion Recognition Task (BLERT), Penn Emotion Recognition Task (ER-40), Relationships Across Domains (RAD), Reading the Mind in the Eyes Task (Eyes), The Awareness of Social Inferences Test (TASIT), Hinting Task, and Trustworthiness Task. Tasks were evaluated on: (i) test-retest reliability, (ii) utility as a repeated measure, (iii) relationship to functional outcome, (iv) practicality and tolerability, (v) sensitivity to group differences, and (vi) internal consistency. The BLERT and Hinting Task showed the strongest psychometric properties across all evaluation criteria and are recommended for use in clinical trials. The ER-40, Eyes Task, and TASIT showed somewhat weaker psychometric properties and require further study. The AIHQ, RAD, and Trustworthiness Task showed poorer psychometric properties that suggest caution for their use in clinical trials. PMID:25943125

  6. Lightweight Small Arms Technologies

    DTIC Science & Technology

    2006-11-01

    conducted using several methods. Initial measurements were obtained using a strand burner, followed by closed bomb measurements using both pressed pellets and entire cases. Specialized fixtures were developed to measure primer and booster combustion properties. The final verification of interior...

  7. Control of embankment settlement field verification on PCPT prediction methods.

    DOT National Transportation Integrated Search

    2011-07-01

    Piezocone penetration tests (PCPT) have been widely used by geotechnical engineers for subsurface investigation and evaluation of different soil properties such as strength and deformation characteristics of the soil. This report focuses on the v...

  8. The VATES-Diamond as a Verifier's Best Friend

    NASA Astrophysics Data System (ADS)

    Glesner, Sabine; Bartels, Björn; Göthel, Thomas; Kleine, Moritz

    Within a model-based software engineering process, it must be ensured that properties of abstract specifications are preserved by transformations down to executable code. This is even more important in the area of safety-critical real-time systems, where non-functional properties are also crucial. In the VATES project, we develop formal methods for the construction and verification of embedded systems. We follow a novel approach that allows us to formally relate abstract process-algebraic specifications to their implementation in a compiler intermediate representation. The idea is to extract a low-level process-algebraic description from the intermediate code and to formally relate it to previously developed abstract specifications. We apply this approach to a case study from the area of real-time operating systems and show that the approach has the potential to seamlessly integrate the modeling, implementation, transformation and verification stages of embedded system development.

  9. [Task sharing with radiotherapy technicians in image-guided radiotherapy].

    PubMed

    Diaz, O; Lorchel, F; Revault, C; Mornex, F

    2013-10-01

    The development of accelerators with on-board imaging systems now allows better repositioning of target volumes at the time of irradiation (image-guided radiotherapy [IGRT]). However, these technological advances in repositioning control have multiplied the tasks of every actor in radiotherapy and increased the time required for treatment, whether for radiotherapy technicians or radiation oncologists. Although there is currently no explicit regulatory framework governing the use of IGRT, some institutional experiments show that a transfer of the on-line verification of positioning images is possible from radiation oncologists to radiotherapy technicians. Initial training for every technician, together with procedures drafted within each institution, will improve the quality of these checks by reducing interindividual variability. Copyright © 2013. Published by Elsevier SAS.

  10. NASA Aerospace Flight Battery Program: Wet Life of Nickel-Hydrogen (Ni-H2) Batteries. Volume 2, Part 3; Appendices

    NASA Technical Reports Server (NTRS)

    Jung, David S.; Lee, Leonine S.; Manzo, Michelle A.

    2010-01-01

    The NASA Aerospace Flight Battery Systems Working Group was chartered within the NASA Engineering and Safety Center (NESC). The working group was tasked to complete assigned projects and to propose proactive work to address battery-related, agency-wide issues on an annual basis. In its first year of operation, this proactive program addressed various aspects of the validation and verification of aerospace battery systems for NASA missions. Studies were performed, issues were discussed and, in many cases, test programs were executed to generate recommendations and guidelines to reduce the risk associated with various aspects of implementing battery technology in the aerospace industry. This document contains Part 3 - Volume II, Appendices to Part 3 - Volume I.

  11. NASA Aerospace Flight Battery Program: Recommendations for Technical Requirements for Inclusion in Aerospace Battery Procurements. Volume 2/Part 2

    NASA Technical Reports Server (NTRS)

    Jung, David S.; Manzo, Michelle A.

    2010-01-01

    The NASA Aerospace Flight Battery Systems Working Group was chartered within the NASA Engineering and Safety Center (NESC). The working group was tasked to complete assigned projects and to propose proactive work to address battery-related, agency-wide issues on an annual basis. In its first year of operation, this proactive program addressed various aspects of the validation and verification of aerospace battery systems for NASA missions. Studies were performed, issues were discussed and, in many cases, test programs were executed to generate recommendations and guidelines to reduce the risk associated with various aspects of implementing battery technology in the aerospace industry. This document contains Part 2 - Volume II, Appendix A to Part 2 - Volume I.

  12. Modeling and stability analysis for the upper atmosphere research satellite auxiliary array switch component

    NASA Technical Reports Server (NTRS)

    Wolfgang, R.; Natarajan, T.; Day, J.

    1987-01-01

    A feedback control system, called an auxiliary array switch, was designed to connect or disconnect auxiliary solar panel segments from a spacecraft electrical bus to meet fluctuating demand for power. A simulation of the control system was used to carry out a number of design and analysis tasks that could not economically be performed with a breadboard of the hardware. These tasks included: (1) the diagnosis of a stability problem, (2) identification of parameters to which the performance of the control system was particularly sensitive, (3) verification that the response of the control system to anticipated fluctuations in the electrical load of the spacecraft was satisfactory, and (4) specification of limitations on the frequency and amplitude of the load fluctuations.

  13. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of Task 3 is to provide the additional analysis and insight necessary to support key design and programmatic decisions for options quantification and selection for system definition. This includes: (1) the identification of key trade study topics; (2) the definition of a trade study procedure for each topic (issues to be resolved, key inputs, criteria/weighting, methodology); (3) the conduct of tradeoff and sensitivity analyses; and (4) the review/verification of results within the context of the evolving system design and definition. The trade study topics addressed in this volume include space autonomy and function automation, software transportability, system network topology, communications standardization, onboard local area networking, distributed operating systems, software configuration management, and the software development environment facility.

  14. Seismological investigation of the National Data Centre Preparedness Exercise 2013

    NASA Astrophysics Data System (ADS)

    Gestermann, Nicolai; Hartmann, Gernot; Ross, J. Ole; Ceranna, Lars

    2015-04-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT) prohibits all nuclear explosions conducted on Earth - underground, underwater or in the atmosphere. The verification regime of the CTBT is designed to detect any treaty violation. While the data of the International Monitoring System (IMS) are collected, processed and technically analyzed at the International Data Centre (IDC) of the CTBT Organization, National Data Centres (NDC) of the member states provide interpretation and advice to their governments concerning suspicious detections. The NDC Preparedness Exercises (NPE) are performed regularly with fictitious treaty violations to practice the combined analysis of CTBT verification technologies. These exercises help to evaluate, for example, the effectiveness of the analysis procedures applied at NDCs and the quality, completeness and usefulness of IDC products. The exercise trigger of NPE2013 was a combination of a spatio-temporal indication pointing to a certain waveform event and simulated radionuclide concentrations generated by forward atmospheric transport modelling based on a fictitious release. For the waveform event, the date (4 Sept. 2013) was given and the region was communicated in a map showing the fictitious state of "Frisia" at the coast of the North Sea in Central Europe. The potential connection between the waveform and radionuclide evidence remained unclear to exercise participants. The verification task was to identify the waveform event and to investigate potential sources of the radionuclide findings. The final question was whether the findings were CTBT-relevant and justified a request for an On-Site Inspection (OSI) in "Frisia". The seismic event was not included in the Reviewed Event Bulletin (REB) of the IDC. The available detections from the closest seismic IMS stations led to an epicenter accuracy of about 24 km, which is not sufficient to specify the 1000 km2 inspection area in case of an OSI. Using data from local stations and adjusted velocity models, the epicenter accuracy could be improved to less than 2 km, which demonstrates the crucial role of national technical means for verification tasks. The seismic NPE2013 event could be identified as induced by natural gas production in the source region; similar waveforms and spectral characteristics comparable to a set of events in the same region are clear indications. The scenario of a possible treaty violation at the location of the seismic NPE2013 event could thus be disproved.

  15. Real-Time Reliability Verification for UAV Flight Control System Supporting Airworthiness Certification.

    PubMed

    Xu, Haiyang; Wang, Ping

    2016-01-01

    In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with the airworthiness certification standard, we proposed a model-based integration framework for the modeling and verification of timing properties. Building on the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. According to the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can be verified using existing formal tools. For the real-time specifications of the software system, we also proposed a generation algorithm for temporal logic formulas, which automatically extracts real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled a simplified UAV flight control system to check its real-time properties. The results showed that the framework can be used to create the system model, as well as to precisely analyze and verify the real-time reliability of the UAV flight control system.

  16. Real-Time Reliability Verification for UAV Flight Control System Supporting Airworthiness Certification

    PubMed Central

    Xu, Haiyang; Wang, Ping

    2016-01-01

    In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with the airworthiness certification standard, we proposed a model-based integration framework for the modeling and verification of timing properties. Building on the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. According to the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can be verified using existing formal tools. For the real-time specifications of the software system, we also proposed a generation algorithm for temporal logic formulas, which automatically extracts real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled a simplified UAV flight control system to check its real-time properties. The results showed that the framework can be used to create the system model, as well as to precisely analyze and verify the real-time reliability of the UAV flight control system. PMID:27918594
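
    As a hedged illustration of the kind of formula such a generation algorithm emits (the predicate names and time bound are hypothetical, not from the paper), a bounded-response requirement extracted from a time-sensitive LSC could be written in metric temporal logic as

        \Box \big( \mathit{cmdReceived} \rightarrow \Diamond_{\le 20\,\mathrm{ms}}\, \mathit{actuatorResponse} \big)

    read as: whenever a command is received, an actuator response must follow within 20 ms.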

  17. Methodology for the specification of communication activities within the framework of a multi-layered architecture: Toward the definition of a knowledge base

    NASA Astrophysics Data System (ADS)

    Amyay, Omar

    A method defined in terms of synthesis and verification steps is presented. The specification of the services and protocols of communication within a multilayered architecture of the Open Systems Interconnection (OSI) type is an essential issue for the design of computer networks. The aim is to obtain an operational specification of the protocol service couple of a given layer. Planning synthesis and verification steps constitute a specification trajectory. The latter is based on the progressive integration of the 'initial data' constraints and verification of the specification originating from each synthesis step, through validity constraints that characterize an admissible solution. Two types of trajectories are proposed according to the style of the initial specification of the service protocol couple: operational type and service supplier viewpoint; knowledge property oriented type and service viewpoint. Synthesis and verification activities were developed and formalized in terms of labeled transition systems, temporal logic and epistemic logic. The originality of the second specification trajectory and the use of the epistemic logic are shown. An 'artificial intelligence' approach enables a conceptual model to be defined for a knowledge base system for implementing the method proposed. It is structured in three levels of representation of the knowledge relating to the domain, the reasoning characterizing synthesis and verification activities and the planning of the steps of a specification trajectory.

  18. Experimental Verification of a Theoretical Loading Function Describing Momentum Transfer from an Explosion to a Tree Stem

    DTIC Science & Technology

    1976-01-01

    ...phases of the study were under the general supervision of Messrs. W. G. Shockley, Chief, Mobility and Environmental Systems Laboratory (MESL), and W. E. Grabau, former Chief, Environmental Systems Division (ESD) and now Special Assistant, MESL, and under the direct supervision of Mr. J. K...

  19. Solar power satellite system definition study, phase 2.

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A program plan for the Solar Power Satellite Program is presented. The plan includes research, development, and evaluation phase, engineering and development and cost verification phase, prototype construction, and commercialization. Cost estimates and task requirements are given for the following technology areas: (1) solar arrays; (2) thermal engines and thermal systems; (3) power transmission (to earth); (4) large space structures; (5) materials technology; (6) system control; (7) space construction; (8) space transportation; (9) power distribution, and space environment effects.

  20. Design, fabrication and test of graphite/polyimide composite joints and attachments for advanced aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Koumal, D. E.

    1979-01-01

    The design and evaluation of built-up attachments and bonded joint concepts for use at elevated temperatures is documented. Joint concept screening, verification of GR/PI material, fabrication of design allowables panels, definition of test matrices, and analysis of bonded and bolted joints are among the tasks completed. The results provide data for the design and fabrication of lightly loaded components for advanced space transportation systems and high speed aircraft.

  1. Particle Tracking Model Transport Process Verification: Diffusion Algorithm

    DTIC Science & Technology

    2015-07-01

    sediment densities in space and time along with final particle fates (Demirbilek et al. 2004; Davies et al. 2005; McDonald et al. 2006; Lackey and McDonald 2007). Although a versatile model currently utilized in various coastal, estuarine, and riverine applications, PTM is specifically designed to...

  2. Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation

    PubMed Central

    Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan

    2015-01-01

    Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF’s functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182

  3. Deductive derivation and turing-computerization of semiparametric efficient estimation.

    PubMed

    Frangakis, Constantine E; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan

    2015-12-01

    Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. © 2015, The International Biometric Society.
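
    To make the "EIF is, in principle, a derivative" point concrete: for a functional ψ(P), the EIF at a point x is the Gateaux derivative of ψ along the path (1−ε)P + εδ_x at ε = 0. The sketch below is not the authors' algorithm, just the textbook identity; it checks the numerical derivative against the known closed form for the mean, EIF(x) = x − E[X].

        # Minimal sketch: the efficient influence function as a Gateaux
        # derivative, d/de psi((1-e)P + e*delta_x) at e = 0, evaluated
        # numerically for the mean functional and checked against the
        # known closed form EIF(x) = x - E[X].
        import numpy as np

        rng = np.random.default_rng(1)
        sample = rng.normal(loc=2.0, scale=1.0, size=10_000)  # empirical P

        def psi(weights, data):
            """Mean functional under a reweighted empirical distribution."""
            return np.sum(weights * data)

        def eif_numeric(x, data, eps=1e-6):
            n = len(data)
            base = np.full(n, 1.0 / n)
            # Path (1-eps)*P_n + eps*delta_x: shrink weights, add mass at x.
            pert_data = np.append(data, x)
            pert_w = np.append((1.0 - eps) * base, eps)
            return (psi(pert_w, pert_data) - psi(base, data)) / eps

        x0 = 5.0
        print("numeric EIF:  %.4f" % eif_numeric(x0, sample))
        print("closed form:  %.4f" % (x0 - sample.mean()))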

  4. Automated radiotherapy treatment plan integrity verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Deshan; Moore, Kevin L.

    2012-03-15

    Purpose: In our clinic, physicists spend from 15 to 60 min to verify the physical and dosimetric integrity of radiotherapy plans before presentation to radiation oncology physicians for approval. The purpose of this study was to design and implement a framework to automate as many elements of this quality control (QC) step as possible. Methods: A comprehensive computer application was developed to carry out a majority of these verification tasks in the Philips PINNACLE treatment planning system (TPS). This QC tool functions based on both PINNACLE scripting elements and PERL sub-routines. The core of this technique is the method of dynamic scripting, which involves a PERL programming module that is flexible and powerful for treatment plan data handling. Run-time plan data are collected, saved into temporary files, and analyzed against standard values and predefined logical rules. The results were summarized in a hypertext markup language (HTML) report that is displayed to the user. Results: This tool has been in clinical use for over a year. The occurrence frequency of technical problems, which would cause delays and suboptimal plans, has been reduced since clinical implementation. Conclusions: In addition to drastically reducing the set of human-driven logical comparisons, this QC tool also accomplished some tasks that are otherwise either quite laborious or impractical for humans to verify, e.g., identifying conflicts amongst IMRT optimization objectives.
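
    The abstract describes rule-based comparison of run-time plan data against standard values and logical rules; the pattern is easy to sketch. The toy below is in Python rather than the Pinnacle-scripting/PERL combination the authors used, and every field name and tolerance is hypothetical.

        # Minimal sketch of rule-based plan-integrity checking: plan data
        # are tested against predefined logical rules and failures are
        # collected into a report. Field names and tolerances are made up.
        plan = {
            "prescription_dose_cGy": 6000,
            "fractions": 30,
            "dose_per_fraction_cGy": 200,
            "couch_angle_deg": 0.0,
            "max_mu_per_beam": 480,
        }

        rules = [
            ("dose per fraction consistent with prescription",
             lambda p: p["prescription_dose_cGy"]
                       == p["fractions"] * p["dose_per_fraction_cGy"]),
            ("couch at zero for a coplanar plan",
             lambda p: abs(p["couch_angle_deg"]) < 0.1),
            ("beam MU below sanity limit",
             lambda p: p["max_mu_per_beam"] < 1000),
        ]

        failures = [name for name, check in rules if not check(plan)]
        print("\n".join("FAIL: " + f for f in failures) or "All checks passed.")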

  5. Phonological and semantic processing during comprehension in Wernicke's aphasia: An N400 and Phonological Mapping Negativity Study.

    PubMed

    Robson, Holly; Pilkington, Emma; Evans, Louise; DeLuca, Vincent; Keidel, James L

    2017-06-01

    Comprehension impairments in Wernicke's aphasia are thought to result from a combination of impaired phonological and semantic processes. However, the relationship between these cognitive processes and language comprehension has only been inferred through offline neuropsychological tasks. This study used ERPs to investigate phonological and semantic processing during online single-word comprehension. EEG was recorded from a group of individuals with Wernicke's aphasia (n = 8) and control participants (n = 10) while they performed a word-picture verification task. The N400 and Phonological Mapping Negativity/Phonological Mismatch Negativity (PMN) event-related potential components were investigated as indices of semantic and phonological processing, respectively. Individuals with Wernicke's aphasia displayed reduced and inconsistent N400 and PMN effects in comparison to control participants. Reduced N400 effects in the WA group were simulated in the control group by artificially degrading speech perception. Correlation analyses in the Wernicke's aphasia group found that PMN but not N400 amplitude was associated with behavioural word-picture verification performance. The results confirm impairments at both phonological and semantic stages of comprehension in Wernicke's aphasia. However, reduced N400 responses in Wernicke's aphasia are at least partially attributable to earlier phonological processing impairments. The results provide further support for the traditional model of Wernicke's aphasia, which claims a causative link between phonological processing and language comprehension impairments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Propel: Tools and Methods for Practical Source Code Model Checking

    NASA Technical Reports Server (NTRS)

    Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem

    2003-01-01

    The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset are called Propel. It builds upon the Java PathFinder (JPF) tool, an explicit-state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaptation of existing best design practices that has the desired side effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel capabilities on customer applications.

  7. Optical properties of fly ash. Volume 1, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Self, S.A.

    1994-12-01

    Research performed under this contract was divided into four tasks under the following headings: Task 1, Characterization of fly ash; Task 2, Measurements of the optical constants of slags; Task 3, Calculations of the radiant properties of fly ash dispersions; and Task 4, Measurements of the radiant properties of fly ash dispersions. Tasks 1 and 4 constituted the Ph.D. research topic of Sarbajit Ghosal, while Tasks 2 and 3 constituted the Ph.D. research topic of Jon Ebert. Together their doctoral dissertations give a complete account of the work performed. This final report, issued in two volumes, consists of an executive summary of the whole program followed by the dissertation of Ghosal. Volume 1 contains the dissertation of Ghosal, which covers the characterization of fly ash and the measurements of the radiant properties of fly ash dispersions. A list of publications and conference presentations resulting from the work is also included.

  8. Creep-fatigue life prediction for engine hot section materials (isotropic)

    NASA Technical Reports Server (NTRS)

    Moreno, V.

    1982-01-01

    The objectives of this program are the investigation of fundamental approaches to high temperature crack initiation life prediction, the identification of specific modeling strategies, and the development of specific models for component-relevant loading conditions. A survey of the hot section material/coating systems used throughout the gas turbine industry is included. Two material/coating systems will be identified for the program. The material/coating system designated as the base system shall be used throughout Tasks 1-12. The alternate material/coating system will be used only in Task 12 for further evaluation of the models developed on the base material. In Task 2, candidate life prediction approaches will be screened based on a set of criteria that includes experience with the approaches within the literature, correlation with isothermal data generated on the base material, and judgements relative to the applicability of the approach for the complex cycles to be considered in the option program. The two most promising approaches will be identified. Task 3 further evaluates the best approach using additional base material fatigue testing, including verification tests. Task 4 consists of technical, schedule, financial, and all other reporting requirements in accordance with the Reports of Work clause.

  9. Development of an inpatient operational pharmacy productivity model.

    PubMed

    Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M

    2015-02-01

    An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight", reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models: one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
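
    A minimal sketch of the weighted-verifications calculation described above (the medication classes, weights and counts are hypothetical; the paper's actual weights are not given in the abstract):

        # Minimal sketch of the weighted-verifications (WV) model: each
        # verified order is weighted by the complexity of its medication
        # class; productivity is weighted verifications per worked hour.
        complexity_weight = {          # relative effort per order class
            "oral_solid": 1.0,
            "iv_admixture": 3.5,
            "chemotherapy": 6.0,
        }
        verified_counts = {"oral_solid": 420, "iv_admixture": 130,
                           "chemotherapy": 25}
        worked_hours = 24.0

        wv = sum(complexity_weight[c] * n for c, n in verified_counts.items())
        print("weighted verifications: %.1f" % wv)
        print("productivity: %.2f WV/hour" % (wv / worked_hours))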

  10. Formal development of a clock synchronization circuit

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.

    1995-01-01

    This talk presents the latest stage in the formal development of a fault-tolerant clock synchronization circuit. The development spans from a high-level specification of the required properties to a circuit realizing the core function of the system. An abstract description of the algorithm has been verified to satisfy the high-level properties using the mechanical verification system EHDM. This abstract description is recast as a behavioral specification input to the Digital Design Derivation (DDD) system developed at Indiana University. DDD provides a formal design algebra for developing correct digital hardware. Using DDD as the principal design environment, a core circuit implementing the clock synchronization algorithm was developed. The design process consisted of standard DDD transformations augmented with an ad hoc refinement justified using the Prototype Verification System (PVS) from SRI International. Subsequent to the above development, Wilfredo Torres-Pomales discovered an area-efficient realization of the same function. Establishing correctness of this optimization requires reasoning in arithmetic, so a general verification is outside the domain of both DDD transformations and model-checking techniques. DDD represents digital hardware by systems of mutually recursive stream equations. A collection of PVS theories was developed to aid in reasoning about DDD-style streams. These theories include a combinator for defining streams that satisfy stream equations, and a means for proving stream equivalence by exhibiting a stream bisimulation. DDD was used to isolate the sub-system involved in Torres-Pomales' optimization. The equivalence between the original design and the optimized version was verified in PVS by exhibiting a suitable bisimulation. The verification depended upon type constraints on the input streams and made extensive use of the PVS type system. The dependent types in PVS provided a useful mechanism for defining an appropriate bisimulation.
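
    For reference, the generic coinductive definition behind such a proof (standard stream theory, not the PVS development itself): a relation R ⊆ S^ω × S^ω is a stream bisimulation when

        \forall (s,t) \in R:\quad \mathrm{head}(s) = \mathrm{head}(t) \;\wedge\; \big(\mathrm{tail}(s), \mathrm{tail}(t)\big) \in R

    and exhibiting a bisimulation containing the pair of output streams of the two designs establishes their equivalence, which is the proof obligation discharged in PVS.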

  11. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

    NASA Astrophysics Data System (ADS)

    Huang, Laura X.; Isaac, George A.; Sheng, Grant

    2014-01-01

    This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short-range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high-frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid spacing) were selected as background gridded data for generating the integrated nowcasts. Seven forecast variables (temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate) are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models at 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected variables, regardless of location. This conclusion is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
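
    The multi-categorical Heidke skill score underlying this comparison measures how much the forecast accuracy exceeds that expected by chance from the marginal frequencies; the sketch below computes it from a synthetic forecast/observation contingency table (the counts are made up, not SNOW-V10 data).

        # Minimal sketch: multi-category Heidke skill score (HSS) from a
        # contingency table counts[i, j] = (forecast category i,
        # observed category j). HSS = (accuracy - chance) / (1 - chance).
        import numpy as np

        counts = np.array([[50, 10,  5],
                           [12, 40,  8],
                           [ 3,  9, 30]], dtype=float)   # synthetic counts

        p = counts / counts.sum()
        accuracy = np.trace(p)                            # sum_i p_ii
        chance = np.sum(p.sum(axis=1) * p.sum(axis=0))    # sum_i p_i. * p_.i
        hss = (accuracy - chance) / (1.0 - chance)
        print("HSS = %.3f" % hss)  # 1 = perfect, 0 = no skill beyond chance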

  12. Mathematics learning disabilities in girls with fragile X or Turner syndrome during late elementary school.

    PubMed

    Murphy, Melissa M; Mazzocco, Michèle M M

    2008-01-01

    The present study focuses on math and related skills among 32 girls with fragile X (n = 14) or Turner (n = 18) syndrome during late elementary school. Performance in each syndrome group was assessed relative to Full Scale IQ-matched comparison groups of girls from the general population (n = 32 and n = 89 for fragile X syndrome and Turner syndrome, respectively). Differences between girls with fragile X and their comparison group emerged on untimed arithmetic calculations, mastery of counting skills, and arithmetic problem verification accuracy. Relative to girls in the comparison group, girls with Turner syndrome did not differ on untimed arithmetic calculations or problem verification accuracy, but they had limited mastery of counting skills and longer response times to complete the problem verification task. Girls with fragile X or Turner syndrome also differed from their respective comparison groups on math-related abilities, including visual-spatial, working memory, and reading skills, and the associations between math and those related skills. Together, these findings support the notion that difficulty with math and related skills among girls with fragile X or Turner syndrome continues into late elementary school and that the profile of math and related skill difficulty distinguishes the two syndrome groups from each other.

  13. Imageability effects on sentence judgement by right-brain-damaged adults

    PubMed Central

    Lederer, Lisa Guttentag; Scott, April Gibbs; Tompkins, Connie A.; Dickey, Michael W.

    2009-01-01

    Background For decades researchers assumed visual image generation was the province of the right hemisphere. The lack of corresponding evidence was only recently noted, yet conflicting results still leave open the possibility that the right hemisphere plays a role. This study assessed imagery generation in adult participants with and without right hemisphere damage (RHD). Imagery was operationalised as the activation of representations retrieved from long-term memory similar to those that underlie sensory experience, in the absence of the usual sensory stimulation, and in the presence of communicative stimuli. Aims The primary aim of the study was to explore the widely held belief that there is an association between the right hemisphere and imagery generation ability. We also investigated whether visual and visuo-motor imagery generation abilities differ in adults with RHD. Methods & Procedures Participants included 34 adults with unilateral RHD due to cerebrovascular accident and 38 adults who served as non-brain-damaged (NBD) controls. To assess the potential effects of RHD on the processing of language stimuli that differ in imageability, participants performed an auditory sentence verification task. Participants listened to high- and low-imageability sentences from Eddy and Glass (1981) and indicated whether each sentence was true or false. The dependent measures for this task were performance accuracy and response times (RT). Outcomes & Results In general, accuracy was higher, and response time lower, for low-imagery than for high-imagery items. Although NBD participants’ RTs for low-imagery items were significantly faster than those for high-imagery items, this difference disappeared in the group with RHD. We confirmed that this result was not due to a speed–accuracy trade-off or to syntactic differences between stimulus sets. A post hoc analysis also suggested that the group with RHD was selectively impaired in motor, rather than visual, imagery generation. Conclusions The disproportionately high RT of participants with RHD in response to low-imagery items suggests that these items had other properties that made their verification difficult for this population. The nature and extent of right hemisphere patients’ deficits in processing different types of imagery should be considered. In addition, the capacity of adults with RHD to generate visual and motor imagery should be investigated separately in future studies. PMID:20054429

  14. FDNS code to predict wall heat fluxes or wall temperatures in rocket nozzles

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1993-01-01

    This report summarizes the findings on the NASA contract NAG8-212, Task No. 3. The overall project consists of three tasks, all of which have been successfully completed. In addition, some supporting supplemental work, not required by the contract, has been performed and is documented herein. Task 1 involved the modification of the wall functions in the code FDNS to use a Reynolds Analogy-based method. Task 2 involved the verification of the code against experimentally available data. The data chosen for comparison was from an experiment involving the injection of helium from a wall jet. Results obtained in completing this task also show the sensitivity of the FDNS code to unknown conditions at the injection slot. Task 3 required computation of the flow of hot exhaust gases through the P&W 40K subscale nozzle. Computations were performed both with and without film coolant injection. The FDNS program tends to overpredict heat fluxes, but, with suitable modeling of backside cooling, may give reasonable wall temperature predictions. For film cooling in the P&W 40K calorimeter subscale nozzle, the average wall temperature is reduced from 1750 R to about 1050 R by the film cooling. The average wall heat flux is reduced by a factor of three.

  15. Implementing Kanban for agile process management within the ALMA Software Operations Group

    NASA Astrophysics Data System (ADS)

    Reveco, Johnny; Mora, Matias; Shen, Tzu-Chiang; Soto, Ruben; Sepulveda, Jorge; Ibsen, Jorge

    2014-07-01

    After the inauguration of the Atacama Large Millimeter/submillimeter Array (ALMA), the Software Operations Group in Chile has refocused its objectives to: (1) providing software support to tasks related to System Integration, Scientific Commissioning and Verification, as well as Early Science observations; (2) testing the remaining software features, still under development by the Integrated Computing Team across the world; and (3) designing and developing processes to optimize and increase the level of automation of operational tasks. Because they serve different stakeholders, these tasks vary widely in importance, lifespan and complexity. Aiming to provide the proper priority and traceability for every task without stressing our engineers, we introduced the Kanban methodology into our processes in order to balance the demand on the team against the throughput of the delivered work. The aim of this paper is to share the experience gained during the implementation of Kanban in our processes, describing the difficulties we encountered and the solutions and adaptations that led to our current, still evolving implementation, which has greatly improved our throughput, prioritization and problem traceability.

  16. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations.

    PubMed

    Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S

    2018-04-01

    This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data include behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompass individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data include the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  17. Verification of a Method for Measuring Parkinson's Disease Related Temporal Irregularity in Spiral Drawings.

    PubMed

    Aghanavesi, Somayeh; Memedi, Mevludin; Dougherty, Mark; Nyholm, Dag; Westin, Jerker

    2017-10-13

    Parkinson's disease (PD) is a progressive movement disorder caused by the death of dopamine-producing cells in the midbrain. There is a need for frequent symptom assessment, since the treatment needs to be individualized as the disease progresses. The aim of this paper was to verify and further investigate the clinimetric properties of an entropy-based method for measuring PD-related upper-limb temporal irregularities during spiral drawing tasks. More specifically, properties of a temporal irregularity score (TIS) were investigated for patients at different stages of PD and at different medication time points. Nineteen PD patients and 22 healthy controls performed repeated spiral drawing tasks on a smartphone. Patients performed the tests before a single levodopa dose and at specific time intervals after the dose was given. Three movement disorder specialists rated videos of the patients based on the unified PD rating scale (UPDRS) and the Dyskinesia scale. Differences in mean TIS between the groups of patients and healthy subjects were assessed. Test-retest reliability of the TIS was measured. The ability of TIS to detect changes from baseline (before medication) to later time points was investigated. Correlations between TIS and clinical rating scores were assessed. The mean TIS was significantly different between healthy subjects and patients in advanced stages (p-value = 0.02). Test-retest reliability of TIS was good, with an intra-class correlation coefficient of 0.81. When assessing changes in relation to treatment, TIS captured some information on changes from Off to On and on wearing-off effects. However, the correlations between TIS and clinical scores (UPDRS and Dyskinesia) were weak. TIS was able to differentiate spiral drawings drawn by patients in an advanced stage from those drawn by healthy subjects, and TIS had good test-retest reliability. TIS was somewhat responsive to single-dose levodopa treatment. Since TIS is a high-frequency measure of upper-limb movement, it captures irregularities that cannot be detected by visual clinical assessment.
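
    The abstract does not give the entropy formula; approximate entropy of the drawing-speed sequence is one standard choice for this kind of temporal-irregularity measure, and the sketch below uses it purely as an illustration (whether TIS uses exactly this estimator, and the parameters m and r, are assumptions).

        # Minimal sketch: approximate entropy (ApEn) of a drawing-speed
        # sequence as a temporal-irregularity measure. Higher values mean
        # more irregular timing. m and r are conventional defaults.
        import numpy as np

        def apen(x, m=2, r_factor=0.2):
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()

            def phi(m):
                n = len(x) - m + 1
                emb = np.array([x[i:i + m] for i in range(n)])
                # Chebyshev distance between all pairs of embedded vectors
                d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                return np.mean(np.log((d <= r).mean(axis=1)))

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(2)
        smooth = np.sin(np.linspace(0, 20 * np.pi, 400))    # regular drawing
        tremulous = smooth + 0.5 * rng.normal(size=400)     # irregular drawing
        print("ApEn smooth:    %.3f" % apen(smooth))
        print("ApEn tremulous: %.3f" % apen(tremulous))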

  18. TH-AB-201-01: A Feasibility Study of Independent Dose Verification for CyberKnife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sato, A; Noda, T; Keduka, Y

    2016-06-15

    Purpose: Compared to conventional linac treatments, CyberKnife irradiation is composed of many small, intensity-modulated beams, and few publications on independent dose calculation verification for CyberKnife have been reported. In this study, we evaluated the feasibility of independent dose verification for CyberKnife treatment as a secondary check. Methods: The following were measured: test plans using static, single beams, and clinical plans in a phantom and using patient CTs. 75 patient plans were collected from several treatment sites: brain, lung, liver and bone. In the test plans and the phantom plans, pinpoint ion-chamber measurements were performed to assess dose deviation for the treatment planning system (TPS) and an independent verification program, Simple MU Analysis (SMU). In the clinical plans, dose deviation between the SMU and the TPS was assessed. Results: In the test plans, the dose deviations were 3.3±4.5% and 4.1±4.4% for the TPS and the SMU, respectively. In the phantom measurements for the clinical plans, the dose deviations were −0.2±3.6% for the TPS and −2.3±4.8% for the SMU. In the clinical plans using patient CTs, the dose deviations were −3.0±2.1% (mean±1SD). The systematic difference was partially derived from the inverse-square law and penumbra calculation. Conclusion: The independent dose calculation for CyberKnife shows −3.0±4.2% (mean±2SD); in our study, the confidence limit was within the 5% tolerance level of AAPM Task Group 114 for non-IMRT treatment. Thus, it may be feasible to use independent dose calculation verification for CyberKnife treatment as a secondary check. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).

  19. Verification of Meteorological and Oceanographic Ensemble Forecasts in the U.S. Navy

    NASA Astrophysics Data System (ADS)

    Klotz, S.; Hansen, J.; Pauley, P.; Sestak, M.; Wittmann, P.; Skupniewicz, C.; Nelson, G.

    2013-12-01

    The Navy Ensemble Forecast Verification System (NEFVS) has been promoted recently to operational status at the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC). NEFVS processes FNMOC and National Centers for Environmental Prediction (NCEP) meteorological and ocean wave ensemble forecasts, gridded forecast analyses, and innovation (observational) data output by FNMOC's data assimilation system. The NEFVS framework consists of statistical analysis routines, a variety of pre- and post-processing scripts to manage data and plot verification metrics, and a master script to control application workflow. NEFVS computes metrics that include forecast bias, mean-squared error, conditional error, conditional rank probability score, and Brier score. The system also generates reliability and Receiver Operating Characteristic diagrams. In this presentation we describe the operational framework of NEFVS and show examples of verification products computed from ensemble forecasts, meteorological observations, and forecast analyses. The construction and deployment of NEFVS addresses important operational and scientific requirements within Navy Meteorology and Oceanography. These include computational capabilities for assessing the reliability and accuracy of meteorological and ocean wave forecasts in an operational environment, for quantifying effects of changes and potential improvements to the Navy's forecast models, and for comparing the skill of forecasts from different forecast systems. NEFVS also supports the Navy's collaboration with the U.S. Air Force, NCEP, and Environment Canada in the North American Ensemble Forecast System (NAEFS) project and with the Air Force and the National Oceanic and Atmospheric Administration (NOAA) in the National Unified Operational Prediction Capability (NUOPC) program. This program is tasked with eliminating unnecessary duplication within the three agencies, accelerating the transition of new technology, such as multi-model ensemble forecasting, to U.S. Department of Defense use, and creating a superior U.S. global meteorological and oceanographic prediction capability. Forecast verification is an important component of NAEFS and NUOPC. Distribution Statement A: Approved for Public Release; distribution is unlimited

  20. Verification of Meteorological and Oceanographic Ensemble Forecasts in the U.S. Navy

    NASA Astrophysics Data System (ADS)

    Klotz, S. P.; Hansen, J.; Pauley, P.; Sestak, M.; Wittmann, P.; Skupniewicz, C.; Nelson, G.

    2012-12-01

    The Navy Ensemble Forecast Verification System (NEFVS) has been promoted recently to operational status at the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC). NEFVS processes FNMOC and National Centers for Environmental Prediction (NCEP) meteorological and ocean wave ensemble forecasts, gridded forecast analyses, and innovation (observational) data output by FNMOC's data assimilation system. The NEFVS framework consists of statistical analysis routines, a variety of pre- and post-processing scripts to manage data and plot verification metrics, and a master script to control application workflow. NEFVS computes metrics that include forecast bias, mean-squared error, conditional error, conditional rank probability score, and Brier score. The system also generates reliability and Receiver Operating Characteristic diagrams. In this presentation we describe the operational framework of NEFVS and show examples of verification products computed from ensemble forecasts, meteorological observations, and forecast analyses. The construction and deployment of NEFVS addresses important operational and scientific requirements within Navy Meteorology and Oceanography (METOC). These include computational capabilities for assessing the reliability and accuracy of meteorological and ocean wave forecasts in an operational environment, for quantifying effects of changes and potential improvements to the Navy's forecast models, and for comparing the skill of forecasts from different forecast systems. NEFVS also supports the Navy's collaboration with the U.S. Air Force, NCEP, and Environment Canada in the North American Ensemble Forecast System (NAEFS) project and with the Air Force and the National Oceanic and Atmospheric Administration (NOAA) in the National Unified Operational Prediction Capability (NUOPC) program. This program is tasked with eliminating unnecessary duplication within the three agencies, accelerating the transition of new technology, such as multi-model ensemble forecasting, to U.S. Department of Defense use, and creating a superior U.S. global meteorological and oceanographic prediction capability. Forecast verification is an important component of NAEFS and NUOPC.

  1. SU-F-T-267: A Clarkson-Based Independent Dose Verification for the Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagata, H; Juntendo University, Hongo, Tokyo; Hongo, H

    2016-06-15

    Purpose: There have been few reports on independent dose verification for Tomotherapy. We evaluated the accuracy and the effectiveness of an independent dose verification system for Tomotherapy. Methods: Simple MU Analysis (SMU, Triangle Product, Ishikawa, Japan) was used as the independent verification system; it implements a Clarkson-based dose calculation algorithm using the CT image dataset. For dose calculation in the SMU, the Tomotherapy machine-specific dosimetric parameters (TMR, Scp, OAR and MLC transmission factor) were registered as the machine beam data. Dose calculation was performed after the Tomotherapy sinogram from the DICOM-RT plan information was converted to MU and MLC location information at more finely segmented control points. The performance of the SMU was assessed by a point dose measurement in non-IMRT and IMRT plans (simple target and mock prostate plans). Subsequently, treatment plans for 30 prostate patients were compared. Results: From the comparison, dose differences between the SMU and the measurement were within 3% for all cases in non-IMRT plans. In the IMRT plan for the simple target, the differences (average ± 1 SD) were −0.70±1.10% (SMU vs. TPS), −0.40±0.10% (measurement vs. TPS) and −1.20±1.00% (measurement vs. SMU), respectively. For the mock prostate, the differences were −0.40±0.60% (SMU vs. TPS), −0.50±0.90% (measurement vs. TPS) and −0.90±0.60% (measurement vs. SMU), respectively. For the patients' plans, the difference was −0.50±2.10% (SMU vs. TPS). Conclusion: A Clarkson-based independent dose verification for Tomotherapy can be clinically available as a secondary check with a tolerance level similar to that of AAPM Task Group 114. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
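
    For readers unfamiliar with the Clarkson method named above, the sketch below shows its core idea in miniature: an irregular field is divided into equal angular sectors around the calculation point, and a radius-dependent scatter quantity is averaged over the sectors. The scatter table and sector radii are invented placeholders, not SMU beam data.

        import numpy as np

        def clarkson_scatter_factor(radii, scatter_table):
            # Average a radius-dependent scatter quantity over equal angular
            # sectors drawn from the calculation point to the field edge.
            return float(np.mean([scatter_table(r) for r in radii]))

        # Invented beam data: scatter grows smoothly with field radius (cm).
        table = lambda r: 1.0 + 0.02 * np.sqrt(r)
        # An irregular field described by 36 sector radii (10-degree sectors).
        radii = 5.0 + np.cos(np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False))
        print(clarkson_scatter_factor(radii, table))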

  2. Tolerance limits and methodologies for IMRT measurement-based verification QA: Recommendations of AAPM Task Group No. 218.

    PubMed

    Miften, Moyed; Olch, Arthur; Mihailidis, Dimitris; Moran, Jean; Pawlicki, Todd; Molineu, Andrea; Li, Harold; Wijesooriya, Krishni; Shi, Jie; Xia, Ping; Papanikolaou, Nikos; Low, Daniel A

    2018-04-01

    Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics are investigated. Measurement methods are reviewed and followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems are reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Operational shortcomings that can reduce the γ tool accuracy and subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and choice of tolerance limits for IMRT QA are made with focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent, and comparable IMRT QA criteria among institutions. © 2018 American Association of Physicists in Medicine.
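
    The γ metric central to TG-218 combines a dose-difference criterion with a distance-to-agreement criterion. The minimal 1-D sketch below shows the standard definition (real QA tools operate on 2-D or 3-D dose grids with interpolation; the criteria and profiles here are illustrative only): at each reference point, a generalized distance to the evaluated distribution is minimized, and points with γ ≤ 1 pass.

        import numpy as np

        def gamma_1d(dose_eval, dose_ref, x, dose_tol=0.03, dist_tol=3.0):
            # Global gamma on a common 1-D grid. dose_tol is a fraction of the
            # reference maximum; dist_tol is in the units of x (e.g. mm).
            d_norm = dose_tol * dose_ref.max()
            gamma = np.empty_like(dose_ref)
            for i, (xi, dr) in enumerate(zip(x, dose_ref)):
                dist2 = ((x - xi) / dist_tol) ** 2          # distance term
                dose2 = ((dose_eval - dr) / d_norm) ** 2    # dose term
                gamma[i] = np.sqrt((dist2 + dose2).min())   # min over eval points
            return gamma

        x = np.linspace(0.0, 100.0, 201)               # positions (mm)
        ref = np.exp(-((x - 50.0) / 20.0) ** 2)        # reference profile
        ev = 1.01 * np.exp(-((x - 51.0) / 20.0) ** 2)  # shifted, rescaled copy
        g = gamma_1d(ev, ref, x)
        print("pass rate:", (g <= 1.0).mean())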

  3. Monte Carlo verification of radiotherapy treatments with CloudMC.

    PubMed

    Miras, Hector; Jiménez, Rubén; Perales, Álvaro; Terrón, José Antonio; Bertolet, Alejandro; Ortiz, Antonio; Macías, José

    2018-06-27

    A new implementation has been made on CloudMC, a cloud-based platform presented in a previous work, in order to provide services for radiotherapy treatment verification by means of Monte Carlo in a fast, easy and economical way. A description of the architecture of the application and the new developments implemented is presented, together with the results of the tests carried out to validate its performance. CloudMC has been developed on the Microsoft Azure cloud. It is based on a map/reduce implementation that distributes Monte Carlo calculations over a dynamic cluster of virtual machines in order to reduce calculation time. CloudMC has been updated with new methods to read and process the information related to radiotherapy treatment verification: CT image set, treatment plan, structures and dose distribution files in DICOM format. Some tests have been designed in order to determine, for the different tasks, the most suitable type of virtual machine from those available in Azure. Finally, the performance of Monte Carlo verification in CloudMC is studied through three real cases that involve different treatment techniques, linac models and Monte Carlo codes. Considering computational and economic factors, D1_v2 and G1 virtual machines were selected as the default type for the Worker Roles and the Reducer Role, respectively. Calculation times of up to 33 min and costs of 16 € were achieved for the verification cases presented when a statistical uncertainty below 2% (2σ) was required. The costs were reduced to 3-6 € when uncertainty requirements were relaxed to 4%. Advantages like high computational power, scalability, easy access and a pay-per-usage model make Monte Carlo cloud-based solutions, like the one presented in this work, an important step forward toward solving the long-standing problem of truly introducing Monte Carlo algorithms into the daily routine of the radiotherapy planning process.
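
    As a toy analogue of the map/reduce scheme described above (not CloudMC itself; the scoring function below is a stand-in for a real Monte Carlo dose engine), histories can be split across independently seeded workers and the partial sums pooled into a mean with a statistical uncertainty estimate:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def worker(args):
            # "Map" step: score a batch of histories with an independent seed.
            seed, n_histories = args
            rng = np.random.default_rng(seed)
            scores = rng.exponential(scale=1.0, size=n_histories)  # fake tallies
            return scores.sum(), (scores ** 2).sum(), n_histories

        def reduce_tallies(results):
            # "Reduce" step: pool the partial sums into a mean and its
            # standard error (the reported statistical uncertainty).
            s, s2, n = map(sum, zip(*results))
            mean = s / n
            sem = np.sqrt((s2 / n - mean ** 2) / (n - 1))
            return mean, sem

        if __name__ == "__main__":
            jobs = [(seed, 250_000) for seed in range(8)]  # 8 "virtual machines"
            with ProcessPoolExecutor() as pool:
                results = list(pool.map(worker, jobs))
            print(reduce_tallies(results))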

  4. The facilitative effects of glucose ingestion on memory retrieval in younger and older adults: is task difficulty or task domain critical?

    PubMed

    Riby, Leigh M; McMurtrie, Hazel; Smallwood, Jonathan; Ballantyne, Carrie; Meikle, Andrew; Smith, Emily

    2006-02-01

    The ingestion of a glucose-containing drink has been shown to improve cognitive performance, particularly memory functioning. However, it remains unclear as to the extent to which task domain and task difficulty moderate the glucose enhancement effect. The aim of this research was to determine whether boosts in performance are restricted to particular classes of memory (episodic v. semantic) or to tasks of considerable cognitive load. A repeated measures (25 g glucose v. saccharin), counterbalanced, double-blind design was used with younger and older adults. Participants performed a battery of episodic (e.g. paired associate learning) and semantic memory (e.g. category verification) tasks under low and high cognitive load. Electrophysiological measures (heart rate and galvanic skin response) of arousal and mental effort were also gathered. The results indicated that whilst glucose appeared to aid episodic remembering, cognitive load did not exaggerate the facilitative effect. For semantic memory, there was little evidence to suggest that glucose can boost semantic memory retrieval even when the load was manipulated. One exception was that glucose facilitated performance during the difficult category fluency task. Regardless, the present findings are consistent with the domain-specific account in which glucose acts primarily on the hippocampal region, which is known to support episodic memory. The possible contribution of the hippocampus in semantic memory processing is also discussed.

  5. Measurement of Temperature and Soil Properties for Finite Element Model Verification

    DOT National Transportation Integrated Search

    2012-08-01

    In recent years, ADOT&PF personnel have used TEMP/W, a commercially available two-dimensional finite element program, to conduct thermal modeling of various embankment configurations in an effort to reduce the thawing of ice-rich permafrost through...

  6. An Abstract Systolic Model and Its Application to the Design of Finite Element Systems.

    DTIC Science & Technology

    1983-01-01

    networks as a collection of communicating parallel processes, some of the techniques for the verification of distributed systems (see for... item must be collected, even if there is no interest in its value. In this case, the collection of the data is simply achieved by changing the state of... the appropriate data as well as for collecting the output data and performing some additional tasks that we will discuss later. A basic functional

  7. Space station definition and preliminary design, WP-01. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Lenda, J. A.

    1987-01-01

    System activities are summarized and an overview of the system level engineering tasks performed are provided. Areas discussed include requirements, system test and verification, the advanced development plan, customer accommodations, software, growth, productivity, operations, product assurance and metrication. The hardware element study results are summarized. Overviews of recommended configurations are provided for the core module, the USL, the logistics elements, the propulsion subsystems, reboost, vehicle accommodations, and the smart front end. A brief overview is provided for costing activities.

  8. Missile and Space Systems Reliability versus Cost Trade-Off Study

    DTIC Science & Technology

    1983-01-01

    reliability problems, which has the real bearing on program effectiveness. A well planned and funded reliability effort can prevent or ferret out... failure analysis, and the incorporation and verification of design corrections to prevent recurrence of failures. 302.2.2 A TMJ test plan shall be

  9. FY15 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Shemon, E. R.; Smith, M. A.

    2015-09-30

    This report summarizes the current status of NEAMS activities in FY2015. The tasks this year are (1) to improve solution methods for steady-state and transient conditions, (2) to develop features and user friendliness to increase the usability and applicability of the code, (3) to improve and verify the multigroup cross section generation scheme, (4) to perform verification and validation tests of the code using SFRs and thermal reactor cores, and (5) to support early users of PROTEUS and update the user manuals.

  10. Enhancement and Verification of the Navy CASEE Model (Calendar Year 1982 Task).

    DTIC Science & Technology

    1982-12-15

  11. Urine sampling and collection system optimization and testing

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Geating, J. A.; Koesterer, M. G.

    1975-01-01

    A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.

  12. Technology development in support of the TWRS process flowsheet. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washenfelder, D.J.

    1995-10-11

    The Tank Waste Remediation System (TWRS) is to treat and dispose of Hanford's Single-Shell and Double-Shell Tank Waste. The TWRS Process Flowsheet (WHC-SD-WM-TI-613 Rev. 1) described a flowsheet based on a large number of assumptions and engineering judgements that require verification or further definition through process and technology development activities. This document builds on the TWRS Process Flowsheet to identify and prioritize tasks that should be completed to strengthen the technical foundation for the flowsheet.

  13. External tank aerothermal design criteria verification

    NASA Technical Reports Server (NTRS)

    Praharaj, Sarat C.; Saladino, Anthony J.

    1991-01-01

    If a Space Shuttle Main Engine (SSME) fails during the initial 160 seconds of the Shuttle flight, a return-to-launch-site maneuver will be implemented. The period of concern for this task is the pitch-around maneuver when the vehicle is flying backward. The intent of this report is to identify and define the flowfield at the most critical locations from an environment perspective. The solution procedure used to predict the plume heating rates involves both computational analysis and engineering modeling.

  14. SIMULATED COAL GAS MCFC POWER PLANT SYSTEM VERIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.A. Scroppo

    1998-07-01

    This is the Technical Progress Report covering June 1998. All tasks have been completed, except for those discussed on the following pages. Unocal estimated the costs of dismantling and packaging the test facility for storage and shipment. The scope of work for the contract has been modified to accommodate the dismantling and packaging of the plant. An amendment to Sub-Contract No. MCP-9-UNO between M-C Power and Unocal has been executed which includes the Scope of Work in Unocal's cost estimate.

  15. SIMULATED COAL GAS MCFC POWER PLANT SYSTEM VERIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-10-31

    This is the Technical Progress Report covering October 1998. All tasks have been completed, except for those discussed on the following pages. Unocal estimated the costs of dismantling and packaging the test facility for storage and shipment. The scope of work for the contract has been modified to accommodate the dismantling and packaging of the plant. An amendment to Sub-Contract No. MCP-9-UNO between M-C Power and Unocal has been executed which includes the Scope of Work in Unocal's cost estimate.

  16. Security Police Career Ladders AFSCs 811X0, 811X2, and 811X2A.

    DTIC Science & Technology

    1984-11-01

    MONITORS (GRP658). Percent of members performing tasks (N=186):
    J424 Perform SPCDS operator reactions to sensor alarm, line fault, or unique line fault messages: 96
    J426 Perform SPCDS verification procedures: 96
    J423 Perform small permanent communications display segment (SPCDS) shut-down procedures: 92
    J425 Perform SPCDS start-up procedures: 91
    J419 Perform BISS operator reaction to prime power loss or severe weather warnings: 91
    E192 Make entries on AF

  17. Optical correlators for recognition of human face thermal images

    NASA Astrophysics Data System (ADS)

    Bauer, Joanna; Podbielska, Halina; Suchwalko, Artur; Mazurkiewicz, Jacek

    2005-09-01

    In this paper, the application of optical correlators for face thermogram recognition is described. The thermograms were collected from 27 individuals. For each person, 10 pictures were recorded under different conditions, and a database of 270 images was prepared. Two biometric systems, based on a joint transform correlator and a 4f correlator, were built. Each system was designed to perform two tasks: verification and identification. The recognition systems were tested and evaluated according to the Face Recognition Vendor Tests (FRVT).

  18. Verification of Concurrent Programs. Part II. Temporal Proof Principles.

    DTIC Science & Technology

    1981-09-01

    not modify any of the shared program variables. In order to ensure the correct synchronization between the processes we use three semaphore variables... direct, simple, and intuitive rules for the establishment of these properties. They usually replace long but repetitively similar chains of primitive... modify the variables on which Q actually depends. A typical case is that of semaphores. We have the following property: The Semaphore Variable Rule

  19. Demonstration of UXO-PenDepth for the Estimation of Projectile Penetration Depth

    DTIC Science & Technology

    2010-08-01

    Effects (JTCG/ME) in August 2001. The accreditation process included verification and validation (V&V) by a subject matter expert (SME) other than... Within UXO-PenDepth, there are three sets of input parameters that are required: impact conditions (Fig. 1a), penetrator properties, and target properties. The impact conditions that need to be defined are projectile orientation and impact velocity. The algorithm has been evaluated against

  20. Physical property measurements on analog granites related to the joint verification experiment

    NASA Astrophysics Data System (ADS)

    Martin, Randolph J., III; Coyner, Karl B.; Haupt, Robert W.

    1990-08-01

    A key element in the Joint Verification Experiment (JVE), conducted jointly by the United States and the USSR, is the analysis of the geology and physical properties of the rocks at the respective test sites. A study was initiated to examine unclassified crystalline rock specimens obtained from areas near the Soviet site, Semipalatinsk, and appropriate analog samples selected from Mt. Katahdin, Maine. These rocks were also compared to Sierra White and Westerly Granite, which have been studied in great detail. The measurements performed to characterize these rocks were: (1) uniaxial strain with simultaneous compressional and shear wave velocities; (2) hydrostatic compression to 150 MPa with simultaneous compressional and shear wave velocities; and (3) attenuation measurements as a function of frequency and strain amplitude for both dry and water-saturated conditions. Elastic moduli determined from the hydrostatic compression and uniaxial strain tests show that the rock matrix/mineral properties were comparable in magnitude, varying within 25 percent from sample to sample. These properties appear to be approximately isotropic, especially at high pressures. However, the anisotropy evident for certain samples at pressures below 35 MPa is attributed to dominant pre-existing microcrack populations and their alignments. The dependence of extensional attenuation and Young's modulus on strain amplitude was experimentally determined for intact Sierra White granite using the hysteresis loop technique.
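
    The elastic moduli mentioned above follow from density and wave speeds through the standard isotropic relations; a small sketch (the granite values are assumed purely for illustration):

        def isotropic_moduli(rho, vp, vs):
            # Dynamic moduli of an isotropic rock from density (kg/m^3) and
            # P/S wave speeds (m/s); all moduli returned in Pa.
            G = rho * vs ** 2                                 # shear modulus
            K = rho * (vp ** 2 - 4.0 * vs ** 2 / 3.0)         # bulk modulus
            E = 9.0 * K * G / (3.0 * K + G)                   # Young's modulus
            nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))  # Poisson's ratio
            return K, G, E, nu

        # Representative granite values, assumed purely for illustration.
        print(isotropic_moduli(rho=2650.0, vp=5500.0, vs=3200.0))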

  1. Development of procedures for calculating stiffness and damping properties of elastomers in engineering applications. Part 1: Verification of basic methods

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tessarzik, J. M.; Badgley, R. H.

    1972-01-01

    The primary aim of this investigation was verification of basic methods which are to be used in cataloging elastomer dynamic properties (stiffness and damping) in terms of viscoelastic model constants. These constants may then be used to predict dynamic properties for general elastomer shapes and operating conditions, thereby permitting optimum application of elastomers as energy absorption and/or energy storage devices in the control of vibrations in a broad variety of applications. The efforts reported involved: (1) literature search; (2) the design, fabrication and use of a test rig for obtaining elastomer dynamic test data over a wide range of frequencies, amplitudes, and preloads; and (3) the reduction of the test data, by means of a selected three-element elastomer model and specialized curve fitting techniques, to material properties. Material constants thus obtained have been used to calculate stiffness and damping for comparison with measured test data. These comparisons are excellent for a number of test conditions and only fair to poor for others. The results confirm the validity of the basic approach of the overall program and the mechanics of the cataloging procedure, and at the same time suggest areas in which refinements should be made.
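
    A common choice for the three-element model mentioned above is a spring in parallel with a Maxwell arm (a spring and a dashpot in series); whether this matches the report's exact topology is an assumption made only for this sketch. The code evaluates the model's frequency-dependent stiffness and equivalent viscous damping from its complex stiffness:

        import numpy as np

        def sls_complex_stiffness(omega, k1, k2, c):
            # Spring k1 in parallel with a Maxwell arm (spring k2 + dashpot c).
            maxwell = 1j * omega * c * k2 / (k2 + 1j * omega * c)
            k_star = k1 + maxwell
            # Storage stiffness and equivalent viscous damping coefficient.
            return k_star.real, k_star.imag / omega

        omega = np.logspace(0, 4, 5)   # excitation frequencies (rad/s)
        k, b = sls_complex_stiffness(omega, k1=1.0e5, k2=5.0e5, c=50.0)
        for w, ki, bi in zip(omega, k, b):
            print(f"w = {w:8.1f} rad/s   k = {ki:9.1f} N/m   c_eq = {bi:6.3f} N*s/m")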

  2. Verification of Gyrokinetic codes: Theoretical background and applications

    NASA Astrophysics Data System (ADS)

    Tronko, Natalia; Bottino, Alberto; Görler, Tobias; Sonnendrücker, Eric; Told, Daniel; Villard, Laurent

    2017-05-01

    In fusion plasmas, the strong magnetic field allows the fast gyro-motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, the gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the subsequent transport. Naturally, these codes require thorough verification and validation. Here, we present a new and generic theoretical framework and specific numerical applications to test the faithfulness of the implemented models to theory and to verify the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which has rarely been done and therefore makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools such as variational formulation of dynamics for systematization of basic GK code's equations to access the limits of their applicability. The verification of the numerical scheme is proposed via the benchmark effort. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC) and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations implemented in the ORB5 and GENE codes using the Lagrangian variational formulation. At the computational level, detailed verifications of global electromagnetic test cases developed from the CYCLONE Base Case are considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.

  3. Optical properties of fly ash. Volume 2, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Self, S.A.

    1994-12-01

    Research performed under this contract was divided into four tasks under the following headings: Task 1, Characterization of fly ash; Task 2, Measurements of the optical constants of slags; Task 3, Calculations of the radiant properties of fly ash dispersions; and Task 4, Measurements of the radiant properties of fly ash dispersions. Tasks 1 and 4 constituted the Ph.D. research topic of Sarbajit Ghosal, while Tasks 2 and 3 constituted the Ph.D. research topic of Jon Ebert. Together their doctoral dissertations give a complete account of the work performed. This final report, issued in two volumes, consists of an executive summary of the whole program followed by the dissertations of Ghosal and Ebert. Volume 2 contains the dissertation of Ebert, which covers the measurements of the optical constants of slags and calculations of the radiant properties of fly ash dispersions. A list of publications and conference presentations resulting from the work is also included.

  4. Reducing Wrong Patient Selection Errors: Exploring the Design Space of User Interface Techniques

    PubMed Central

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients’ identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed. PMID:25954415

  5. Evaluation of speaker de-identification based on voice gender and age conversion

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich

    2018-03-01

    Two basic tasks are covered in this paper. The first consists of the design and practical testing of a new method for voice de-identification that changes the apparent age and/or gender of a speaker by multi-segmental frequency scale transformation combined with prosody modification. The second task is aimed at verification of the applicability of a classifier based on Gaussian mixture models (GMM) to detect the original Czech and Slovak speakers after the applied voice de-identification. The experiments performed confirm the functionality of the developed gender and age conversion for all selected types of de-identification, which can be objectively evaluated by the GMM-based open-set classifier. The original speaker detection accuracy was also compared for sentences uttered by German and English speakers, showing the language independence of the proposed method.
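
    As an illustration of the GMM-based open-set detection mentioned above (a generic sketch, not the authors' system; the features are synthetic stand-ins for spectral features such as MFCCs), a per-speaker model can be scored against a background model and the log-likelihood ratio thresholded:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)

        def features(center, n=400):
            # Stand-in for per-frame spectral feature vectors of one speaker.
            return center + rng.normal(scale=1.0, size=(n, 12))

        enroll = {"spk_a": features(0.0), "spk_b": features(3.0)}

        # A universal background model trained on pooled speech ...
        ubm = GaussianMixture(n_components=8, random_state=0).fit(
            np.vstack(list(enroll.values())))
        # ... and one GMM per enrolled speaker.
        models = {name: GaussianMixture(n_components=8, random_state=0).fit(x)
                  for name, x in enroll.items()}

        def verify(test_x, claimed, threshold=0.0):
            # Accept the claimed identity only if the average log-likelihood
            # ratio against the background model clears the threshold.
            llr = models[claimed].score(test_x) - ubm.score(test_x)
            return llr > threshold

        print(verify(features(0.0), "spk_a"))   # genuine trial: expected True
        print(verify(features(3.0), "spk_a"))   # impostor trial: expected False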

  6. NASA Aerospace Flight Battery Program: Recommendations for Technical Requirements for Inclusion in Aerospace Battery Procurements. Volume 1, Part 2

    NASA Technical Reports Server (NTRS)

    Jung, David S.; Manzo, Michelle A.

    2010-01-01

    This NASA Aerospace Flight Battery Systems Working Group was chartered within the NASA Engineering and Safety Center (NESC). The Battery Working Group was charged with completing assigned tasks and proposing proactive work to address battery-related, agency-wide issues on an annual basis. In its first year of operation, this proactive program addressed various aspects of the validation and verification of aerospace battery systems for NASA missions. Studies were performed, issues were discussed and in many cases, test programs were executed to generate recommendations and guidelines to reduce risk associated with various aspects of implementing battery technology in the aerospace industry. This document contains Part 2 - Volume I: Recommendations for Technical Requirements for Inclusion in Aerospace Battery Procurements of the program's operations.

  7. Reducing wrong patient selection errors: exploring the design space of user interface techniques.

    PubMed

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients' identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed.

  8. NASA Aerospace Flight Battery Program: Wet Life of Nickel-Hydrogen (Ni-H2) Batteries. Volume 1, Part 3

    NASA Technical Reports Server (NTRS)

    Jung, David S.; Lee, Leonine S.; Manzo, Michelle A.

    2010-01-01

    This NASA Aerospace Flight Battery Systems Working Group was chartered within the NASA Engineering and Safety Center (NESC). The Battery Working Group was charged with completing assigned tasks and proposing proactive work to address battery-related, agency-wide issues on an annual basis. In its first year of operation, this proactive program addressed various aspects of the validation and verification of aerospace battery systems for NASA missions. Studies were performed, issues were discussed and in many cases, test programs were executed to generate recommendations and guidelines to reduce risk associated with various aspects of implementing battery technology in the aerospace industry. This document contains Part 3 - Volume I: Wet Life of Nickel-Hydrogen (Ni-H2) Batteries of the program's operations.

  9. FIELD VERIFICATION OF LINERS FROM SANITARY LANDFILLS

    EPA Science Inventory

    Liner specimens from three existing landfill sites were collected and examined to determine the changes in their physical properties over time and to validate data being developed through laboratory research. Samples examined included a 15-mil PVC liner from a sludge lagoon in Ne...

  10. Verification of Hydrologic Landscape Derived Basin-Scale Classifications in the Pacific Northwest

    EPA Science Inventory

    The interaction between the physical properties of a catchment (form) and climatic forcing of precipitation and energy control how water is partitioned, stored, and conveyed through a catchment (function). Hydrologic Landscapes (HLs) were previously developed across Oregon and de...

  11. Leveraging pattern matching to solve SRAM verification challenges at advanced nodes

    NASA Astrophysics Data System (ADS)

    Kan, Huan; Huang, Lucas; Yang, Legender; Zou, Elaine; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang; Zhu, Yu; Zhang, Recoo; Huang, Elven; Muirhead, Jonathan

    2018-03-01

    Memory is a critical component in today's system-on-chip (SoC) designs. Static random-access memory (SRAM) blocks are assembled by combining intellectual property (IP) blocks that come from SRAM libraries developed and certified by the foundries for both functionality and a specific process node. Customers place these SRAM IP in their designs, adjusting as necessary to achieve DRC-clean results. However, any changes a customer makes to these SRAM IP during implementation, whether intentionally or in error, can impact yield and functionality. Physical verification of SRAM has always been a challenge, because these blocks usually contain smaller feature sizes and spacing constraints compared to traditional logic or other layout structures. At advanced nodes, critical dimension becomes smaller and smaller, until there is almost no opportunity to use optical proximity correction (OPC) and lithography to adjust the manufacturing process to mitigate the effects of any changes. The smaller process geometries, reduced supply voltages, increasing process variation, and manufacturing uncertainty mean accurate SRAM physical verification results are not only reaching new levels of difficulty, but also new levels of criticality for design success. In this paper, we explore the use of pattern matching to create an SRAM verification flow that provides both accurate, comprehensive coverage of the required checks and visual output to enable faster, more accurate error debugging. Our results indicate that pattern matching can enable foundries to improve SRAM manufacturing yield, while allowing designers to benefit from SRAM verification kits that can shorten the time to market.

  12. Integrating Conceptual Knowledge Within and Across Representational Modalities

    PubMed Central

    McNorgan, Chris; Reid, Jackie; McRae, Ken

    2011-01-01

    Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants’ knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual feature verification task. The pattern of decision latencies across Experiments 1 to 4 is consistent with a deep integration hierarchy. PMID:21093853

  13. Test-retest reliability of fMRI-based graph theoretical properties during working memory, emotion processing, and resting state.

    PubMed

    Cao, Hengyi; Plichta, Michael M; Schäfer, Axel; Haddad, Leila; Grimm, Oliver; Schneider, Michael; Esslinger, Christine; Kirsch, Peter; Meyer-Lindenberg, Andreas; Tost, Heike

    2014-01-01

    The investigation of the brain connectome with functional magnetic resonance imaging (fMRI) and graph theory analyses has recently gained much popularity, but little is known about the robustness of these properties, in particular those derived from active fMRI tasks. Here, we studied the test-retest reliability of brain graphs calculated from 26 healthy participants with three established fMRI experiments (n-back working memory, emotional face-matching, resting state) and two parcellation schemes for node definition (the AAL atlas and the functional atlas proposed by Power et al.). We compared the intra-class correlation coefficients (ICCs) of five different data processing strategies and demonstrated a superior reliability of task-regression methods with condition-specific regressors. The between-task comparison revealed significantly higher ICCs for resting state relative to the active tasks, and a superiority of the n-back task relative to the face-matching task for global and local network properties. While the mean ICCs were typically lower for the active tasks, overall fair to good reliabilities were detected for global and local connectivity properties and, for the n-back task with both atlases, for small-worldness. For all three tasks and atlases, low mean ICCs were seen for the local network properties. However, node-specific good reliabilities were detected for node degree in regions known to be critical for the challenged functions (resting state: default-mode network nodes; n-back: fronto-parietal nodes; face-matching: limbic nodes). The between-atlas comparison demonstrated significantly higher reliabilities for the functional parcellations for global and local network properties. Our findings can inform the choice of processing strategies, brain atlases and outcome properties for fMRI studies using active tasks, graph theory methods, and within-subject designs, in particular future pharmaco-fMRI studies. © 2013 Elsevier Inc. All rights reserved.
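
    The ICC used above can be computed directly from a subjects-by-sessions matrix; below is a minimal sketch of ICC(3,1), one common variant (the paper's exact ICC formulation is not restated here, and the data are synthetic):

        import numpy as np

        def icc_3_1(ratings):
            # ICC(3,1): two-way mixed model, consistency, single measurement.
            # ratings: (n_subjects, k_sessions), e.g. a graph metric at test/retest.
            n, k = ratings.shape
            grand = ratings.mean()
            ss_total = ((ratings - grand) ** 2).sum()
            ss_subj = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
            ss_sess = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
            ss_err = ss_total - ss_subj - ss_sess
            bms = ss_subj / (n - 1)                    # between-subjects MS
            ems = ss_err / ((n - 1) * (k - 1))         # residual MS
            return (bms - ems) / (bms + (k - 1) * ems)

        rng = np.random.default_rng(2)
        trait = rng.normal(size=(26, 1))               # stable subject-level value
        sessions = trait + rng.normal(scale=0.4, size=(26, 2))
        print(icc_3_1(sessions))                       # near 1 = good reliability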

  14. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for turbulent fluid-particle flows

    DOE PAGES

    Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...

    2017-09-01

    Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature–based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model. We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
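
    A standard ingredient of such grid-refinement verification is the observed order of accuracy obtained from solutions on three systematically refined grids; a minimal sketch of this generic Richardson-style check (not the authors' code; the toy statistic is invented):

        import numpy as np

        def observed_order(f_coarse, f_medium, f_fine, r=2.0):
            # Observed order of accuracy from three grids with a constant
            # refinement ratio r.
            return np.log(abs(f_coarse - f_medium) /
                          abs(f_medium - f_fine)) / np.log(r)

        # Toy statistic converging at second order: f(h) = f_exact + C * h**2.
        f = lambda h: 1.0 + 0.3 * h ** 2
        print(observed_order(f(0.4), f(0.2), f(0.1)))   # approximately 2.0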

  15. The FoReVer Methodology: A MBSE Framework for Formal Verification

    NASA Astrophysics Data System (ADS)

    Baracchi, Laura; Mazzini, Silvia; Cimatti, Alessandro; Tonetta, Stefano; Garcia, Gerald

    2013-08-01

    The need for high level of confidence and operational integrity in critical space (software) systems is well recognized in the Space industry and has been addressed so far through rigorous System and Software Development Processes and stringent Verification and Validation regimes. The Model Based Space System Engineering process (MBSSE) derived in the System and Software Functional Requirement Techniques study (SSFRT) focused on the application of model based engineering technologies to support the space system and software development processes, from mission level requirements to software implementation through model refinements and translations. In this paper we report on our work in the ESA-funded FoReVer project where we aim at developing methodological, theoretical and technological support for a systematic approach to the space avionics system development, in phases 0/A/B/C. FoReVer enriches the MBSSE process with contract-based formal verification of properties, at different stages from system to software, through a step-wise refinement approach, with the support for a Software Reference Architecture.

  16. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for turbulent fluid-particle flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Ravi G.; Desjardins, Olivier; Kong, Bo

    Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature–based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model. We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.

  17. Multipartite entanglement verification resistant against dishonest parties.

    PubMed

    Pappa, Anna; Chailloux, André; Wehner, Stephanie; Diamanti, Eleni; Kerenidis, Iordanis

    2012-06-29

    Future quantum information networks will consist of quantum and classical agents, who have the ability to communicate in a variety of ways with trusted and untrusted parties and securely delegate computational tasks to untrusted large-scale quantum computing servers. Multipartite quantum entanglement is a fundamental resource for such a network and, hence, it is imperative to study the possibility of verifying a multipartite entanglement source in a way that is efficient and provides strong guarantees even in the presence of multiple dishonest parties. In this Letter, we show how an agent of a quantum network can perform a distributed verification of a source creating multipartite Greenberger-Horne-Zeilinger (GHZ) states with minimal resources, which is, nevertheless, resistant against any number of dishonest parties. Moreover, we provide a tight tradeoff between the level of security and the distance between the state produced by the source and the ideal GHZ state. Last, by adding the resource of a trusted common random source, we can further provide security guarantees for all honest parties in the quantum network simultaneously.
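
    One conventional, trusted-lab way to check closeness to the ideal GHZ state (much weaker than the distributed, dishonest-party-resistant protocol of the Letter, and shown here only to fix ideas) is to evaluate the state's stabilizer generators, all of which equal +1 exactly for the ideal state:

        import numpy as np

        I2 = np.eye(2, dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def kron_all(ops):
            out = np.array([[1.0 + 0.0j]])
            for op in ops:
                out = np.kron(out, op)
            return out

        def ghz_state(n):
            psi = np.zeros(2 ** n, dtype=complex)
            psi[0] = psi[-1] = 1.0 / np.sqrt(2.0)   # (|0...0> + |1...1>)/sqrt(2)
            return psi

        def stabilizer_expectations(psi, n):
            # Generators: X on every party, plus Z Z on each adjacent pair.
            gens = [kron_all([X] * n)]
            for k in range(n - 1):
                ops = [I2] * n
                ops[k], ops[k + 1] = Z, Z
                gens.append(kron_all(ops))
            return [float(np.real(psi.conj() @ g @ psi)) for g in gens]

        print(stabilizer_expectations(ghz_state(3), 3))   # [1.0, 1.0, 1.0]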

  18. Aerospace Nickel-cadmium Cell Verification

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle A.; Strawn, D. Michael; Hall, Stephen W.

    2001-01-01

    During the early years of satellites, NASA successfully flew "NASA-Standard" nickel-cadmium (Ni-Cd) cells manufactured by GE/Gates/SAFT on a variety of spacecraft. In 1992 a NASA Battery Review Board determined that the strategy of a NASA Standard Cell and Battery Specification and the accompanying NASA control of a standard manufacturing control document (MCD) for Ni-Cd cells and batteries was unwarranted. As a result of that determination, standards were abandoned and the use of cells other than the NASA Standard was required. In order to gain insight into the performance and characteristics of the various aerospace Ni-Cd products available, tasks were initiated within the NASA Aerospace Flight Battery Systems Program that involved the procurement and testing of representative aerospace Ni-Cd cell designs. A standard set of test conditions was established in order to provide similar information about the products from various vendors. The objective of this testing was to provide independent verification of representative commercial flight cells available in the marketplace today. This paper will provide a summary of the verification tests run on cells from various manufacturers: Sanyo 35 ampere-hour (Ah) standard and 35 Ah advanced Ni-Cd cells, SAFT 50 Ah Ni-Cd cells, and Eagle-Picher 21 Ah Magnum and 21 Ah Super Ni-Cd cells were put through a full evaluation. A limited number of 18 and 55 Ah cells from Acme Electric were also tested to provide an initial evaluation of the Acme aerospace cell designs. Additionally, 35 Ah aerospace-design Ni-MH cells from Sanyo were evaluated under the standard conditions established for this program. The test program is essentially complete. The cell design parameters, the verification test plan and the details of the test results will be discussed.

  19. Quality Assurance of Chemical Measurements.

    ERIC Educational Resources Information Center

    Taylor, John K.

    1981-01-01

    Reviews aspects of quality control (methods to control errors) and quality assessment (verification that systems are operating within acceptable limits) including an analytical measurement system, quality control by inspection, control charts, systematic errors, and use of SRMs, materials for which properties are certified by the National Bureau…

  20. Field verification of geogrid properties for base course reinforcement applications : final report.

    DOT National Transportation Integrated Search

    2013-11-01

    The proposed field study is a continuation of a recently concluded, ODOT-funded project titled: Development of ODOT Guidelines for the Use of Geogrids in Aggregate Bases, which is aimed at addressing the need for improved guidelines for base reinforc...

  1. Design and Verification of Blast Densification for Highway Embankments of Liquefiable Sands

    DOT National Transportation Integrated Search

    2012-10-26

    As part of a larger effort to investigate the effects of blast densification on the properties and behavior of compacted sand deposits, this study presents a procedure for replicating in the laboratory the occluded gas bubbles believed to exist i...

  2. Investigation of Trends in Aerosol Direct Radiative Effects over North America Using a Coupled Meteorology-Chemistry Model

    EPA Science Inventory

    A comprehensive investigation of the processes regulating tropospheric aerosol distributions, their optical properties, and their radiative effects in conjunction with verification of their simulated radiative effects for past conditions relative to measurements is needed in orde...

  3. Design and Verification of a Distributed Communication Protocol

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    The safety of remotely operated vehicles depends on the correctness of the distributed protocol that facilitates the communication between the vehicle and the operator. A failure in this communication can result in catastrophic loss of the vehicle. To complicate matters, the communication system may be required to satisfy several, possibly conflicting, requirements. The design of protocols is typically an informal process based on successive iterations of a prototype implementation. Yet distributed protocols are notoriously difficult to get correct using such informal techniques. We present a formal specification of the design of a distributed protocol intended for use in a remotely operated vehicle, which is built from the composition of several simpler protocols. We demonstrate proof strategies that allow us to prove properties of each component protocol individually while ensuring that the property is preserved in the composition forming the entire system. Given that designs are likely to evolve as additional requirements emerge, we show how we have automated most of the repetitive proof steps to enable verification of rapidly changing designs.

  4. Apollo Soyuz Test Project Weights and Mass Properties Operational Management System

    NASA Technical Reports Server (NTRS)

    Collins, M. A., Jr.; Hischke, E. R.

    1975-01-01

    The Apollo Soyuz Test Project (ASTP) Weights and Mass Properties Operational Management System was established to assure a timely and authoritative method of acquiring, controlling, generating, and disseminating an official set of vehicle weights and mass properties data. This paper provides an overview of the system and its interaction with the various aspects of vehicle and component design, mission planning, hardware and software simulations and verification, and real-time mission support activities. The effect of vehicle configuration, design maturity, and consumables updates is discussed in the context of weight control.

  5. Functional roles of the cingulo-frontal network in performance on working memory.

    PubMed

    Kondo, Hirohito; Morishita, Masanao; Osaka, Naoyuki; Osaka, Mariko; Fukuyama, Hidenao; Shibasaki, Hiroshi

    2004-01-01

    We examined the relationship between brain activities and task performance on working memory. A large-scale study was initially administered to identify good and poor performers using the operation span and reading span tasks. On the basis of those span scores, we divided 20 consenting participants into high- and low-span groups. In an fMRI study, the participants performed verification of arithmetic problems and retention of target words either concurrently or separately. The behavioral results showed that performance was better in the high-span group than in the low-span group under a dual-task condition, but not under two single-task conditions. The anterior cingulate cortex (ACC), left prefrontal cortex (PFC), left inferior frontal cortex, and bilateral parietal cortex were primarily activated for both span groups. We found that signal changes in the ACC were greater in the high-span group than in the low-span group under the dual-task condition, but not under the single-task conditions. Structural equation modeling indicated that an estimate of effective connectivity from the ACC to the left PFC was positive for the high-span group and negative for the low-span group, suggesting that closer cooperation between the two brain regions was strongly related to working memory performance. We conclude that central executive functioning for attention shifting is modulated by the cingulo-frontal network.

  6. Post-error response inhibition in high math-anxious individuals: Evidence from a multi-digit addition task.

    PubMed

    Núñez-Peña, M Isabel; Tubau, Elisabet; Suárez-Pellicioni, Macarena

    2017-06-01

    The aim of the study was to investigate how high math-anxious (HMA) individuals react to errors in an arithmetic task. Twenty HMA and 19 low math-anxious (LMA) individuals were presented with a multi-digit addition verification task and were given response feedback. Post-error adjustment measures (response time and accuracy) were analyzed in order to study differences between groups when faced with errors in an arithmetical task. Results showed that both HMA and LMA individuals were slower to respond following an error than following a correct answer. However, post-error accuracy effects emerged only for the HMA group, showing that they were also less accurate after having committed an error than after giving the right answer. Importantly, these differences were observed only when individuals needed to repeat the same response given in the previous trial. These results suggest that, for HMA individuals, errors caused reactive inhibition of the erroneous response, facilitating performance if the next problem required the alternative response but hampering it if the response was the same. This stronger reaction to errors could be a factor contributing to the difficulties that HMA individuals experience in learning math and doing math tasks. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Visual Search in the Real World: Color Vision Deficiency Affects Peripheral Guidance, but Leaves Foveal Verification Largely Unaffected.

    PubMed

    Kugler, Günter; 't Hart, Bernard M; Kohlbecher, Stefan; Bartl, Klaus; Schumann, Frank; Einhäuser, Wolfgang; Schneider, Erich

    2015-01-01

    People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color perception and behavioral consequences might be found in the gaze behavior of people with color vision deficiency. A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. All participants wore a mobile eye-tracking-driven camera with a high foveal image resolution (EyeSeeCam). Search performance as well as fixations of objects of different colors were examined. Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, participants with color vision deficiencies exhibited a strongly degraded performance. This was closely matched by the number of fixations on red objects shown by the two groups. Importantly, once they fixated a target, participants with color vision deficiencies exhibited only few identification errors. In contrast to controls, participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that the impaired guiding is the main influence on search performance, while foveal identification (verification) is largely unaffected by the color vision deficiency.

  8. Changes in task-based effective connectivity in language networks following rehabilitation in post-stroke patients with aphasia

    PubMed Central

    Kiran, Swathi; Meier, Erin L.; Kapse, Kushal J.; Glynn, Peter A.

    2015-01-01

    In this study, we examined regions in the left and right hemisphere language network that were altered in terms of the underlying neural activation and effective connectivity subsequent to language rehabilitation. Eight persons with chronic post-stroke aphasia and eight normal controls participated in the current study. Patients received a 10 week semantic feature-based rehabilitation program to improve their skills. Therapy was provided on atypical examples of one trained category while two control categories were monitored; the categories were counterbalanced across patients. In each fMRI session, two experimental tasks were conducted: (a) picture naming and (b) semantic feature verification of trained and untrained categories. Analysis of treatment effect sizes revealed that all patients showed greater improvements on the trained category relative to untrained categories. Results from this study show remarkable patterns of consistency despite the inherent variability in lesion size and activation patterns across patients. Across patients, activation that emerged as a function of rehabilitation on the trained category included bilateral IFG, bilateral SFG, LMFG, and LPCG for picture naming; and bilateral IFG, bilateral MFG, LSFG, and bilateral MTG for semantic feature verification. Analysis of effective connectivity using Dynamic Causal Modeling (DCM) indicated that LIFG was the consistently significantly modulated region after rehabilitation across participants. These results indicate that language networks in patients with aphasia resemble normal language control networks and that this similarity is accentuated by rehabilitation. PMID:26106314

  9. Verification of NWP Cloud Properties using A-Train Satellite Observations

    NASA Astrophysics Data System (ADS)

    Kucera, P. A.; Weeks, C.; Wolff, C.; Bullock, R.; Brown, B.

    2011-12-01

    Recently, the NCAR Model Evaluation Tools (MET) has been enhanced to incorporate satellite observations for the verification of Numerical Weather Prediction (NWP) cloud products. We have developed tools that match fields spatially (both in the vertical and horizontal dimensions) to compare NWP products with satellite observations. These matched fields provide diagnostic evaluation of cloud macro attributes such as vertical distribution of clouds, cloud top height, and the spatial and seasonal distribution of cloud fields. For this research study, we have focused on using CloudSat, CALIPSO, and MODIS observations to evaluate cloud fields for a variety of NWP fields and derived products. We have selected cases ranging from large, mid-latitude synoptic systems to well-organized tropical cyclones. For each case, we matched the observed cloud field with gridded model and/or derived product fields. CloudSat and CALIPSO observations and model fields were matched and compared in the vertical along the orbit track. MODIS data and model fields were matched and compared in the horizontal. We then use MET to compute the verification statistics to quantify the performance of the models in representing the cloud fields. In this presentation we will give a summary of our comparison and show verification results for both synoptic and tropical cyclone cases.
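
    The along-track matching described above can be reduced, in its simplest form, to nearest-neighbor sampling of a gridded model field at the satellite footprint coordinates; a toy sketch (MET's actual matching supports interpolation and vertical profiles; the grid and track below are synthetic):

        import numpy as np

        def match_track_to_grid(track_lat, track_lon, grid_lat, grid_lon, field):
            # Nearest-neighbor sampling of a regular lat/lon grid at each
            # satellite footprint along the orbit track.
            i = np.abs(grid_lat[:, None] - track_lat[None, :]).argmin(axis=0)
            j = np.abs(grid_lon[:, None] - track_lon[None, :]).argmin(axis=0)
            return field[i, j]

        glat = np.linspace(-90.0, 90.0, 181)
        glon = np.linspace(-180.0, 179.0, 360)
        rng = np.random.default_rng(3)
        cloud_top = rng.uniform(2.0, 12.0, size=(181, 360))   # fake field (km)
        tlat = np.linspace(-30.0, 30.0, 50)                   # synthetic segment
        tlon = np.linspace(100.0, 130.0, 50)
        print(match_track_to_grid(tlat, tlon, glat, glon, cloud_top).shape)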

  10. Numerical verification of two-component dental implant in the context of fatigue life for various load cases.

    PubMed

    Szajek, Krzysztof; Wierszycki, Marcin

    2016-01-01

    Dental implant design is a complex process that must respect many limitations, both biological and mechanical in nature. In earlier studies, a complete procedure for improvement of a two-component dental implant was proposed. However, the optimization tasks carried out required an assumption about a representative load case, which raised doubts about optimality for the other load cases. This paper deals with verification of the optimal design in the context of fatigue life, and its main goal is to answer the question of whether the assumed load scenario (a solely horizontal occlusal load) leads to a design which is also "safe" for oblique occlusal loads regardless of the angle from the implant axis. The verification is carried out with a series of finite element analyses over a wide spectrum of physiologically justified loads. The design of experiments methodology with a full factorial technique is utilized. All computations are done in the Abaqus suite. The maximal Mises stress and normalized effective stress amplitude for various load cases are discussed and compared with the assumed "safe" limit (the equivalent of a fatigue life of 5e6 cycles). The obtained results prove that the coronal-apical load component should be taken into consideration when the fatigue life of a two-component dental implant is optimized. However, its influence in the analyzed case is small and does not change the fact that fatigue life improvement is observed for all components within the whole range of analyzed loads.
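
    The verification strategy described above, a full-factorial sweep of load cases checked against a fatigue-life limit, can be outlined as follows (the stress function, limit, and factor levels are invented placeholders standing in for finite element results):

        import itertools
        import numpy as np

        def stress_amplitude(angle_deg, force_n):
            # Invented stand-in for a finite element result: effective stress
            # amplitude for an oblique occlusal load (MPa).
            a = np.radians(angle_deg)
            return force_n * (0.9 * np.sin(a) + 0.2 * np.cos(a))

        SAFE_LIMIT = 80.0   # hypothetical equivalent of the 5e6-cycle criterion

        # Full-factorial design: load angle (deg from implant axis) x force (N).
        for angle, force in itertools.product([0, 15, 30, 45, 60, 75, 90],
                                              [50, 100, 150]):
            s = stress_amplitude(angle, force)
            flag = "ok" if s <= SAFE_LIMIT else "EXCEEDS"
            print(f"angle={angle:2d} deg  F={force:3d} N  amplitude={s:6.1f}  {flag}")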

  11. A Design Verification of the Parallel Pipelined Image Processings

    NASA Astrophysics Data System (ADS)

    Wasaki, Katsumi; Harai, Toshiaki

    2008-11-01

    This paper presents a case study of the design and verification of a parallel and pipelined image processing unit based on an extended Petri net called a Logical Colored Petri Net (LCPN), which is suitable for Flexible Manufacturing System (FMS) modeling and for discussion of structural properties. LCPN is another family of colored place/transition nets (CPNs) with the addition of the following features: integer value assignment to marks, representation of firing conditions as formulae over mark values, and coupling of output procedures with transition firing. Therefore, to study the behavior of a system modeled with this net, we provide a means of searching the reachability tree for markings.
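
    The reachability search mentioned above can be illustrated on an ordinary place/transition net; the sketch below omits LCPN's integer-valued marks and firing-condition formulae, and bounds the token count so the search stays finite.

        from collections import deque

        # A tiny place/transition net: each transition consumes and produces
        # tokens per place, given as (consume, produce) dictionaries.
        transitions = {
            "t1": ({"p1": 1}, {"p2": 1}),
            "t2": ({"p2": 1}, {"p3": 1, "p1": 1}),
        }
        places = ["p1", "p2", "p3"]

        def enabled(marking, consume):
            return all(marking[p] >= n for p, n in consume.items())

        def fire(marking, consume, produce):
            m = dict(marking)
            for p, n in consume.items():
                m[p] -= n
            for p, n in produce.items():
                m[p] = m.get(p, 0) + n
            return m

        def reachability(initial, max_tokens=5):
            """Breadth-first search of the reachability tree (bounded)."""
            seen = {tuple(initial[p] for p in places)}
            queue = deque([initial])
            while queue:
                m = queue.popleft()
                for consume, produce in transitions.values():
                    if enabled(m, consume):
                        m2 = fire(m, consume, produce)
                        key = tuple(m2[p] for p in places)
                        if key not in seen and all(v <= max_tokens for v in key):
                            seen.add(key)
                            queue.append(m2)
            return seen

        print(sorted(reachability({"p1": 1, "p2": 0, "p3": 0})))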

  12. Formal Analysis of BPMN Models Using Event-B

    NASA Astrophysics Data System (ADS)

    Bryans, Jeremy W.; Wei, Wei

    The use of business process models has gone far beyond documentation purposes. In the development of business applications, they can play the role of an artifact on which high-level properties can be verified and design errors can be revealed, in an effort to reduce overhead at later software development and diagnosis stages. This paper demonstrates how formal verification may add value to the specification, design, and development of business process models in an industrial setting. The analysis of these models is achieved via an algorithmic translation from the de facto standard business process modeling language BPMN to Event-B, a widely used formal language supported by the Rodin platform, which offers a range of simulation and verification technologies.
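
    To give a flavor of such a translation (this is not the paper's actual mapping), the sketch below turns BPMN sequence flows into Boolean token variables and each task into an Event-B-style event that consumes its incoming token and produces its outgoing one; the process and flow names are invented.

        # Hypothetical mini-translation (not the paper's algorithm): each BPMN
        # sequence flow becomes a Boolean token variable, and each task becomes
        # an Event-B-style event guarded by its incoming token.
        flows = {"f1": ("start", "CheckOrder"),
                 "f2": ("CheckOrder", "ShipOrder"),
                 "f3": ("ShipOrder", "end")}

        def event_for_task(task):
            ins = [f for f, (_, dst) in flows.items() if dst == task]
            outs = [f for f, (src, _) in flows.items() if src == task]
            guards = " & ".join(f"{f} = TRUE" for f in ins)
            actions = "; ".join([f"{f} := FALSE" for f in ins] +
                                [f"{f} := TRUE" for f in outs])
            return f"event {task} where {guards} then {actions} end"

        for task in ("CheckOrder", "ShipOrder"):
            print(event_for_task(task))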

  13. Perspectives of human verification via binary QRS template matching of single-lead and 12-lead electrocardiogram.

    PubMed

    Krasteva, Vessela; Jekova, Irena; Schmid, Ramun

    2018-01-01

    This study aims to validate the 12-lead electrocardiogram (ECG) as a biometric modality based on two straightforward binary QRS template matching characteristics. Different perspectives of the human verification problem are considered, regarding optimal lead selection and stability over sample size, gender, age, and heart rate (HR). A clinical 12-lead resting ECG database, including a population of 460 subjects with two-session recordings (>1 year apart), is used. Cost-effective strategies extract personalized QRS patterns (100 ms), and binary template matching estimates similarity in the time scale (matching time) and dissimilarity in the amplitude scale (mismatch area). The two-class person verification task, taking the decision to validate or to reject the subject identity, is managed by linear discriminant analysis (LDA). Non-redundant LDA models for different lead configurations (I, II, III, aVR, aVL, aVF, V1-V6) are trained on the first half of 230 subjects by stepwise feature selection until maximization of the area under the receiver operating characteristic curve (ROC AUC). The operating point on the training ROC at the equal error rate (EER) is tested on the independent dataset (second half of 230 subjects) to report an unbiased validation of test-ROC AUC and true verification rate (TVR = 100-EER). The test results are further evaluated in groups by sample size, gender, age, and HR. The optimal QRS pattern projection for a single-lead ECG biometric modality is found in the frontal plane sector (60° to 0°), with the best (Test-AUC/TVR) for lead II (0.941/86.8%) and a slight accuracy drop for -aVR (-0.017/-1.4%) and I (-0.01/-1.5%). Chest ECG leads show degrading accuracy from V1 (0.885/80.6%) to V6 (0.799/71.8%). Multi-lead ECG improves verification: 6 chest leads (0.97/90.9%), 6 limb leads (0.986/94.3%), 12 leads (0.995/97.5%). The QRS pattern matching model shows stable performance for verification of 10 to 230 individuals, with insignificant degradation of TVR in women (1.2-3.6%), adults ≥70 years (3.7%), younger subjects <40 years (1.9%), HR <60 bpm (1.2%), and HR >90 bpm (3.9%), and no degradation for HR change (0 to >20 bpm).
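
    The EER and TVR reported above can be computed from verification scores by scanning thresholds until the false rejection and false acceptance rates meet, as in this sketch (the Gaussian score distributions stand in for real LDA outputs):

        import numpy as np

        def eer_and_tvr(genuine_scores, impostor_scores):
            """Equal error rate from verification scores (higher = more similar).

            Scans candidate thresholds; the EER is where the false rejection
            rate of genuine attempts meets the false acceptance rate of
            impostor attempts. TVR is reported as 100 - EER, as above.
            """
            thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
            best = (1.0, 0.5)
            for t in thresholds:
                frr = np.mean(genuine_scores < t)    # genuine rejected
                far = np.mean(impostor_scores >= t)  # impostor accepted
                if abs(frr - far) < best[0]:
                    best = (abs(frr - far), (frr + far) / 2)
            eer = best[1]
            return 100 * eer, 100 * (1 - eer)

        rng = np.random.default_rng(1)
        genuine = rng.normal(0.8, 0.10, 500)    # same-subject comparison scores
        impostor = rng.normal(0.5, 0.15, 5000)  # cross-subject comparison scores
        print("EER %.1f%%, TVR %.1f%%" % eer_and_tvr(genuine, impostor))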

  14. Progress on 58 m2 Passive Resonant Ring Laser Gyroscope,

    DTIC Science & Technology

    Pad; design of the optical-mechanical hardware to input the laser to the ring; investigations to insure against ZERODUR bar buckling associated with the...ring evacuation force; verification of ZERODUR physical properties which are key to this application, e.g. compressibility resulting from the usual

  15. Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model.

    PubMed

    Stocco, Andrea; Murray, Nicole L; Yamasaki, Brianna L; Renno, Taylor J; Nguyen, Jimmy; Prat, Chantel S

    2017-07-01

    Cognitive control is thought to be made possible by the activity of the prefrontal cortex, which selectively uses task-specific representations to bias the selection of task-appropriate responses over more automated, but inappropriate, ones. Recent models have suggested, however, that prefrontal representations are in turn controlled by the basal ganglia. In particular, neurophysiological considerations suggest that the basal ganglia's indirect pathway plays a pivotal role in preventing irrelevant information from being incorporated into a task, thus reducing response interference due to the processing of inappropriate stimulus dimensions. Here, we test this hypothesis by showing that individual differences in a non-verbal cognitive control task (the Simon task) are correlated with performance on a decision-making task (the Probabilistic Stimulus Selection task) that tracks the contribution of the indirect pathway. Specifically, the greater the effect of the indirect pathway, the smaller were the behavioral costs associated with suppressing interference in incongruent trials. Additionally, this correlation was driven by individual differences in incongruent trials only (with little effect on congruent ones) and was specific to the indirect pathway (with almost no correlation with the effect of the direct pathway). Finally, we show that this pattern of results is precisely what is predicted when the competitive dynamics of the basal ganglia are added to the selective attention component of a simple model of the Simon task, demonstrating that our experimental results can be fully explained by our initial hypothesis. Published by Elsevier B.V.

  16. A job analysis of care helpers

    PubMed Central

    Choi, Kyung-Sook; Jeong, Seungeun; Kim, Seulgee; Park, Hyeung-Keun; Seok, Jae Eun

    2012-01-01

    The aim of this study was to examine the roles of care helpers through job analysis. To do this, this study used the Developing A Curriculum Method (DACUM) to classify job content, and a multi-dimensional study design was applied to identify roles and create a job description by looking into the appropriateness, significance, frequency, and difficulty of job content as identified through workshops and cross-sectional surveys conducted for appropriateness verification. A total of 418 care helpers working in nursing facilities and community senior service facilities across the country were surveyed. The collected data were analyzed using PASW 18.0 software. Six duties and 18 tasks were identified based on the job model. Most tasks were found to be "important tasks", scoring 4.0 points or above. Physical care duties, elimination care, position changing and movement assistance, feeding assistance, and safety care were identified as high-frequency tasks. The most difficult tasks were emergency prevention, early detection, and speedy reporting. In summary, the job of care helpers is to provide physical, emotional, housekeeping, and daily activity assistance to elderly patients who have problems independently undertaking daily activities due to physical or mental causes, in long-term care facilities or at the client's home. The results of this study suggest a task-focused revision of the current standard teaching materials authorized by the Ministry of Health and Welfare: optimizing existing content, supplementing content that was identified as task elements but is not yet included, and fully reflecting the actual frequency and difficulty of tasks. PMID:22323929

  17. Heralded quantum steering over a high-loss channel

    PubMed Central

    Weston, Morgan M.; Slussarenko, Sergei; Chrzanowski, Helen M.; Wollmann, Sabine; Shalm, Lynden K.; Verma, Varun B.; Allman, Michael S.; Nam, Sae Woo; Pryde, Geoff J.

    2018-01-01

    Entanglement is the key resource for many long-range quantum information tasks, including secure communication and fundamental tests of quantum physics. These tasks require robust verification of shared entanglement, but performing it over long distances is presently technologically intractable because the loss through an optical fiber or free-space channel opens up a detection loophole. We design and experimentally demonstrate a scheme that verifies entanglement in the presence of at least 14.8 ± 0.1 dB of added loss, equivalent to approximately 80 km of telecommunication fiber. Our protocol relies on entanglement swapping to herald the presence of a photon after the lossy channel, enabling event-ready implementation of quantum steering. This result overcomes the key barrier in device-independent communication under realistic high-loss scenarios and in the realization of a quantum repeater. PMID:29322093

  18. Heralded quantum steering over a high-loss channel.

    PubMed

    Weston, Morgan M; Slussarenko, Sergei; Chrzanowski, Helen M; Wollmann, Sabine; Shalm, Lynden K; Verma, Varun B; Allman, Michael S; Nam, Sae Woo; Pryde, Geoff J

    2018-01-01

    Entanglement is the key resource for many long-range quantum information tasks, including secure communication and fundamental tests of quantum physics. These tasks require robust verification of shared entanglement, but performing it over long distances is presently technologically intractable because the loss through an optical fiber or free-space channel opens up a detection loophole. We design and experimentally demonstrate a scheme that verifies entanglement in the presence of at least 14.8 ± 0.1 dB of added loss, equivalent to approximately 80 km of telecommunication fiber. Our protocol relies on entanglement swapping to herald the presence of a photon after the lossy channel, enabling event-ready implementation of quantum steering. This result overcomes the key barrier in device-independent communication under realistic high-loss scenarios and in the realization of a quantum repeater.

  19. Verification of Gyrokinetic codes: theoretical background and applications

    NASA Astrophysics Data System (ADS)

    Tronko, Natalia

    2016-10-01

    In fusion plasmas the strong magnetic field allows the fast gyromotion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and a gain in computational time. Nowadays, gyrokinetic (GK) codes play a major role in the understanding of the development and saturation of turbulence and in the prediction of the consequent transport. We present a new and generic theoretical framework and specific numerical applications to test the validity and the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which makes this approach pioneering. At the analytical level, the main novelty consists of using advanced mathematical tools, such as the variational formulation of dynamics, to systematize the basic equations of GK codes and access the limits of their applicability. Indirect verification of the numerical scheme is proposed via a benchmark process. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC) and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations using the generic variational formulation. Then, we derive and place the models implemented in ORB5 and GENE inside this hierarchy. At the computational level, detailed verification of global electromagnetic test cases based on the CYCLONE benchmark is considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.

  20. NBS (National Bureau of Standards): Materials measurements

    NASA Technical Reports Server (NTRS)

    Manning, J. R.

    1985-01-01

    NBS work for NASA in support of NASA's Microgravity Science and Applications Program under NASA Government Order H-27954B (Properties of Electronic Materials) covering the period April 1, 1984 to March 31, 1985 is described. The work has been carried out in three independent tasks: Task 1--Surface Tensions and Their Variations with Temperature and Impurities; Task 2--Convection during Unidirectional Solidification; Task 3--Measurement of High Temperature Thermodynamic Properties. The results for each task are given separately in the body of the report.

  1. 38 CFR 74.1 - What definitions are important for VetBiz Vendor Information Pages (VIP) Verification Program?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... venture for joint profit, for which purpose they combine their efforts, property, money, skill, or..., Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) VETERANS SMALL BUSINESS...'s Office of Small and Disadvantaged Business Utilization. The CVE helps veterans interested in...

  2. 38 CFR 74.1 - What definitions are important for VetBiz Vendor Information Pages (VIP) Verification Program?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... venture for joint profit, for which purpose they combine their efforts, property, money, skill, or..., Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) VETERANS SMALL BUSINESS...'s Office of Small and Disadvantaged Business Utilization. The CVE helps veterans interested in...

  3. Field verification of KDOT's Superpave mixture properties to be used as inputs in the NCHRP mechanistic-empirical pavement design guide.

    DOT National Transportation Integrated Search

    2009-01-01

    In the Mechanistic-Empirical Pavement Design Guide (M-EPDG), prediction of flexible pavement response and performance needs an input of dynamic modulus of hot-mix asphalt (HMA) at all three levels of hierarchical inputs. This study was intended to ...

  4. Sodern recent development in the design and verification of passive polarization scramblers for space applications

    NASA Astrophysics Data System (ADS)

    Richert, M.; Dubroca, G.; Genestier, D.; Ravel, K.; Forget, M.; Caron, J.; Bezy, J. L.

    2017-09-01

    For an imaging spectrometer, the changing polarization properties of the incoming beam are a performance-limiting factor. Indeed, while it is impossible to know in advance what the polarization state of the beam will be, the efficiency of the grating varies with it.

  5. Experimental verification of the influence of time-dependent material properties on long-term bridge characteristics.

    DOT National Transportation Integrated Search

    2006-08-01

    Post-tensioned cast-in-place box girder bridges are commonly used in California. Losses in tension in the steel prestressing tendons used in these bridges occur over time due to creep and shrinkage of concrete and relaxation of the tendons. The u...

  6. The surface drifter program for real time and off-line validation of ocean forecasts and reanalyses

    NASA Astrophysics Data System (ADS)

    Hernandez, Fabrice; Regnier, Charly; Drévillon, Marie

    2017-04-01

    As part of the Global Ocean Observing System, the Global Drifter Program (GDP) comprises an array of about 1250 drifting buoys spread over the global ocean that provide operational, near-real-time surface velocity, sea surface temperature (SST), and sea level pressure observations. This information is used mainly for numerical weather forecasting, research, and in-situ calibration/verification of satellite observations. Since 2013, the drifting buoy SST measurements have been used for near-real-time assessment of global forecasting systems from Canada, France, the UK, the USA, and Australia within the GODAE OceanView Intercomparison and Validation Task. For most of these operational systems, these data are not assimilated and therefore offer an independent observational assessment. This approach mimics the validation performed for SST satellite products. More recently, validation procedures have been proposed in order to assess the surface dynamics of Mercator Océan global and regional forecasts and reanalyses. Velocities deduced from drifter trajectories are used in two ways. First, in the Eulerian approach, buoy and ocean model velocity values are compared at the positions of drifters; statistics computed from the discrepancies provide an evaluation of the reliability of the ocean model's surface dynamics. Second, in the Lagrangian approach, drifter trajectories are simulated at each location of the real drifter trajectory using the ocean model velocity fields; then, on a daily basis, real and simulated drifter trajectories are compared by analyzing the spread after one day, two days, and so on. The cumulated statistics over specific geographical boxes are evaluated in terms of the dispersion properties of the "real ocean" as captured by drifters versus those of the ocean model. This approach allows better evaluation of forecast skill for surface dispersion applications, such as search and rescue, oil spill forecasting, drift of other objects or contaminants, and larvae dispersion. These Eulerian and Lagrangian validation approaches can be applied to real-time or offline assessment of ocean velocity products. In real time, the main limitation is our capability to detect the loss of a drifter's drogue, which causes erroneous assessments. Several detection methods are used, based on comparison with wind entrainment effects or with other velocity estimates such as those from satellite altimetry. These Eulerian and Lagrangian surface velocity validation methods are planned to be adopted by the GODAE OceanView operational community in order to offer independent verification of surface current forecasts.
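
    A minimal sketch of the Lagrangian comparison: given daily positions of a real drifter and of a virtual drifter advected by the model velocity field from the same start point, the separation after one day, two days, and so on is just a great-circle distance; the positions below are toy values.

        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points in kilometers."""
            r = 6371.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = (math.sin(dp / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
            return 2 * r * math.asin(math.sqrt(a))

        # Daily (lat, lon) positions of a real drifter and a virtual drifter
        real = [(45.0, -30.0), (45.1, -29.8), (45.3, -29.5)]
        simulated = [(45.0, -30.0), (45.05, -29.9), (45.15, -29.75)]

        for day, (r_pos, s_pos) in enumerate(zip(real, simulated)):
            sep = haversine_km(*r_pos, *s_pos)
            print(f"day {day}: separation {sep:.1f} km")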

  7. Determination and Control of Optical and X-Ray Wave Fronts

    NASA Technical Reports Server (NTRS)

    Kim, Young K.

    1997-01-01

    A successful design of a space-based or ground optical system requires an iterative procedure which includes the kinematics and dynamics of the system in its operating environment, control synthesis, and verification. To facilitate the task of designing optical wave front control systems being developed at NASA/MSFC, a multidisciplinary dynamics and control tool has been developed utilizing TREETOPS (a multi-body dynamics and control simulation), NASTRAN, and MATLAB. Dynamics and control models of STABLE and ARIS were developed for TREETOPS simulation, and their simulation results are documented in this report.

  8. Human operator performance of remotely controlled tasks: Teleoperator research conducted at NASA's George C. Marshall Space Flight Center. Executive summary

    NASA Technical Reports Server (NTRS)

    Shields, N., Jr.; Piccione, F.; Kirkpatrick, M., III; Malone, T. B.

    1982-01-01

    The combination of human and machine capabilities into an integrated engineering system, which is a complex and interactive interdisciplinary undertaking, is discussed. Human-controlled remote systems, referred to as teleoperators, are reviewed, and the human factors requirements for remotely manned systems are identified. The data were developed in three principal teleoperator laboratories; the visual, manipulator, and mobility laboratories are described. Three major sections are identified: (1) remote system components; (2) human operator considerations; and (3) teleoperator system simulation and concept verification.

  9. Advanced rotorcraft technology: Task force report

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The technological needs and opportunities related to future civil and military rotorcraft were determined and a program plan for NASA research which was responsive to the needs and opportunities was prepared. In general, the program plan places the primary emphasis on design methodology where the development and verification of analytical methods is built upon a sound data base. The four advanced rotorcraft technology elements identified are aerodynamics and structures, flight control and avionic systems, propulsion, and vehicle configurations. Estimates of the total funding levels that would be required to support the proposed program plan are included.

  10. Experimental Blind Quantum Computing for a Classical Client.

    PubMed

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-04

    To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.

  11. Design evolution of the orbiter reaction control subsystem

    NASA Technical Reports Server (NTRS)

    Taeber, R. J.; Karakulko, W.; Belvins, D.; Hohmann, C.; Henderson, J.

    1985-01-01

    The challenges of space shuttle orbiter reaction control subsystem development began with selection of the propellant for the subsystem. Various concepts were evaluated before the current Earth storable, bipropellant combination was selected. Once that task was accomplished, additional challenges of designing the system to satisfy the wide range of requirements dictated by operating environments, reusability, and long life were met. Verification of system adequacy was achieved by means of a combination of analysis and test. The studies, the design efforts, and the test and analysis techniques employed in meeting the challenges are described.

  12. Experimental Blind Quantum Computing for a Classical Client

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C.; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-01

    To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.

  13. Large area sheet task: Advanced Dendritic Web Growth Development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.

    1981-01-01

    A melt level control system was implemented to provide stepless silicon feed rates from zero to rates exactly matching the silicon consumed during web growth. Bench tests of the unit were successfully completed and the system mounted in a web furnace for operational verification. Tests of long term temperature drift correction techniques were made; web width monitoring seems most appropriate for feedback purposes. A system to program the initiation of the web growth cycle was successfully tested. A low cost temperature controller was tested which functions as well as units four times as expensive.

  14. Property Values Associated with the Failure of Individual Links in a System with Multiple Weak and Strong Links.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional on time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.
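
    Item (iv), the sampling-based approximation, can be sketched as follows under an assumed monotone loading curve and an assumed normal distribution of failure thresholds (both hypothetical); the empirical CDF of the property at failure is then read off the sorted samples.

        import numpy as np

        rng = np.random.default_rng(2)

        def property_at_time(t):
            # Environment-driven link property (e.g., temperature) vs. time;
            # monotone, so each failure threshold maps to a unique time.
            return 20.0 + 5.0 * t  # hypothetical loading curve

        # Aleatory uncertainty: each sampled link fails once the property
        # reaches its own random threshold (assumed normal distribution).
        n = 10_000
        thresholds = rng.normal(loc=120.0, scale=10.0, size=n)
        failure_times = (thresholds - 20.0) / 5.0
        props_at_failure = property_at_time(failure_times)

        # Sampling-based approximation of the CDF of property at failure
        xs = np.sort(props_at_failure)
        cdf = np.arange(1, n + 1) / n
        for q in (0.05, 0.5, 0.95):
            print(f"{q:.0%} quantile: {np.interp(q, cdf, xs):.1f}")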

  15. Interference among the Processing of Facial Emotion, Face Race, and Face Gender.

    PubMed

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).

  16. Interference among the Processing of Facial Emotion, Face Race, and Face Gender

    PubMed Central

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621

  17. On the Formal Verification of Conflict Detection Algorithms

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Butler, Ricky W.; Carreno, Victor A.; Dowek, Gilles

    2001-01-01

    Safety assessment of new air traffic management systems is a main issue for civil aviation authorities. Standard techniques such as testing and simulation have serious limitations in new systems that are significantly more autonomous than the older ones. In this paper, we present an innovative approach, based on formal verification, for establishing the correctness of conflict detection systems. Fundamental to our approach is the concept of trajectory, which is a continuous path in the x-y plane constrained by physical laws and operational requirements. From the model of trajectories, we extract, and formally prove, high-level properties that can serve as a framework to analyze conflict scenarios. We use the Airborne Information for Lateral Spacing (AILS) alerting algorithm as a case study of our approach.
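
    The core predicate in such algorithms can be sketched for straight-line trajectories: a conflict is declared when the minimum separation within the lookahead time falls below the protected radius. The 5 nmi radius and 300 s horizon below are illustrative parameters, not those of AILS.

        import numpy as np

        def min_separation(p1, v1, p2, v2, horizon):
            """Minimum horizontal separation between two linear trajectories
            p(t) = p + v*t over t in [0, horizon], and the time it occurs."""
            dp = np.asarray(p1, float) - np.asarray(p2, float)
            dv = np.asarray(v1, float) - np.asarray(v2, float)
            denom = dv @ dv
            t_star = 0.0 if denom == 0 else np.clip(-(dp @ dv) / denom, 0.0, horizon)
            return float(np.linalg.norm(dp + t_star * dv)), float(t_star)

        def conflict(p1, v1, p2, v2, horizon=300.0, protected_radius=9260.0):
            """Conflict iff the trajectories come within the protected zone
            (9260 m ~ 5 nmi horizontal separation) within the lookahead time."""
            d, t = min_separation(p1, v1, p2, v2, horizon)
            return d < protected_radius, d, t

        # Two aircraft on converging straight-line tracks (meters, m/s)
        print(conflict((0, 0), (250, 0), (60000, 8000), (-200, -20)))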

  18. Probabilistic verification of cloud fraction from three different products with CALIPSO

    NASA Astrophysics Data System (ADS)

    Jung, B. J.; Descombes, G.; Snyder, C.

    2017-12-01

    In this study, we present how Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) can be used for probabilistic verification of cloud fraction, and apply this probabilistic approach to three cloud fraction products: a) The Air Force Weather (AFW) World Wide Merged Cloud Analysis (WWMCA), b) Satellite Cloud Observations and Radiative Property retrieval Systems (SatCORPS) from NASA Langley Research Center, and c) Multi-sensor Advection Diffusion nowCast (MADCast) from NCAR. Although they differ in their details, both WWMCA and SatCORPS retrieve cloud fraction from satellite observations, mainly of infrared radiances. MADCast utilizes in addition a short-range forecast of cloud fraction (provided by the Model for Prediction Across Scales, assuming cloud fraction is advected as a tracer) and a column-by-column particle filter implemented within the Gridpoint Statistical Interpolation (GSI) data-assimilation system. The probabilistic verification considers the retrieved or analyzed cloud fractions as predicting the probability of cloud at any location within a grid cell and the 5-km vertical feature mask (VFM) from CALIPSO level-2 products as a point observation of cloud.
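
    A minimal sketch of the probabilistic comparison: each grid-cell cloud fraction is read as a probability of cloud and scored against the binary CALIPSO vertical feature mask with the Brier score, here with toy values and a climatological reference for a skill score.

        import numpy as np

        def brier_score(p_cloud, obs_binary):
            """Brier score: mean squared difference between the forecast
            probability (cloud fraction read as probability of cloud) and the
            binary observation (1 = CALIPSO VFM reports cloud at that point)."""
            p = np.asarray(p_cloud, float)
            o = np.asarray(obs_binary, float)
            return float(np.mean((p - o) ** 2))

        # Toy matched samples along an orbit track
        analysis = np.array([0.9, 0.7, 0.1, 0.4, 0.0, 0.8])
        vfm_obs = np.array([1, 1, 0, 0, 0, 1])
        bs = brier_score(analysis, vfm_obs)
        # Climatological constant forecast as the skill-score reference
        clim = np.full_like(analysis, vfm_obs.mean())
        bs_clim = brier_score(clim, vfm_obs)
        print("BS =", bs, "BS_clim =", bs_clim, "BSS =", 1 - bs / bs_clim)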

  19. Authentication Based on Pole-zero Models of Signature Velocity

    PubMed Central

    Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad

    2013-01-01

    With the increase of communication and financial transactions through the Internet, on-line signature verification is an accepted biometric technology for access control and plays a significant role in authentication and authorization in modern society. Therefore, fast and precise algorithms for signature verification are very attractive. The goal of this paper is to model the velocity signal, whose pattern and properties are stable for each person. Using pole-zero models based on the discrete cosine transform, a precise modeling method is proposed, and features are then extracted from strokes. Using linear, Parzen-window, and support vector machine classifiers, the signature verification technique was tested on a large number of authentic and forged signatures and demonstrated good potential. The signatures were collected from three different databases: a proprietary database and the SVC2004 and Sabanci University signature (SUSIG) benchmark databases. Experimental results based on the Persian, SVC2004, and SUSIG databases show that our method achieves equal error rates of 5.91%, 5.62%, and 3.91% on skilled forgeries, respectively. PMID:24696797
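
    The feature-extraction step can be sketched as follows: differentiate the pen trajectory to get a velocity signal and keep its low-order DCT coefficients. This omits the paper's pole-zero modeling stage, and the toy trajectory and coefficient count are assumptions.

        import numpy as np
        from scipy.fft import dct

        def velocity_dct_features(x, y, n_coeffs=16):
            """Compact features for an on-line signature: differentiate the
            pen trajectory to get a velocity magnitude signal, then keep the
            low-order DCT coefficients summarizing its stable shape."""
            vx, vy = np.diff(x), np.diff(y)
            speed = np.hypot(vx, vy)
            speed = (speed - speed.mean()) / (speed.std() + 1e-12)  # normalize
            return dct(speed, norm="ortho")[:n_coeffs]

        # Toy signature trajectory sampled at a constant rate
        t = np.linspace(0, 2 * np.pi, 200)
        x = np.cos(3 * t) + 0.1 * np.sin(11 * t)
        y = np.sin(2 * t)
        print(velocity_dct_features(x, y)[:5])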

  20. New Aspects of Probabilistic Forecast Verification Using Information Theory

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and a consistent generalization to continuous forecasts is then motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. In particular, their decompositions into measures quantifying the reliability, resolution, and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can likewise be generalized to a ranked version (RIGN). Here, the IGN, its generalizations, and its decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
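
    For ensemble-generated categorical forecasts, the Ignorance Score is simply the negative base-2 log of the probability assigned to the observed category; the sketch below estimates that probability from member counts, with Laplace smoothing as one (assumed) way to avoid infinite scores.

        import numpy as np

        def ignorance_score(member_counts, observed_category, n_members):
            """IGN = -log2 p(observed category), with ensemble probabilities
            estimated from member counts. Laplace smoothing (an assumed
            choice) avoids infinite scores when no member hits the observed
            category."""
            k = len(member_counts)
            p = (np.asarray(member_counts) + 1.0) / (n_members + k)
            return -np.log2(p[observed_category])

        # 20-member ensemble sorted into 3 categories (dry/light/heavy rain)
        print(ignorance_score([2, 13, 5], observed_category=1, n_members=20))
        print(ignorance_score([2, 13, 5], observed_category=2, n_members=20))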

  1. Modeling interfacial fracture in Sierra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang

    2013-09-01

    This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using results from asymmetric double-cantilever beam (ADCB) and end-notched flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to an over 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
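
    Cohesive zone models of the kind used here relate traction across the interface to the opening separation; a common bilinear traction-separation law is sketched below with illustrative parameters (the actual SIERRA/SM law and values may differ).

        def bilinear_traction(delta, delta0=0.01, delta_f=0.1, t_max=50.0):
            """Bilinear cohesive (traction-separation) law: linear loading up
            to peak traction t_max at separation delta0, then linear softening
            to zero traction at the failure separation delta_f."""
            if delta <= 0.0:
                return 0.0
            if delta <= delta0:
                return t_max * delta / delta0  # elastic branch
            if delta <= delta_f:
                return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
            return 0.0  # fully failed, crack faces carry no traction

        for d in (0.0, 0.005, 0.01, 0.05, 0.1, 0.2):
            print(f"separation {d:.3f} -> traction {bilinear_traction(d):6.1f}")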

  2. Use of Human Modeling Simulation Software in the Task Analysis of the Environmental Control and Life Support System Component Installation Procedures

    NASA Technical Reports Server (NTRS)

    Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Virtual reality and simulation applications are becoming widespread in human task analysis. These programs have many benefits for the Human Factors Engineering field. Not only do creating and using virtual environments for human engineering analyses save money and time, this approach also promotes user experimentation and provides increased quality of analyses. This paper explains the human engineering task analysis performed on the Environmental Control and Life Support System (ECLSS) space station rack and its Distillation Assembly (DA) subsystem using EAI's human modeling simulation software, Jack. When installed on the International Space Station (ISS), ECLSS will provide the life and environment support needed to adequately sustain crew life. The DA is an Orbital Replaceable Unit (ORU) that provides means of wastewater (primarily urine from flight crew and experimental animals) reclamation. Jack was used to create a model of the weightless environment of the ISS Node 3, where the ECLSS is housed. Computer aided drawings of the ECLSS rack and DA system were also brought into the environment. Anthropometric models of a 95th percentile male and 5th percentile female were used to examine the human interfaces encountered during various ECLSS and DA tasks. The results of the task analyses were used in suggesting modifications to hardware and crew task procedures to improve accessibility, conserve crew time, and add convenience for the crew. This paper will address some of those suggested modifications and the method of presenting final analyses for requirements verification.

  3. Environment learning using descriptions or navigation: The involvement of working memory in young and older adults.

    PubMed

    Meneghetti, Chiara; Borella, Erika; Carbone, Elena; Martinelli, Massimiliano; De Beni, Rossana

    2016-05-01

    This study examined age-related differences between young and older adults in the involvement of verbal and visuo-spatial components of working memory (WM) when paths are learned from verbal and visuo-spatial inputs. A sample of 60 young adults (20-30 years old) and 58 older adults (60-75 years old) learned two paths from the person's point of view, one displayed in the form of a video showing the path, the other presenting the path in a verbal description. During the learning phase, participants concurrently performed a verbal task (articulatory suppression, AS group), or a visuo-spatial task (spatial tapping, ST group), or no secondary task (control, C group). After learning each path, participants completed tasks that involved the following: (1) recalling the sequential order and the location of landmarks; and (2) judging spatial sentences as true or false (verification test). The results showed that young adults outperformed older adults in all recall tasks. In both age groups performance in all types of task was worse in the AS and ST groups than in the C group, irrespective of the type of input. Overall, these findings suggest that verbal and visuo-spatial components of WM underpin the processing of environmental information in both young and older adults. The results are discussed in terms of age-related differences and according to the spatial cognition framework. © 2015 The British Psychological Society.

  4. Motor demand-dependent activation of ipsilateral motor cortex.

    PubMed

    Buetefisch, Cathrin M; Revill, Kate Pirog; Shuster, Linda; Hines, Benjamin; Parsons, Michael

    2014-08-15

    The role of ipsilateral primary motor cortex (M1) in hand motor control during complex task performance remains controversial. Bilateral M1 activation is inconsistently observed in functional (f)MRI studies of unilateral hand performance. Two factors limit the interpretation of these data. As the motor tasks differ qualitatively in these studies, it is conceivable that M1 contributions differ with the demand on skillfulness. Second, most studies lack the verification of a strictly unilateral execution of the motor task during the acquisition of imaging data. Here, we use fMRI to determine whether ipsilateral M1 activity depends on the demand for precision in a pointing task where precision varied quantitatively while movement trajectories remained equal. Thirteen healthy participants used an MRI-compatible joystick to point to targets of four different sizes in a block design. A clustered acquisition technique allowed simultaneous fMRI/EMG data collection and confirmed that movements were strictly unilateral. Accuracy of performance increased with target size. Overall, the pointing task revealed activation in contralateral and ipsilateral M1, extending into contralateral somatosensory and parietal areas. Target size-dependent activation differences were found in ipsilateral M1 extending into the temporal/parietal junction, where activation increased with increasing demand on accuracy. The results suggest that ipsilateral M1 is active during the execution of a unilateral motor task and that its activity is modulated by the demand on precision. Copyright © 2014 the American Physiological Society.

  5. Automation of Presentation Record Production Based on Rich-Media Technology Using SNT Petri Nets Theory.

    PubMed

    Martiník, Ivo

    2015-01-01

    Rich-media describes a broad range of digital interactive media that is increasingly used on the Internet and also in the support of education. Last year, a special pilot audiovisual lecture room was built as part of the MERLINGO (MEdia-rich Repository of LearnING Objects) project. It contains all the elements of a modern lecture room intended for the implementation of presentation recordings based on rich-media technologies and their publication online or on demand, with access to all elements in an automated mode that includes automatic editing. Property-preserving Petri net process algebras (PPPA) were designed for the specification and verification of Petri net processes. PPPA does not need to verify the composition of Petri net processes because all of its algebraic operators preserve the specified set of properties. These original PPPA are significantly generalized for the newly introduced class of SNT Petri process and agent nets in this paper. The PLACE-SUBST and ASYNC-PROC algebraic operators are defined for this class of Petri nets and their chosen properties are proved. The SNT Petri process and agent net theory was applied extensively in the design, verification, and implementation of the programming system ensuring the pilot audiovisual lecture room functionality.

  6. Automation of Presentation Record Production Based on Rich-Media Technology Using SNT Petri Nets Theory

    PubMed Central

    Martiník, Ivo

    2015-01-01

    Rich-media describes a broad range of digital interactive media that is increasingly used on the Internet and also in the support of education. Last year, a special pilot audiovisual lecture room was built as part of the MERLINGO (MEdia-rich Repository of LearnING Objects) project. It contains all the elements of a modern lecture room intended for the implementation of presentation recordings based on rich-media technologies and their publication online or on demand, with access to all elements in an automated mode that includes automatic editing. Property-preserving Petri net process algebras (PPPA) were designed for the specification and verification of Petri net processes. PPPA does not need to verify the composition of Petri net processes because all of its algebraic operators preserve the specified set of properties. These original PPPA are significantly generalized for the newly introduced class of SNT Petri process and agent nets in this paper. The PLACE-SUBST and ASYNC-PROC algebraic operators are defined for this class of Petri nets and their chosen properties are proved. The SNT Petri process and agent net theory was applied extensively in the design, verification, and implementation of the programming system ensuring the pilot audiovisual lecture room functionality. PMID:26258164

  7. Speaker diarization system on the 2007 NIST rich transcription meeting recognition evaluation

    NASA Astrophysics Data System (ADS)

    Sun, Hanwu; Nwe, Tin Lay; Koh, Eugene Chin Wei; Bin, Ma; Li, Haizhou

    2007-09-01

    This paper presents a speaker diarization system developed at the Institute for Infocomm Research (I2R) for the NIST Rich Transcription 2007 (RT-07) evaluation task. We describe in detail our primary approaches for speaker diarization under the Multiple Distant Microphone (MDM) conditions in the conference room scenario. Our proposed system consists of six modules: 1) a normalized least-mean-square (NLMS) adaptive filter for speaker direction estimation via Time Difference of Arrival (TDOA); 2) initial speaker clustering via a two-stage TDOA histogram distribution quantization approach; 3) multiple-microphone speaker data alignment via GCC-PHAT Time Delay Estimation (TDE) among all the distant microphone channel signals; 4) a speaker clustering algorithm based on a GMM modeling approach; 5) non-speech removal via a speech/non-speech verification mechanism; and 6) silence removal via a "Double-Layer Windowing" (DLW) method. We achieve an error rate of 31.02% on the 2006 Spring (RT-06s) MDM evaluation task and a competitive overall error rate of 15.32% on the NIST Rich Transcription 2007 (RT-07) MDM evaluation task.
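
    Module 3's GCC-PHAT time-delay estimate whitens the cross-power spectrum before inverse-transforming, which sharpens the correlation peak; a self-contained sketch with a synthetic delayed signal:

        import numpy as np

        def gcc_phat(sig, ref, fs, max_tau=None):
            """GCC-PHAT time-delay estimate between two microphone signals."""
            n = len(sig) + len(ref)
            S = np.fft.rfft(sig, n=n)
            R = np.fft.rfft(ref, n=n)
            cross = S * np.conj(R)
            cross /= np.abs(cross) + 1e-12  # PHAT weighting: keep phase only
            cc = np.fft.irfft(cross, n=n)
            max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
            cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
            return (np.argmax(np.abs(cc)) - max_shift) / fs  # delay in seconds

        fs = 16000
        rng = np.random.default_rng(3)
        src = rng.standard_normal(fs)
        delay = 25  # samples (~1.56 ms between microphones)
        mic1 = src + 0.05 * rng.standard_normal(fs)
        mic2 = np.roll(src, delay) + 0.05 * rng.standard_normal(fs)
        print("estimated delay: %.5f s (true %.5f s)"
              % (gcc_phat(mic2, mic1, fs), delay / fs))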

  8. Integrating conceptual knowledge within and across representational modalities.

    PubMed

    McNorgan, Chris; Reid, Jackie; McRae, Ken

    2011-02-01

    Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants' knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1-4 is consistent with a deep integration hierarchy. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Efficient Ada multitasking on a RISC register window architecture

    NASA Technical Reports Server (NTRS)

    Kearns, J. P.; Quammen, D.

    1987-01-01

    This work addresses the problem of reducing context switch overhead on a processor which supports a large register file - a register file much like that which is part of the Berkeley RISC processors and several other emerging architectures (which are not necessarily reduced instruction set machines in the purest sense). Such a reduction in overhead is particularly desirable in a real-time embedded application, in which task-to-task context switch overhead may result in failure to meet crucial deadlines. A storage management technique by which a context switch may be implemented as cheaply as a procedure call is presented. The essence of this technique is the avoidance of the save/restore of registers on the context switch. This is achieved through analysis of the static source text of an Ada tasking program. Information gained during that analysis directs the optimized storage management strategy for that program at run time. A formal verification of the technique in terms of an operational control model and an evaluation of the technique's performance via simulations driven by synthetic Ada program traces are presented.

  10. A training paradigm to enhance motor recovery in contused rats: effects of staircase training.

    PubMed

    Singh, Anita; Murray, Marion; Houle, John D

    2011-01-01

    Ambulating on stairs is an important aspect of daily activities for many individuals with incomplete spinal cord injury (SCI), and little is known about the effect of training for this specific task. The goal of this study was to determine whether staircase ascent training enhances motor recovery in animals with contusion injury. Rats received a midthoracic contusion lesion of moderate severity and were randomly divided into 2 groups, with one group receiving staircase ascent training for up to 8 weeks and the other receiving no training. To assess the direct effect of training, a task-specific staircase climbing test was performed. Open field test (BBB) and gait analysis (CatWalk) assessed overground recovery, and a grid test was used to assess improvement in sensorimotor tasks. Changes in muscle mass of the forelimb and hindlimb muscles were also measured, and the extent of spared white matter was determined for lesion verification and anatomical correlations. Staircase training improved the task-specific performance of ascent. Gait parameters, including base of support, stride length, regularity index (RI), and step sequence, also improved. Overground locomotion and the grid test, both showed a trend of improved performance. Finally, hindlimb muscle mass was maintained with training. Staircase ascent training after incomplete SCI has beneficial effects on task-specific as well as nonspecific motor and sensorimotor activities.

  11. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
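
    The Rank-1 IR used above can be computed from a probe-by-gallery similarity matrix by checking whether each probe's best match carries the correct identity; the score matrix below is synthetic.

        import numpy as np

        def rank1_ir(score_matrix, true_labels, gallery_labels):
            """Rank-1 identification rate: fraction of probes whose
            best-scoring gallery entry has the correct identity
            (higher score = more similar)."""
            best = np.argmax(score_matrix, axis=1)
            return float(np.mean(gallery_labels[best] == true_labels))

        rng = np.random.default_rng(4)
        gallery_labels = np.arange(10)
        true_labels = np.arange(10)
        # Toy similarity scores: genuine comparisons get a boost
        scores = rng.random((10, 10))
        scores[np.arange(10), np.arange(10)] += 0.5
        print("Rank-1 IR:", rank1_ir(scores, true_labels, gallery_labels))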

  12. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers’ temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383

  13. Manipulating Objects and Telling Words: A Study on Concrete and Abstract Words Acquisition

    PubMed Central

    Borghi, Anna M.; Flumini, Andrea; Cimatti, Felice; Marocco, Davide; Scorolli, Claudia

    2011-01-01

    Four experiments (E1–E2–E3–E4) investigated whether different acquisition modalities lead to the emergence of differences typically found between concrete and abstract words, as argued by the words as tools (WAT) proposal. To mimic the acquisition of concrete and abstract concepts, participants either manipulated novel objects or observed groups of objects interacting in novel ways (Training 1). In TEST 1 participants decided whether two elements belonged to the same category. Later they read the category labels (Training 2); labels could be accompanied by an explanation of their meaning. Then participants observed previously seen exemplars and other elements, and were asked which of them could be named with a given label (TEST 2). Across the experiments, it was more difficult to form abstract than concrete categories (TEST 1); even when adding labels, abstract words remained more difficult than concrete words (TEST 2). TEST 3 differed across the experiments. In E1 participants performed a feature production task. Crucially, the associations produced with the novel words reflected the pattern evoked by existing concrete and abstract words, as the first evoked more perceptual properties. In E2–E3–E4, TEST 3 consisted of a color verification task with manual/verbal (keyboard–microphone) responses. Results showed the microphone use to have an advantage over keyboard use for abstract words, especially in the explanation condition. This supports WAT: due to their acquisition modality, concrete words evoke more manual information; abstract words elicit more verbal information. This advantage was not present when linguistic information contrasted with perceptual one. Implications for theories and computational models of language grounding are discussed. PMID:21716582

  14. Structural Verification of the Space Shuttle's External Tank Super LightWeight Design: A Lesson in Innovation

    NASA Technical Reports Server (NTRS)

    Otte, Neil

    1997-01-01

    The Super LightWeight Tank (SLWT) team was tasked with a daunting challenge from the outset: boost the payload capability of the Shuttle System by safely removing 7500 lbs. from the existing 65,400 lb. External Tank (ET). Tools they had to work with included a promising new Aluminum Lithium alloy, the concept of a more efficient structural configuration for the Liquid Hydrogen (LH2) tank, and a highly successful, mature Light Weight Tank (LWT) program. The 44 month schedule which the SLWT team was given for the task was ambitious by any measure. During this time the team had to not only design, build, and verify the new tank, but they also had to move a material from the early stages of development to maturity. The aluminum lithium alloy showed great promise, with an approximately 29% increase in yield strength, 15% increase in ultimate strength, 5 deg/O increase in modulus and 5 deg/O decrease in density when compared to the current 2219 alloy. But processes had to be developed and brought under control, manufacturing techniques perfected, properties characterized, and design allowable generated. Because of the schedule constraint, this material development activity had to occur in parallel with design and manufacturing. Initial design was performed using design allowable believed to be achievable with the Aluminum Lithium alloy system, but based on limited test data. Preliminary structural development tests were performed with material still in the process of iteration. This parallel path approach posed obvious challenges and risks, but also allowed a unique opportunity for interaction between the structures and materials disciplines in the formulation of the material.

  15. Visual Search in the Real World: Color Vision Deficiency Affects Peripheral Guidance, but Leaves Foveal Verification Largely Unaffected

    PubMed Central

    Kugler, Günter; 't Hart, Bernard M.; Kohlbecher, Stefan; Bartl, Klaus; Schumann, Frank; Einhäuser, Wolfgang; Schneider, Erich

    2015-01-01

    Background: People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color perception and behavioral consequences might be found in the gaze behavior of people with color vision deficiency. Methods: A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. All participants wore a mobile eye-tracking-driven camera with a high foveal image resolution (EyeSeeCam). Search performance as well as fixations of objects of different colors were examined. Results: Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, participants with color vision deficiencies exhibited strongly degraded performance. This was closely matched by the number of fixations on red objects shown by the two groups. Importantly, once they fixated a target, participants with color vision deficiencies exhibited only a few identification errors. Conclusions: In contrast to controls, participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that the impaired guiding is the main influence on search performance, while foveal identification (verification) is largely unaffected by the color vision deficiency. PMID:26733851

  16. Identification and verification of hybridoma-derived monoclonal antibody variable region sequences using recombinant DNA technology and mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    Antibody engineering requires the identification of antigen binding domains or variable regions (VR) unique to each antibody. It is the VR that define the unique antigen binding properties and proper sequence identification is essential for functional evaluation and performance of recombinant antibo...

  17. Modality Switching Cost during Property Verification by 7 Years of Age

    ERIC Educational Resources Information Center

    Ambrosi, Solene; Kalenine, Solene; Blaye, Agnes; Bonthoux, Francoise

    2011-01-01

    Recent studies in neuroimagery and cognitive psychology support the view of sensory-motor based knowledge: when processing an object concept, neural systems would re-enact previous experiences with this object. In this experiment, a conceptual switching cost paradigm derived from Pecher, Zeelenberg, and Barsalou (2003, 2004) was used to…

  18. On the verification of intransitive noninterference in multilevel security.

    PubMed

    Ben Hadj-Alouane, Nejib; Lafrance, Stéphane; Lin, Feng; Mullins, John; Yeddes, Mohamed Moez

    2005-10-01

    We propose an algorithmic approach to the problem of verification of the property of intransitive noninterference (INI), using tools and concepts of discrete event systems (DES). INI can be used to characterize and solve several important security problems in multilevel security systems. In a previous work, we have established the notion of iP-observability, which precisely captures the property of INI. We have also developed an algorithm for checking iP-observability by indirectly checking P-observability for systems with at most three security levels. In this paper, we generalize the results for systems with any finite number of security levels by developing a direct method for checking iP-observability, based on an insightful observation that the iP function is a left congruence in terms of relations on formal languages. To demonstrate the applicability of our approach, we propose a formal method to detect denial of service vulnerabilities in security protocols based on INI. This method is illustrated using the TCP/IP protocol. The work extends the theory of supervisory control of DES to a new application domain.
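
    The paper's DES-based iP-observability algorithm is beyond a short sketch, but the property it targets rests on the classic intransitive purge (ipurge) in the style of Rushby. The following is a minimal, hypothetical Python illustration of that underlying idea, with an invented three-level policy (High may interfere with a Downgrader, the Downgrader with Low, but High never directly with Low), an invented toy machine, and a brute-force INI check over a finite set of traces; it is not the authors' method.

    ```python
    # Toy intransitive-purge (ipurge) check; policy, actions, and machine are
    # invented. High (H) -> Downgrader (D) -> Low (L), but not H -> L directly.
    INTERFERES = {("H", "H"), ("D", "D"), ("L", "L"), ("H", "D"), ("D", "L")}

    dom_of = {"h_write": "H", "d_release": "D", "l_read": "L"}

    def ipurge(trace, u):
        """Drop actions that cannot influence domain u, even transitively via
        intermediary actions occurring later in the trace."""
        sources, kept = {u}, []
        for a in reversed(trace):
            if any((dom_of[a], v) in INTERFERES for v in sources):
                sources.add(dom_of[a])
                kept.append(a)
        return list(reversed(kept))

    def run(trace):
        """Toy machine: per-domain counters; the downgrader also updates L's view."""
        state = {"H": 0, "D": 0, "L": 0}
        for a in trace:
            state[dom_of[a]] += 1
            if a == "d_release":
                state["L"] += 1
        return state

    def satisfies_ini(traces, domains=("H", "D", "L")):
        """INI holds if each domain's view is unchanged by ipurging its traces."""
        return all(run(t)[u] == run(ipurge(t, u))[u]
                   for t in traces for u in domains)

    print(ipurge(["h_write", "l_read"], "L"))               # ['l_read']: H purged
    print(ipurge(["h_write", "d_release", "l_read"], "L"))  # all kept: D mediates
    print(satisfies_ini([["h_write", "l_read"],
                         ["h_write", "d_release", "l_read"]]))
    ```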

  19. A Practical Approach to Implementing Real-Time Semantics

    NASA Technical Reports Server (NTRS)

    Luettgen, Gerald; Bhat, Girish; Cleaveland, Rance

    1999-01-01

    This paper investigates implementations of process algebras which are suitable for modeling concurrent real-time systems. It suggests an approach for efficiently implementing real-time semantics using dynamic priorities. For this purpose a process algebra with dynamic priority is defined, whose semantics corresponds one-to-one to traditional real-time semantics. The advantage of the dynamic-priority approach is that it drastically reduces the state-space sizes of the systems in question while preserving all properties of their functional and real-time behavior. The utility of the technique is demonstrated by a case study dealing with the formal modeling and verification of the SCSI-2 bus protocol. The case study is carried out in the Concurrency Workbench of North Carolina, an automated verification tool in which the process algebra with dynamic priority is implemented. It turns out that the state space of the bus-protocol model is about an order of magnitude smaller than the one resulting from real-time semantics. The accuracy of the model is proved by applying model checking to verify several mandatory properties of the bus protocol.

  20. Hafnium transistor design for neural interfacing.

    PubMed

    Parent, David W; Basham, Eric J

    2008-01-01

    A design methodology is presented that uses the EKV model and the gm/ID biasing technique to design hafnium oxide field effect transistors that are suitable for neural recording circuitry. The DC gain of a common source amplifier is correlated to the structural properties of a Field Effect Transistor (FET) and a Metal Insulator Semiconductor (MIS) capacitor. This approach allows a transistor designer to use a design flow that starts with simple and intuitive 1-D equations for gain that can be verified in 1-D MIS capacitor TCAD simulations, before final TCAD process verification of transistor properties. The DC gain of a common source amplifier is optimized by using fast 1-D simulations and using slower, complex 2-D simulations only for verification. The 1-D equations are used to show that the increased dielectric constant of hafnium oxide allows a higher DC gain for a given oxide thickness. An additional benefit is that the MIS capacitor can be employed to test additional performance parameters important to an open-gate transistor, such as dielectric stability and ionic penetration.
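
    The dielectric-constant argument can be made concrete with back-of-the-envelope numbers. The sketch below uses the simple square-law model rather than the paper's EKV model, and every device parameter (mobility, W/L, bias current, channel-length modulation, oxide thickness) is a made-up illustrative value; it only shows why a higher dielectric constant raises the achievable DC gain at a fixed insulator thickness.

    ```python
    # Illustrative square-law estimate; parameters are hypothetical.
    EPS0 = 8.854e-12          # F/m, vacuum permittivity

    def cox(k_diel, t_ox):
        """Gate capacitance per unit area of the MIS stack."""
        return k_diel * EPS0 / t_ox

    def dc_gain(k_diel, t_ox, mu=0.04, w_over_l=50.0, i_d=10e-6, lam=0.1):
        """|Av| = gm/(lambda*Id) for a common-source stage with a current-source
        load, with gm taken from the saturation square law."""
        gm = (2.0 * mu * cox(k_diel, t_ox) * w_over_l * i_d) ** 0.5
        return gm / (lam * i_d)

    t_ox = 10e-9   # same physical insulator thickness for both dielectrics
    print("SiO2 gain :", round(dc_gain(3.9, t_ox), 1))
    # sqrt(22/3.9) ~ 2.4x higher gm, hence gain, at equal thickness and bias:
    print("HfO2 gain :", round(dc_gain(22.0, t_ox), 1))
    ```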

  1. Social Cognition Psychometric Evaluation: Results of the Final Validation Study.

    PubMed

    Pinkham, Amy E; Harvey, Philip D; Penn, David L

    2018-06-06

    Social cognition is increasingly recognized as an important treatment target in schizophrenia; however, the dearth of well-validated measures that are suitable for use in clinical trials remains a significant limitation. The Social Cognition Psychometric Evaluation (SCOPE) study addresses this need by systematically evaluating the psychometric properties of promising measures. In this final phase of SCOPE, eight new or modified tasks were evaluated. Stable outpatients with schizophrenia (n = 218) and healthy controls (n = 154) completed the battery at baseline and 2-4 weeks later across three sites. Tasks included the Bell Lysaker Emotion Recognition Task (BLERT), Penn Emotion Recognition Task (ER-40), Reading the Mind in the Eyes Task (Eyes), The Awareness of Social Inferences Test (TASIT), Hinting Task, Mini Profile of Nonverbal Sensitivity (MiniPONS), Social Attribution Task-Multiple Choice (SAT-MC), and Intentionality Bias Task (IBT). BLERT and ER-40 modifications included response time and confidence ratings. The Eyes task was modified to include definitions of terms and TASIT to include response time. Hinting was scored with more stringent criteria. MiniPONS, SAT-MC, and IBT were new to this phase. Tasks were evaluated on (1) test-retest reliability, (2) utility as a repeated measure, (3) relationship to functional outcome, (4) practicality and tolerability, (5) sensitivity to group differences, and (6) internal consistency. Hinting, BLERT, and ER-40 showed the strongest psychometric properties and are recommended for use in clinical trials. Eyes, TASIT, and IBT showed somewhat weaker psychometric properties and require further study. MiniPONS and SAT-MC showed poorer psychometric properties that suggest caution for their use in clinical trials.

  2. Universal Verification Methodology Based Register Test Automation Flow.

    PubMed

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate the models into a test-bench environment, because doing so requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe a register specification in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. We also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use register models without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time when verifying the functionality of registers.
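
    A minimal Python sketch of the spreadsheet-to-IP-XACT step described above. The CSV columns and the emitted elements are simplified stand-ins; a real flow must produce a complete, schema-valid IEEE 1685 (IP-XACT) component description (address blocks, field definitions, vendor information), not just bare register elements.

    ```python
    # Translate a hypothetical register spreadsheet (CSV) into simplified
    # IP-XACT-style register elements.
    import csv, io
    from xml.sax.saxutils import escape

    CSV = """name,offset,width,access,reset
    CTRL,0x00,32,read-write,0x00000000
    STATUS,0x04,32,read-only,0x00000001
    """

    def register_xml(row):
        # Simplified 'register' element; a production flow must follow the
        # full IEEE 1685 schema rather than this flattened subset.
        return (f"  <ipxact:register>\n"
                f"    <ipxact:name>{escape(row['name'])}</ipxact:name>\n"
                f"    <ipxact:addressOffset>{row['offset']}</ipxact:addressOffset>\n"
                f"    <ipxact:size>{row['width']}</ipxact:size>\n"
                f"    <ipxact:access>{row['access']}</ipxact:access>\n"
                f"    <ipxact:reset><ipxact:value>{row['reset']}</ipxact:value></ipxact:reset>\n"
                f"  </ipxact:register>")

    rows = csv.DictReader(io.StringIO(CSV))
    print("\n".join(register_xml(r) for r in rows))
    ```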

  3. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

  4. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system.

    PubMed

    Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji

    2016-04-01

    Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. A cylindrical plastic scintillator block and a CCD camera were installed in the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference-of-Gaussians (DOG) method and by the 80% distal dose of the depth-dose distribution measured with a large parallel-plate ionization chamber. The authors compared a threshold method and a DOG method. The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.
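
    As a rough illustration of a DOG edge pick (not the authors' calibrated implementation), the sketch below builds a synthetic 1-D light-output profile with a distal falloff at 150 mm, band-pass filters it with a narrow-minus-wide Gaussian pair, and locates the edge at the zero crossing between the two DOG lobes. The profile shape, noise level, and filter widths are invented; only the 0.2 mm/pixel sampling is taken from the abstract.

    ```python
    # Toy difference-of-Gaussians (DOG) edge pick on a synthetic depth profile.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    mm = np.arange(0.0, 300.0, 0.2)                         # 0.2 mm/pixel sampling
    profile = np.where(mm < 150.0, 1.0 + 0.002 * mm, 0.02)  # crude distal falloff
    profile += np.random.default_rng(0).normal(0.0, 0.01, mm.size)

    dog = gaussian_filter1d(profile, 2.0) - gaussian_filter1d(profile, 8.0)
    lo, hi = sorted((int(np.argmax(dog)), int(np.argmin(dog))))
    edge = lo + int(np.argmin(np.abs(dog[lo:hi + 1])))      # zero crossing of DOG
    print(f"detected distal edge: {mm[edge]:.1f} mm")       # ~150 mm expected
    ```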

  5. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saotome, Naoya, E-mail: naosao@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed in the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference-of-Gaussians (DOG) method and by the 80% distal dose of the depth-dose distribution measured with a large parallel-plate ionization chamber. The authors compared a threshold method and a DOG method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.

  6. Acoustic-based proton range verification in heterogeneous tissue: simulation studies

    NASA Astrophysics Data System (ADS)

    Jones, Kevin C.; Nie, Wei; Chu, James C. H.; Turian, Julius V.; Kassaee, Alireza; Sehgal, Chandra M.; Avery, Stephen

    2018-01-01

    Acoustic-based proton range verification (protoacoustics) is a potential in vivo technique for determining the Bragg peak position. Previous measurements and simulations have been restricted to homogeneous water tanks. Here, a CT-based simulation method is proposed and applied to a liver and prostate case to model the effects of tissue heterogeneity on the protoacoustic amplitude and time-of-flight range verification accuracy. For the liver case, posterior irradiation with a single proton pencil beam was simulated for detectors placed on the skin. In the prostate case, a transrectal probe measured the protoacoustic pressure generated by irradiation with five separate anterior proton beams. After calculating the proton beam dose deposition, each CT voxel’s material properties were mapped based on Hounsfield Unit values, and thermoacoustically-generated acoustic wave propagation was simulated with the k-Wave MATLAB toolbox. By comparing the simulation results for the original liver CT to homogenized variants, the effects of heterogeneity were assessed. For the liver case, 1.4 cGy of dose at the Bragg peak generated 50 mPa of pressure (13 cm distal), a 2×  lower amplitude than simulated in a homogeneous water tank. Protoacoustic triangulation of the Bragg peak based on multiple detector measurements resulted in 0.4 mm accuracy for a δ-function proton pulse irradiation of the liver. For the prostate case, higher amplitudes are simulated (92-1004 mPa) for closer detectors (<8 cm). For four of the prostate beams, the protoacoustic range triangulation was accurate to  ⩽1.6 mm (δ-function proton pulse). Based on the results, application of protoacoustic range verification to heterogeneous tissue will result in decreased signal amplitudes relative to homogeneous water tank measurements, but accurate range verification is still expected to be possible.
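
    For the triangulation step alone, a minimal sketch (uniform speed of sound, 2-D geometry, invented detector layout and timing jitter; the paper's simulations instead propagate pressure through heterogeneous tissue with k-Wave) looks like this:

    ```python
    # Toy time-of-flight triangulation of a point source (Bragg peak) from
    # several detector arrival times; all positions and noise are invented.
    import numpy as np
    from scipy.optimize import least_squares

    V = 1.54  # mm/us, nominal soft-tissue speed of sound
    detectors = np.array([[0.0, 0.0], [120.0, 0.0], [0.0, 120.0], [120.0, 120.0]])
    source_true = np.array([70.0, 55.0])

    t_arrival = np.linalg.norm(detectors - source_true, axis=1) / V
    t_arrival += np.random.default_rng(1).normal(0.0, 0.02, t_arrival.size)

    def residuals(p):
        x, y, t0 = p   # unknown source position and emission time
        return np.linalg.norm(detectors - [x, y], axis=1) / V + t0 - t_arrival

    fit = least_squares(residuals, x0=[60.0, 60.0, 0.0])
    print("recovered Bragg peak position (mm):", fit.x[:2].round(2))  # ~(70, 55)
    ```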

  7. Simulation, Model Verification and Controls Development of Brayton Cycle PM Alternator: Testing and Simulation of 2 KW PM Generator with Diode Bridge Output

    NASA Technical Reports Server (NTRS)

    Stankovic, Ana V.

    2003-01-01

    Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM Alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output are described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.

  8. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central finite difference numerical integration scheme. The methodology leads to a modular, or building-block, technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
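
    A minimal sketch of the kind of central-difference time stepping described above, applied to a single damped modal oscillator; the mass, damping, stiffness, and forcing values are arbitrary stand-ins, not TETRA's engine model.

    ```python
    # Central-difference integration of m*x'' + c*x' + k*x = f(t) for one mode.
    m, c, k = 1.0, 0.4, 400.0        # modal mass, damping, stiffness (invented)
    dt, steps = 1e-3, 5000           # dt well below stability limit 2/omega_n
    f = lambda t: 100.0 if t < 0.05 else 0.0   # short impulsive "blade loss" load

    a0, a1 = m / dt**2, c / (2.0 * dt)
    x_prev, x = 0.0, 0.0             # at-rest initial conditions
    history = []
    for n in range(steps):
        # x_{n+1} = [f_n - (k - 2*a0)*x_n - (a0 - a1)*x_{n-1}] / (a0 + a1)
        rhs = f(n * dt) - (k - 2.0 * a0) * x - (a0 - a1) * x_prev
        x_prev, x = x, rhs / (a0 + a1)
        history.append(x)

    print(f"peak modal displacement: {max(history):.4f}")
    ```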

  9. Anterior temporal cortex and semantic memory: reconciling findings from neuropsychology and functional imaging.

    PubMed

    Rogers, Timothy T; Hocking, Julia; Noppeney, Uta; Mechelli, Andrea; Gorno-Tempini, Maria Luisa; Patterson, Karalyn; Price, Cathy J

    2006-09-01

    Studies of semantic impairment arising from brain disease suggest that the anterior temporal lobes are critical for semantic abilities in humans; yet activation of these regions is rarely reported in functional imaging studies of healthy controls performing semantic tasks. Here, we combined neuropsychological and PET functional imaging data to show that when healthy subjects identify concepts at a specific level, the regions activated correspond to the site of maximal atrophy in patients with relatively pure semantic impairment. The stimuli were color photographs of common animals or vehicles, and the task was category verification at specific (e.g., robin), intermediate (e.g., bird), or general (e.g., animal) levels. Specific, relative to general, categorization activated the antero-lateral temporal cortices bilaterally, despite matching of these experimental conditions for difficulty. Critically, in patients with atrophy in precisely these areas, the most pronounced deficit was in the retrieval of specific semantic information.

  10. NASA Aerospace Flight Battery Program: Generic Safety, Handling and Qualification Guidelines for Lithium-Ion (Li-Ion) Batteries; Availability of Source Materials for Lithium-Ion (Li-Ion) Batteries; Maintaining Technical Communications Related to Aerospace Batteries (NASA Aerospace Battery Workshop). Volume 2, Part 1

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle A.; Brewer, Jeffrey C.; Bugga, Ratnakumar V.; Darcy, Eric C.; Jeevarajan, Judith A.; McKissock, Barbara I.; Schmitz, Paul C.

    2010-01-01

    The NASA Aerospace Flight Battery Systems Working Group was chartered within the NASA Engineering and Safety Center (NESC). The Battery Working Group was tasked to complete assigned work and to propose proactive efforts addressing battery-related, agency-wide issues on an annual basis. In its first year of operation, this proactive program addressed various aspects of the validation and verification of aerospace battery systems for NASA missions. Studies were performed, issues were discussed, and in many cases test programs were executed to generate recommendations and guidelines to reduce the risk associated with various aspects of implementing battery technology in the aerospace industry. This report contains the appendices to the findings from the first year of the program's operations.

  11. Experimental Verification of Entanglement Generated in a Plasmonic System.

    PubMed

    Dieleman, F; Tame, M S; Sonnefraud, Y; Kim, M S; Maier, S A

    2017-12-13

    A core process in many quantum tasks is the generation of entanglement. It is being actively studied in a variety of physical settings, from simple bipartite systems to complex multipartite systems. In this work we experimentally study the generation of bipartite entanglement in a nanophotonic system. Entanglement is generated via the quantum interference of two surface plasmon polaritons in a beamsplitter structure, i.e., utilizing the Hong-Ou-Mandel (HOM) effect, and its presence is verified using quantum state tomography. The amount of entanglement is quantified by the concurrence, and we find values of up to 0.77 ± 0.04. Verifying entanglement in the output state from HOM interference is a nontrivial task and cannot be inferred from the visibility alone. The techniques we use to verify entanglement could be applied to other types of photonic systems and may therefore be useful for the characterization of a range of different nanophotonic quantum devices.
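
    The concurrence itself is directly computable from a reconstructed two-qubit density matrix. The sketch below implements Wootters' formula and evaluates it on an invented noisy Bell state, not on the paper's tomography data.

    ```python
    # Wootters' concurrence for a two-qubit density matrix.
    import numpy as np

    def concurrence(rho):
        sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
        yy = np.kron(sy, sy)
        rho_tilde = yy @ rho.conj() @ yy          # spin-flipped density matrix
        evals = np.linalg.eigvals(rho @ rho_tilde)
        lam = np.sort(np.sqrt(np.abs(evals)))[::-1]
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)   # |Phi+> Bell state
    bell = np.outer(psi, psi.conj())
    rho = 0.9 * bell + 0.1 * np.eye(4) / 4.0              # add white noise
    print(f"concurrence: {concurrence(rho):.3f}")         # 0.85 for this mixture
    ```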

  12. Marshall Space Flight Center Ground Systems Development and Integration

    NASA Technical Reports Server (NTRS)

    Wade, Gina

    2016-01-01

    Ground Systems Development and Integration performs a variety of tasks in support of the Mission Operations Laboratory (MOL) and other Center and Agency projects. These tasks include various systems engineering processes such as system requirements development, system architecture design, integration, verification and validation, software development, and sustaining engineering of mission operations systems; this work has evolved the Huntsville Operations Support Center (HOSC) into a leader in remote operations for current and future NASA space projects. The group is also responsible for developing and managing telemetry and command configuration and calibration databases. Personnel are responsible for maintaining and enhancing their disciplinary skills in the areas of project management, software engineering, software development, software process improvement, telecommunications, networking, and systems management. Domain expertise in the ground systems area is also maintained, including detailed proficiency in the areas of real-time telemetry systems, command systems, voice, video, data networks, and mission planning systems.

  13. WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.

    PubMed

    Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno

    2012-09-01

    The WORDGRAPH helps writers in visually choosing phrases while writing a text. It checks for the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as a WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of the three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, whereas both interfaces support context-sensitive word choice, and the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that the WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.
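
    A toy rendering of the wildcard idea (not NETSPEAK's actual engine or query syntax; the phrase list and counts are invented): compile the query to a regular expression over words and rank the matching phrases by frequency.

    ```python
    # Toy wildcard phrase search: '?' matches one word, '*' one or more words
    # (a real engine would also allow zero); ranked by commonness.
    import re

    PHRASES = {"waiting for the results": 312, "waiting for results": 198,
               "waiting on the results": 41, "waiting for the final results": 27}

    def wildcard_search(query):
        pattern = re.escape(query).replace(r"\?", r"\w+") \
                                  .replace(r"\*", r"\w+(?: \w+)*")
        rx = re.compile(rf"^{pattern}$")
        return sorted(((p, n) for p, n in PHRASES.items() if rx.match(p)),
                      key=lambda pn: -pn[1])

    print(wildcard_search("waiting * results"))
    ```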

  14. Priming trait inferences through pictures and moving pictures: the impact of open and closed mindsets.

    PubMed

    Fiedler, Klaus; Schenck, Wolfram; Watling, Marlin; Menges, Jochen I

    2005-02-01

    A newly developed paradigm for studying spontaneous trait inferences (STI) was applied in 3 experiments. The authors primed dyadic stimulus behaviors involving a subject (S) and an object (O) person through degraded pictures or movies. An encoding task called for the verification of either a graphical feature or a semantic interpretation, which either fit or did not fit the primed behavior. Next, participants had to identify a trait word that appeared gradually behind a mask and that either matched or did not match the primed behavior. STI effects, defined as shorter identification latencies for matching than nonmatching traits, were stronger for S than for O traits, after graphical rather than semantic encoding decisions and after encoding failures. These findings can be explained by assuming that trait inferences are facilitated by open versus closed mindsets supposed to result from distracting (graphical) encoding tasks or encoding failures (involving nonfitting interpretations).

  15. Spacecraft command verification: The AI solution

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Smith, Brian K.

    1990-01-01

    Recently, a knowledge-based approach was used to develop a system called the Command Constraint Checker (CCC) for TRW. CCC was created to automate the process of verifying spacecraft command sequences. To check command files by hand for timing and sequencing errors is a time-consuming and error-prone task. Conventional software solutions were rejected when it was estimated that it would require 36 man-months to build an automated tool to check constraints by conventional methods. Using rule-based representation to model the various timing and sequencing constraints of the spacecraft, CCC was developed and tested in only three months. By applying artificial intelligence techniques, CCC designers were able to demonstrate the viability of AI as a tool to transform difficult problems into easily managed tasks. The design considerations used in developing CCC are discussed and the potential impact of this system on future satellite programs is examined.
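
    The flavor of such constraint checking can be conveyed with a small sketch. The commands, timings, and rules below are invented, and CCC itself was a rule-based expert system rather than plain Python; this only shows the shape of the check.

    ```python
    # Toy rule-based check of a command timeline for timing/sequencing errors.
    commands = [("HTR_ON", 0.0), ("TWTA_ON", 4.0), ("ANT_DEPLOY", 5.0)]

    RULES = [
        # (description, predicate over the full timeline)
        ("TWTA_ON must follow HTR_ON by >= 10 s warm-up",
         lambda tl: all(t - tl["HTR_ON"] >= 10.0
                        for c, t in tl.items() if c == "TWTA_ON")),
        ("ANT_DEPLOY forbidden within 2 s of any other command",
         lambda tl: all(abs(tl["ANT_DEPLOY"] - t) >= 2.0
                        for c, t in tl.items() if c != "ANT_DEPLOY")),
    ]

    timeline = dict(commands)
    for desc, ok in RULES:
        print(("PASS " if ok(timeline) else "VIOLATION ") + desc)
    ```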

  16. Verification and Planning Based on Coinductive Logic Programming

    NASA Technical Reports Server (NTRS)

    Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal

    2008-01-01

    Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least-fixed-point-based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations, and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, and (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counterexample to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
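
    The co-SLD success criterion (a call succeeds when it coincides with an ancestor call) is easy to mimic for ground goals. The sketch below, in Python rather than a logic programming system and with an invented three-state machine, checks the safety property "always p" (a greatest fixed point) on a cyclic deterministic transition system by succeeding when a state recurs.

    ```python
    # Coinductive-style check of "always p" on a cyclic deterministic system:
    # the goal succeeds when it repeats on an ancestor call (cycle closed).
    def always_p(state, p, succ, ancestors=frozenset()):
        if state in ancestors:          # coinductive hypothesis applies
            return True
        return p(state) and always_p(succ(state), p, succ, ancestors | {state})

    succ = {"init": "run", "run": "wait", "wait": "run"}.get  # init -> run <-> wait
    safe = lambda s: s != "error"
    print(always_p("init", safe, succ))   # True: the infinite trace never errs
    ```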

  17. Development of a new lattice physics code robin for PWR application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Chen, G.

    2013-07-01

    This paper presents a description of the methodologies and preliminary verification results of a new lattice physics code, ROBIN, being developed for PWR application at Shanghai NuStar Nuclear Power Technology Co., Ltd. The methods used in ROBIN to fulfill the various tasks of lattice physics analysis are an integration of historical methods and methods that have emerged very recently. Not only are methods like equivalence theory for resonance treatment and the method of characteristics for neutron transport calculation adopted, as they are applied in many of today's production-level LWR lattice codes, but very useful new methods like the enhanced neutron current method for Dancoff correction in large and complicated geometry and the log-linear rate constant power depletion method for Gd-bearing fuel are also implemented in the code. A small sample of verification results is provided to illustrate the type of accuracy achievable using ROBIN. It is demonstrated that ROBIN is capable of satisfying most of the needs for PWR lattice analysis and has the potential to become a production-quality code in the future. (authors)

  18. Self-prioritization processes in action and perception.

    PubMed

    Frings, Christian; Wentura, Dirk

    2014-10-01

    Recently, Sui, He, and Humphreys (2012) introduced a new paradigm to investigate prioritized processing of self-related information. In a balanced design, they arbitrarily assigned simple geometric shapes to the participant and 2 others. Subsequently, the task was to judge whether label-shape pairings matched. The authors found a remarkable self-prioritization effect, that is, for matching self-related trials verification was very fast and accurate in comparison to the other conditions. We tested the hypothesis that the self-prioritization effect extends from perception-self links to action-self links. In particular, we assigned simple movements (i.e., up, down, left, right) to the participant, 2 others (i.e., the mother; a stranger), and a neutral label, respectively. In each trial participants executed a movement (triggered by a cue), which was followed by a briefly presented label. Participants had to judge whether label-movement pairings matched. In accordance with Sui et al. (2012), we found a remarkable self-prioritization effect, that is, for matching self-related trials verification was very fast and accurate in comparison to the other conditions.

  19. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
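
    A schematic version of the fusion step, using scikit-learn's PLS regression on synthetic similarity scores; the real study used FRGC algorithm scores and human ratings and evaluated with a jackknife rather than the single split shown here.

    ```python
    # Score-level fusion of several "verifiers" via PLS regression.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(42)
    n = 400
    labels = rng.integers(0, 2, n)                   # 1 = same-face pair
    # Seven noisy "algorithm" scores plus one "human" score, each correlated
    # with the true match label to a different (invented) degree.
    quality = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.8])
    scores = labels[:, None] * quality + rng.normal(0.0, 0.5, (n, quality.size))

    pls = PLSRegression(n_components=2).fit(scores[:200], labels[:200] * 1.0)
    pred = pls.predict(scores[200:]).ravel() > 0.5   # fuse held-out score vectors
    print(f"fused accuracy: {(pred == (labels[200:] == 1)).mean():.2%}")
    ```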

  20. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel in complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading), and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
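
    The abstract does not spell out the statistical tests; one plausible ingredient, sketched below with stand-in distributions, is a two-sample Kolmogorov-Smirnov comparison between a quantity sampled from the vectorized model and the same quantity sampled from the reference model.

    ```python
    # Automated consistency test between two sampled distributions (stand-ins
    # for a GeantV vectorized model and its Geant4 reference counterpart).
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(7)
    reference = rng.exponential(1.0, 100_000)    # stand-in reference samples
    vectorized = rng.exponential(1.0, 100_000)   # stand-in vectorized samples

    stat, p = ks_2samp(reference, vectorized)
    print(f"KS statistic {stat:.4f}, p-value {p:.3f}",
          "-> consistent" if p > 0.01 else "-> investigate")
    ```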
