Computer Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
Gaia challenging performances verification: combination of spacecraft models and test results
NASA Astrophysics Data System (ADS)
Ecale, Eric; Faye, Frédéric; Chassat, François
2016-08-01
To achieve the ambitious scientific objectives of the Gaia mission, extremely stringent performance requirements have been given to the spacecraft contractor (Airbus Defence and Space). For a set of those key performance requirements (e.g. end-of-mission parallax, maximum detectable magnitude, maximum sky density or attitude control system stability), this paper describes how they are engineered during the whole spacecraft development process, with a focus on the end-to-end performance verification. As far as possible, performances are usually verified by end-to-end tests on ground (i.e. before launch). However, the challenging Gaia requirements are not verifiable by such a strategy, principally because no test facility exists to reproduce the expected flight conditions. The Gaia performance verification strategy is therefore based on a mix of analyses (based on spacecraft models) and tests (used to directly feed the models or to correlate them). Emphasis is placed on how to maximize the test contribution to performance verification while keeping the tests feasible within an affordable effort. In particular, the paper highlights the contribution of the Gaia Payload Module Thermal Vacuum test to the performance verification before launch. Finally, an overview of the in-flight payload calibration and in-flight performance verification is provided.
The formal verification of generic interpreters
NASA Technical Reports Server (NTRS)
Windley, P.; Levitt, K.; Cohen, G. C.
1991-01-01
Task assignment 3 of the design and validation of digital flight control systems suitable for fly-by-wire applications is studied. Task 3 is concerned with the formal verification of embedded systems. In particular, results are presented that provide a methodological approach to microprocessor verification. A hierarchical decomposition strategy for specifying microprocessors is also presented. A theory of generic interpreters is presented that can be used to model microprocessor behavior. The generic interpreter theory abstracts away the details of instruction functionality, leaving a general model of what an interpreter does.
P-8A Poseidon strategy for modeling & simulation verification validation & accreditation (VV&A)
NASA Astrophysics Data System (ADS)
Kropp, Derek L.
2009-05-01
One of the first challenges in addressing the need for Modeling & Simulation (M&S) Verification, Validation, & Accreditation (VV&A) is to develop an approach for applying structured and formalized VV&A processes. The P-8A Poseidon Multi-Mission Maritime Aircraft (MMA) Program Modeling and Simulation Accreditation Strategy documents the P-8A program's approach to VV&A. The P-8A strategy tailors a risk-based approach and leverages existing bodies of knowledge, such as the Defense Modeling and Simulation Office Recommended Practice Guide (DMSO RPG), to make the process practical and efficient. As the program progresses, the M&S team must continue to look for ways to streamline the process, add supplemental steps to enhance the process, and identify and overcome procedural, organizational, and cultural challenges. This paper includes some of the basics of the overall strategy, examples of specific approaches that have worked well, and examples of challenges that the M&S team has faced.
Integrating Model-Based Verification into Software Design Education
ERIC Educational Resources Information Center
Yilmaz, Levent; Wang, Shuo
2005-01-01
Proper design analysis is indispensable to assure quality and reduce emergent costs due to faulty software. Teaching proper design verification skills early during pedagogical development is crucial, as such analysis is the only tractable way of resolving software problems early when they are easy to fix. The premise of the presented strategy is…
Design for Verification: Enabling Verification of High Dependability Software-Intensive Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Markosian, Lawrence Z.; Koga, Dennis (Technical Monitor)
2003-01-01
Strategies to achieve confidence that high-dependability applications are correctly implemented include testing and automated verification. Testing deals mainly with a limited number of expected execution paths. Verification usually attempts to deal with a larger number of possible execution paths. While the impact of architecture design on testing is well known, its impact on most verification methods is not as well understood. The Design for Verification approach considers verification from the application development perspective, in which system architecture is designed explicitly according to the application's key properties. The D4V-hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the constraints on verification tools, such as the production of hand-crafted models and the limits on dynamic and static analysis caused by state space explosion.
Verification and Validation Studies for the LAVA CFD Solver
NASA Technical Reports Server (NTRS)
Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.
2013-01-01
The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
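As an illustration of the verification step described above, here is a minimal sketch of the Method of Manufactured Solutions applied to a simple 1D Poisson problem, checking the observed order of accuracy under grid refinement. It is a toy stand-in under stated assumptions, not the LAVA solver or its actual test suite.

```python
# A minimal MMS sketch, assuming a 1D Poisson problem -u''(x) = f(x) on [0, 1]
# with homogeneous Dirichlet boundary conditions (not the LAVA equations).
import numpy as np

def manufactured_source(x):
    # Choose u_exact = sin(pi x); then -u'' = pi^2 sin(pi x) is the forcing term.
    return np.pi**2 * np.sin(np.pi * x)

def solve_poisson(n):
    # Second-order central differences with u(0) = u(1) = 0.
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u_inner = np.linalg.solve(A, manufactured_source(x[1:-1]))
    return x, np.concatenate(([0.0], u_inner, [0.0]))

errors = {}
for n in (32, 64, 128):
    x, u = solve_poisson(n)
    errors[n] = np.max(np.abs(u - np.sin(np.pi * x)))

# Observed order of accuracy from successive refinements; a value near 2
# verifies that the discretization is implemented as intended.
for coarse, fine in ((32, 64), (64, 128)):
    print(coarse, fine, np.log2(errors[coarse] / errors[fine]))
```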
Developing a NASA strategy for the verification of large space telescope observatories
NASA Astrophysics Data System (ADS)
Crooke, Julie A.; Gunderson, Johanna A.; Hagopian, John G.; Levine, Marie
2006-06-01
In July 2005, the Office of Program Analysis and Evaluation (PA&E) at NASA Headquarters was directed to develop a strategy for verification of the performance of large space telescope observatories, which occurs predominantly in a thermal vacuum test facility. A mission model of the expected astronomical observatory missions over the next 20 years was identified along with performance, facility and resource requirements. Ground testing versus alternatives was analyzed to determine the pros, cons and break points in the verification process. Existing facilities and their capabilities were examined across NASA, industry and other government agencies as well as the future demand for these facilities across NASA's Mission Directorates. Options were developed to meet the full suite of mission verification requirements, and performance, cost, risk and other analyses were performed. Findings and recommendations from the study were presented to the NASA Administrator and the NASA Strategic Management Council (SMC) in February 2006. This paper details the analysis, results, and findings from this study.
Evaluation of the 29-km Eta Model. Part 1: Objective Verification at Three Selected Stations
NASA Technical Reports Server (NTRS)
Nutter, Paul A.; Manobianco, John; Merceret, Francis J. (Technical Monitor)
1998-01-01
This paper describes an objective verification of the National Centers for Environmental Prediction (NCEP) 29-km eta model from May 1996 through January 1998. The evaluation was designed to assess the model's surface and upper-air point forecast accuracy at three selected locations during separate warm (May - August) and cool (October - January) season periods. In order to enhance sample sizes available for statistical calculations, the objective verification includes two consecutive warm and cool season periods. Systematic model deficiencies comprise the larger portion of the total error in most of the surface forecast variables that were evaluated. The error characteristics for both surface and upper-air forecasts vary widely by parameter, season, and station location. At upper levels, a few characteristic biases are identified. Overall however, the upper-level errors are more nonsystematic in nature and could be explained partly by observational measurement uncertainty. With a few exceptions, the upper-air results also indicate that 24-h model error growth is not statistically significant. In February and August 1997, NCEP implemented upgrades to the eta model's physical parameterizations that were designed to change some of the model's error characteristics near the surface. The results shown in this paper indicate that these upgrades led to identifiable and statistically significant changes in forecast accuracy for selected surface parameters. While some of the changes were expected, others were not consistent with the intent of the model updates and further emphasize the need for ongoing sensitivity studies and localized statistical verification efforts. Objective verification of point forecasts is a stringent measure of model performance, but when used alone, is not enough to quantify the overall value that model guidance may add to the forecast process. Therefore, results from a subjective verification of the meso-eta model over the Florida peninsula are discussed in the companion paper by Manobianco and Nutter. Overall verification results presented here and in part two should establish a reasonable benchmark from which model users and developers may pursue the ongoing eta model verification strategies in the future.
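To make the separation of systematic and nonsystematic error concrete, here is a minimal sketch that computes bias, RMSE, and the nonsystematic residual for paired point forecasts and observations. The station values and variable are hypothetical placeholders, not the eta model verification dataset.

```python
# A minimal point-forecast verification sketch; data layout and numbers are
# illustrative assumptions, not the 29-km eta model evaluation.
import numpy as np

def verify(forecast, observed):
    err = np.asarray(forecast) - np.asarray(observed)
    bias = err.mean()                               # systematic component
    rmse = np.sqrt((err**2).mean())                 # total error magnitude
    nonsys = np.sqrt(max(rmse**2 - bias**2, 0.0))   # nonsystematic (random) part
    return bias, rmse, nonsys

# Hypothetical 2-m temperature pairs (deg C) for one station and one season.
fcst = np.array([24.1, 25.3, 23.8, 26.0])
obs  = np.array([23.5, 25.9, 23.0, 25.2])
print(verify(fcst, obs))
```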
Students' Verification Strategies for Combinatorial Problems
ERIC Educational Resources Information Center
Mashiach Eizenberg, Michal; Zaslavsky, Orit
2004-01-01
We focus on a major difficulty in solving combinatorial problems, namely, on the verification of a solution. Our study aimed at identifying undergraduate students' tendencies to verify their solutions, and the verification strategies that they employ when solving these problems. In addition, an attempt was made to evaluate the level of efficiency…
A comparative verification of high resolution precipitation forecasts using model output statistics
NASA Astrophysics Data System (ADS)
van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees
2017-04-01
Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution meso-scale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both the double-penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, like a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the 3 post-processed models, but larger differences for individual lead times. In addition, the Fractions Skill Score is computed using the 3 deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they only perform similarly or somewhat better than precipitation forecasts from the 2 lower-resolution models, at least in the Netherlands.
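The sketch below illustrates the extended logistic regression idea used above: the exceedance threshold enters as an additional predictor, so a single fitted equation yields a coherent probability distribution over precipitation thresholds. The predictors, square-root transforms, and data are illustrative assumptions, not the operational Harmonie/Hirlam/ECMWF configuration.

```python
# A minimal ELR sketch with toy data; not the study's predictor set or calibration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
mean_precip = rng.gamma(shape=2.0, scale=2.0, size=n)                    # predictor: area-mean QPF (mm)
observed = rng.gamma(shape=2.0, scale=1.0, size=n) * mean_precip / 4.0   # toy "observed" precipitation

thresholds = np.array([0.3, 1.0, 5.0])    # exceedance thresholds (mm)
# One training row per (case, threshold); label = observation exceeds the threshold.
X = np.column_stack([
    np.repeat(np.sqrt(mean_precip), len(thresholds)),   # f(x) term
    np.sqrt(np.tile(thresholds, n)),                    # g(q) term: threshold as predictor
])
y = (np.repeat(observed, len(thresholds)) > np.tile(thresholds, n)).astype(int)

elr = LogisticRegression().fit(X, y)
# Exceedance probabilities for a new case with a mean QPF of 3 mm.
X_new = np.column_stack([np.full(len(thresholds), np.sqrt(3.0)), np.sqrt(thresholds)])
print(elr.predict_proba(X_new)[:, 1])
```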
NASA Technical Reports Server (NTRS)
Neal, G.
1988-01-01
Flexible walled wind tunnels have for some time been used to reduce wall interference effects at the model. A necessary part of the 3-D wall adjustment strategy being developed for the Transonic Self-Streamlining Wind Tunnel (TSWT) of Southampton University is the use of influence coefficients. The influence of a wall bump on the centerline flow in TSWT has been calculated theoretically using a streamline curvature program. This report details the experimental verification of these influence coefficients and concludes that it is valid to use the theoretically determined values in 3-D model testing.
Verification of road databases using multiple road models
NASA Astrophysics Data System (ADS)
Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian
2017-08-01
In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions, the first one for the state of a database object (correct or incorrect), and a second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with larger completeness. Additional experiments reveal that based on the proposed method a highly reliable semi-automatic approach for road database verification can be designed.
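A minimal sketch of the Dempster-Shafer mapping described above, assuming each module reports a probability that the road object is correct and a probability that its road model is applicable. The mass assignment and combination rule below are an illustration under those assumptions, not the authors' implementation.

```python
# Dempster-Shafer sketch on the frame {correct, incorrect}; "unknown" is mass on
# the whole frame. Module inputs are hypothetical.
def module_masses(p_correct, p_applicable):
    # If the road model is not applicable, its mass goes to "unknown".
    return {
        "correct": p_applicable * p_correct,
        "incorrect": p_applicable * (1.0 - p_correct),
        "unknown": 1.0 - p_applicable,
    }

def combine(m1, m2):
    # Dempster's rule of combination with normalization by (1 - conflict).
    conflict = m1["correct"] * m2["incorrect"] + m1["incorrect"] * m2["correct"]
    k = 1.0 - conflict
    return {
        "correct": (m1["correct"] * m2["correct"]
                    + m1["correct"] * m2["unknown"]
                    + m1["unknown"] * m2["correct"]) / k,
        "incorrect": (m1["incorrect"] * m2["incorrect"]
                      + m1["incorrect"] * m2["unknown"]
                      + m1["unknown"] * m2["incorrect"]) / k,
        "unknown": m1["unknown"] * m2["unknown"] / k,
    }

# Two modules assessing the same road object.
print(combine(module_masses(0.9, 0.8), module_masses(0.6, 0.3)))
```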
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1992-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault density components so that the testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents an alternative approach for constructing such models that is intended to fulfill specific software engineering needs (i.e. dealing with partial/incomplete information and creating models that are easy to interpret). Our approach to classification is as follows: (1) to measure the software system to be considered; and (2) to build multivariate stochastic models for prediction. We present experimental results obtained by classifying FORTRAN components developed at the NASA/GSFC into two fault density classes: low and high. Also we evaluate the accuracy of the model and the insights it provides into the software process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason D. Hales; Veena Tikare
2014-04-01
The Used Fuel Disposition (UFD) program has initiated a project to develop a hydride formation modeling tool using a hybrid Potts-phase field approach. The Potts model is incorporated in the SPPARKS code from Sandia National Laboratories. The phase field model is provided through MARMOT from Idaho National Laboratory.
Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators
NASA Technical Reports Server (NTRS)
Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)
2002-01-01
Ground based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold operation optical throughput are supplemented by segments for analytical verifications of specific structural, thermal, and optical parameters. Drawing on integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution as gleaned by the authors from similar analytical verification support of a previous large space telescope, then closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.
Efficient model checking of network authentication protocol based on SPIN
NASA Astrophysics Data System (ADS)
Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan
2013-03-01
Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for the protocol. Combined with the model checker SPIN, the method can conveniently verify the properties of the protocol. Using several modeling simplification strategies, the paper models several protocols efficiently and reduces the state space of the model. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the method described in the paper, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective, and it is useful for other authentication protocols.
SU-E-I-56: Scan Angle Reduction for a Limited-Angle Intrafraction Verification (LIVE) System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, L; Zhang, Y; Yin, F
Purpose: To develop a novel adaptive reconstruction strategy to further reduce the scanning angle required by the limited-angle intrafraction verification (LIVE) system for intrafraction verification. Methods: LIVE acquires limited-angle MV projections from the exit fluence of the arc treatment beam or during gantry rotation between static beams. Orthogonal limited-angle kV projections are also acquired simultaneously to provide additional information. LIVE considers the on-board 4D-CBCT images as a deformation of the prior 4D-CT images, and solves the deformation field based on deformation models and a data fidelity constraint. LIVE reaches a checkpoint after a limited-angle scan, and reconstructs 4D-CBCT for intrafraction verification at the checkpoint. In the adaptive reconstruction strategy, a larger scanning angle of 30° is used for the first checkpoint, and smaller scanning angles of 15° are used for subsequent checkpoints. The onboard images reconstructed at the previous adjacent checkpoint are used as the prior images for reconstruction at the current checkpoint. As the algorithm only needs to reconstruct the small deformation occurring between adjacent checkpoints, projections from a smaller scan angle provide enough information for the reconstruction. XCAT was used to simulate a tumor motion baseline drift of 2 mm along the sup-inf direction at every subsequent checkpoint, which are 15° apart. The adaptive reconstruction strategy was used to reconstruct the images at each checkpoint using orthogonal 15° kV and MV projections. Results: LIVE reconstructed the tumor volumes accurately using orthogonal 15° kV-MV projections. Volume percentage differences (VPDs) were within 5% and center of mass shifts (COMS) were within 1 mm for reconstruction at all checkpoints. Conclusion: It is feasible to use an adaptive reconstruction strategy to further reduce the scan angle needed by LIVE to allow faster and more frequent intrafraction verification to minimize treatment errors in lung cancer treatments. Grant from Varian Medical System.
Verification in Referral-Based Crowdsourcing
Naroditskiy, Victor; Rahwan, Iyad; Cebrian, Manuel; Jennings, Nicholas R.
2012-01-01
Online social networks offer unprecedented potential for rallying a large number of people to accomplish a given task. Here we focus on information gathering tasks where rare information is sought through “referral-based crowdsourcing”: the information request is propagated recursively through invitations among members of a social network. Whereas previous work analyzed incentives for the referral process in a setting with only correct reports, misreporting is known to be both pervasive in crowdsourcing applications, and difficult/costly to filter out. A motivating example for our work is the DARPA Red Balloon Challenge where the level of misreporting was very high. In order to undertake a formal study of verification, we introduce a model where agents can exert costly effort to perform verification and false reports can be penalized. This is the first model of verification and it provides many directions for future research, which we point out. Our main theoretical result is the compensation scheme that minimizes the cost of retrieving the correct answer. Notably, this optimal compensation scheme coincides with the winning strategy of the Red Balloon Challenge. PMID:23071530
A verification strategy for web services composition using enhanced stacked automata model.
Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali
2015-01-01
Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture of contemporary enterprise applications, and one crucial technique of its implementation is web services. An individual service offered by some service provider may represent limited business functionality; however, by composing individual services from different service providers, a composite service describing the intact business process of an enterprise can be made. Many new standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides initial groundwork for forming an Extensible Markup Language (XML) specification language for defining and implementing business process workflows for web services. The problem with most realistic approaches to service composition is the verification of the composed web services, which has to rely on formal verification methods to ensure the correctness of the composed services. A few research works in the literature have addressed verification of web services for deterministic systems. Moreover, the existing models did not address verification properties like dead transitions, deadlock, reachability and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) has been proposed. The correctness properties of the non-deterministic system have been evaluated based on properties like dead transitions, deadlock, safety, liveness and reachability. Initially web services are composed using the Business Process Execution Language for Web Services (BPEL4WS), converted into the ESAM (a combination of a Muller Automaton (MA) and a Push Down Automaton (PDA)), and then transformed into Promela, the input language of the Simple ProMeLa Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results revealed better performance in finding dead transitions and deadlocks in contrast to the existing models.
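As a compact illustration of the kind of state-space checks mentioned above (deadlock, dead transitions, reachability), the sketch below explores a small hypothetical transition system by breadth-first search. It is an illustration of explicit-state exploration only, not the ESAM/BPEL4WS pipeline or SPIN itself.

```python
# Explicit-state exploration sketch; the toy "handshake" protocol is hypothetical.
from collections import deque

# States and labeled transitions of a small made-up protocol.
transitions = {
    "idle":    [("request", "waiting")],
    "waiting": [("grant", "active"), ("deny", "idle")],
    "active":  [("release", "idle"), ("fail", "stuck")],
    "stuck":   [],                      # deliberate deadlock state
}

def explore(initial):
    seen, frontier, fired = {initial}, deque([initial]), set()
    deadlocks = []
    while frontier:
        state = frontier.popleft()
        if not transitions[state]:
            deadlocks.append(state)     # no outgoing transitions: deadlock
        for label, nxt in transitions[state]:
            fired.add(label)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    all_labels = {label for outs in transitions.values() for label, _ in outs}
    # Reachable states, deadlock states, and dead transitions (never fired).
    return seen, deadlocks, all_labels - fired

print(explore("idle"))
```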
A Verification-Driven Approach to Control Analysis and Tuning
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2008-01-01
This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and in the union of these two spaces. In regard to control analysis, we propose strategies that enable bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is infeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for the improvement of the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, it is applicable to realistic engineering problems.
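The sketch below illustrates the idea of a control verification metric in miniature: estimating, by sampling and bisection, the largest uncertainty box around a nominal parameter point for which a closed-loop specification still holds. The toy discrete-time system, gain, and specification are assumptions for illustration, not the paper's metrics or high-fidelity simulations.

```python
# Sampling-based estimate of the largest verified uncertainty box (a sketch).
import numpy as np

rng = np.random.default_rng(1)
nominal = np.array([0.8, 0.3])          # uncertain parameters (a, b) of x+ = a*x + b*u

def spec_satisfied(p, gain=1.5):
    # With u = -gain*x the closed-loop pole is (a - b*gain); spec: inside unit circle.
    a, b = p
    return abs(a - b * gain) < 1.0

def spec_holds_on_box(radius, samples=2000):
    # Monte Carlo surrogate for a formal bound over the box nominal +/- radius.
    perturb = rng.uniform(-radius, radius, size=(samples, 2))
    return all(spec_satisfied(nominal + d) for d in perturb)

lo, hi = 0.0, 1.0
for _ in range(30):                      # bisection on the box half-width
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if spec_holds_on_box(mid) else (lo, mid)
print("largest verified half-width ~", round(lo, 3))
```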
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Bernardi, E., E-mail: elisabetta.debernardi@unimib.it; Ricotti, R.; Riboldi, M.
2016-02-15
Purpose: An innovative strategy to improve the sensitivity of positron emission tomography (PET)-based treatment verification in ion beam radiotherapy is proposed. Methods: Low counting statistics PET images acquired during or shortly after the treatment (Measured PET) and a Monte Carlo estimate of the same PET images derived from the treatment plan (Expected PET) are considered as two frames of a 4D dataset. A 4D maximum likelihood reconstruction strategy was adapted to iteratively estimate the annihilation events distribution in a reference frame and the deformation motion fields that map it in the Expected PET and Measured PET frames. The outputs generated by the proposed strategy are as follows: (1) an estimate of the Measured PET with an image quality comparable to the Expected PET and (2) an estimate of the motion field mapping Expected PET to Measured PET. The details of the algorithm are presented and the strategy is preliminarily tested on analytically simulated datasets. Results: The algorithm demonstrates (1) robustness against noise, even in the worst conditions where 1.5 × 10^4 true coincidences and a random fraction of 73% are simulated; (2) a proper sensitivity to different kinds and grades of mismatches ranging between 1 and 10 mm; (3) robustness against bias due to incorrect washout modeling in the Monte Carlo simulation up to 1/3 of the original signal amplitude; and (4) an ability to describe the mismatch even in presence of complex annihilation distributions such as those induced by two perpendicular superimposed ion fields. Conclusions: The promising results obtained in this work suggest the applicability of the method as a quantification tool for PET-based treatment verification in ion beam radiotherapy. An extensive assessment of the proposed strategy on real treatment verification data is planned.
Test and Verification Approach for the NASA Constellation Program
NASA Technical Reports Server (NTRS)
Strong, Edward
2008-01-01
This viewgraph presentation is a test and verification approach for the NASA Constellation Program. The contents include: 1) The Vision for Space Exploration: Foundations for Exploration; 2) Constellation Program Fleet of Vehicles; 3) Exploration Roadmap; 4) Constellation Vehicle Approximate Size Comparison; 5) Ares I Elements; 6) Orion Elements; 7) Ares V Elements; 8) Lunar Lander; 9) Map of Constellation content across NASA; 10) CxP T&V Implementation; 11) Challenges in CxP T&V Program; 12) T&V Strategic Emphasis and Key Tenets; 13) CxP T&V Mission & Vision; 14) Constellation Program Organization; 15) Test and Evaluation Organization; 16) CxP Requirements Flowdown; 17) CxP Model Based Systems Engineering Approach; 18) CxP Verification Planning Documents; 19) Environmental Testing; 20) Scope of CxP Verification; 21) CxP Verification - General Process Flow; 22) Avionics and Software Integrated Testing Approach; 23) A-3 Test Stand; 24) Space Power Facility; 25) MEIT and FEIT; 26) Flight Element Integrated Test (FEIT); 27) Multi-Element Integrated Testing (MEIT); 28) Flight Test Driving Principles; and 29) Constellation's Integrated Flight Test Strategy Low Earth Orbit Servicing Capability.
Study of Measurement Strategies of Geometric Deviation of the Position of the Threaded Holes
NASA Astrophysics Data System (ADS)
Drbul, Mário; Martikan, Pavol; Sajgalik, Michal; Czan, Andrej; Broncek, Jozef; Babik, Ondrej
2017-12-01
Verification of product and quality control is an integral part of the current production process. In terms of functional requirements and product interoperability, it is necessary to analyze their dimensional and also geometric specifications. Threaded holes are among the verified elements; they are a substantial part of detachable screw connections and have a broad presence in engineering products. This paper deals with the analysis of measurement strategies for verifying the geometric deviation of the position of threaded holes, based on the indirect method of measuring threaded pins, and examines how applying different measurement strategies can affect the result of the verification of the product.
Remarks to Eighth Annual State of Modeling and Simulation
1999-06-04
Slide fragments: ... organization, training as well as materiel; Discovery vice Verification; Tolerance for Surprise; Free Play; Red Team; Iterative Process; Push to Failure. Account for responsive and innovative future adversaries: free play, adaptive strategies and tactics by professional red teams. Address C2 issues and human ...
Methods of increasing efficiency and maintainability of pipeline systems
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.
2018-05-01
This study is dedicated to the issue of pipeline transportation system maintenance. The article identifies two classes of technical-and-economic indices, which are used to select an optimal pipeline transportation system structure. Further, the article describes various system maintenance strategies and strategy selection criteria. However, these maintenance strategies turn out to be insufficiently effective due to non-optimal values of maintenance intervals. This problem could be solved by running an adaptive maintenance system, which includes a pipeline transportation system reliability improvement algorithm and, in particular, an equipment degradation computer model. In conclusion, three model-building approaches for determining the optimal duration between verification inspections of technical systems were considered.
Manipulation strategies for massive space payloads
NASA Technical Reports Server (NTRS)
Book, Wayne J.
1989-01-01
Control for the bracing strategy is being examined. It was concluded earlier that trajectory planning must be improved to best achieve the bracing motion. Very interesting results were achieved which enable the inverse dynamics of flexible arms to be calculated for linearized motion in a more efficient manner than previously published. The desired motion of the end point beginning at t = 0 and ending at t = t_f is used to calculate the required torque at the joint. The solution is separated into a causal function that is zero for t < 0 and an acausal function that is zero for t > t_f. A number of alternative end point trajectories were explored in terms of the peak torque required, the amount of anticipatory action, and other issues. The single link case is the immediate subject and an experimental verification of that case is being performed. Modeling with experimental verification of closed chain dynamics continues. The modeling effort has pointed out inaccuracies that result from the choice of numerical techniques used to incorporate the closed chain constraints when modeling our experimental prototype RALF (Robotic Arm Large and Flexible). Results were compared to TREETOPS, a multibody code. The experimental verification work is suggesting new ways to make comparisons with systems having structural linearity and joint and geometric nonlinearity. The generation of inertial forces was studied with a small arm that will damp the large arm's vibration.
Development and verification of an agent-based model of opinion leadership.
Anderson, Christine A; Titler, Marita G
2014-09-27
The use of opinion leaders is a strategy used to speed the process of translating research into practice. Much is still unknown about opinion leader attributes and activities and the context in which they are most effective. Agent-based modeling is a methodological tool that enables demonstration of the interactive and dynamic effects of individuals and their behaviors on other individuals in the environment. The purpose of this study was to develop and test an agent-based model of opinion leadership. The details of the design and verification of the model are presented. The agent-based model was developed by using a software development platform to translate an underlying conceptual model of opinion leadership into a computer model. Individual agent attributes (for example, motives and credibility) and behaviors (seeking or providing an opinion) were specified as variables in the model in the context of a fictitious patient care unit. The verification process was designed to test whether or not the agent-based model was capable of reproducing the conditions of the preliminary conceptual model. The verification methods included iterative programmatic testing ('debugging') and exploratory analysis of simulated data obtained from execution of the model. The simulation tests included a parameter sweep, in which the model input variables were adjusted systematically followed by an individual time series experiment. Statistical analysis of model output for the 288 possible simulation scenarios in the parameter sweep revealed that the agent-based model was performing, consistent with the posited relationships in the underlying model. Nurse opinion leaders act on the strength of their beliefs and as a result, become an opinion resource for their uncertain colleagues, depending on their perceived credibility. Over time, some nurses consistently act as this type of resource and have the potential to emerge as opinion leaders in a context where uncertainty exists. The development and testing of agent-based models is an iterative process. The opinion leader model presented here provides a basic structure for continued model development, ongoing verification, and the establishment of validation procedures, including empirical data collection.
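A minimal agent-based sketch of the opinion-seeking dynamics summarized above, assuming simplified uncertainty and credibility attributes on a fictitious unit. It mirrors the flavor of the conceptual model only; the rules, attributes, and parameters are illustrative, not the authors' implementation or software platform.

```python
# Toy agent-based model: uncertain agents consult more credible colleagues, and
# frequently consulted agents emerge as candidate opinion leaders.
import random

random.seed(42)

class Nurse:
    def __init__(self, name):
        self.name = name
        self.uncertainty = random.random()   # propensity to seek an opinion
        self.credibility = random.random()   # how likely others are to consult this agent
        self.times_consulted = 0

def step(agents):
    for agent in agents:
        if random.random() < agent.uncertainty:          # uncertain agents seek advice
            others = [a for a in agents if a is not agent]
            weights = [a.credibility for a in others]     # choose adviser by credibility
            adviser = random.choices(others, weights=weights, k=1)[0]
            adviser.times_consulted += 1

agents = [Nurse(f"nurse_{i}") for i in range(30)]
for _ in range(100):                                      # 100 time steps
    step(agents)

# The most-consulted agents are the emergent opinion-leader candidates.
top = sorted(agents, key=lambda a: a.times_consulted, reverse=True)[:3]
print([(a.name, a.times_consulted, round(a.credibility, 2)) for a in top])
```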
Thermal/structural design verification strategies for large space structures
NASA Technical Reports Server (NTRS)
Benton, David
1988-01-01
Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This requires a combination of analytical and testing methods, pursued through two approaches. The first is to limit thermal testing to sub-elements of the total system only, in a compact configuration (i.e., not fully deployed). The second approach is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.
Testing a Model of Participant Retention in Longitudinal Substance Abuse Research
ERIC Educational Resources Information Center
Gilmore, Devin; Kuperminc, Gabriel P.
2014-01-01
Longitudinal substance abuse research has often been compromised by high rates of attrition, thought to be the result of the lifestyle that often accompanies addiction. Several studies have used strategies including collection of locator information at the baseline assessment, verification of the information, and interim contacts prior to…
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.
Optimal Verification of Entangled States with Local Measurements
NASA Astrophysics Data System (ADS)
Pallister, Sam; Linden, Noah; Montanaro, Ashley
2018-04-01
Consider the task of verifying that a given quantum device, designed to produce a particular entangled state, does indeed produce that state. One natural approach would be to characterize the output state by quantum state tomography, or alternatively, to perform some kind of Bell test, tailored to the state of interest. We show here that neither approach is optimal among local verification strategies for 2-qubit states. We find the optimal strategy in this case and show that quadratically fewer total measurements are needed to verify to within a given fidelity than in published results for quantum state tomography, Bell test, or fidelity estimation protocols. We also give efficient verification protocols for any stabilizer state. Additionally, we show that requiring that the strategy be constructed from local, nonadaptive, and noncollective measurements only incurs a constant-factor penalty over a strategy without these restrictions.
Automatic Verification of Serializers.
1980-03-01
Table-of-contents and abstract fragments: 2.5 Using semaphores to implement serializers; 2.6 A comparison of ... of concurrency control, while Hewitt has concentrated on more primitive control of concurrency in a context where programs communicate by passing ... A translation of serializers into clusters and semaphores is given as a possible implementation strategy. Chapter 3 presents a simple semantic model that sup...
Analyzing Personalized Policies for Online Biometric Verification
Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M.
2014-01-01
Motivated by India’s nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident’s biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India’s program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India’s biometric program. The mean delay is sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32–41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident. PMID:24787752
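The sketch below illustrates the likelihood-ratio decision rule and the two-stage idea described above, assuming Gaussian score models for genuine and imposter comparisons. The distribution parameters and thresholds are illustrative assumptions, not values fitted to the India program data used in the paper.

```python
# Likelihood-ratio verification sketch with hypothetical Gaussian score models.
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))

def log_likelihood_ratio(scores, genuine=(70.0, 10.0), imposter=(30.0, 10.0)):
    # Scores from the acquired subset of biometrics, fused assuming independence.
    llr = 0.0
    for s in scores:
        llr += math.log(gaussian_pdf(s, *genuine)) - math.log(gaussian_pdf(s, *imposter))
    return llr

def decide(scores, accept_thr=4.0, reject_thr=-4.0):
    llr = log_likelihood_ratio(scores)
    if llr >= accept_thr:
        return "genuine"
    if llr <= reject_thr:
        return "imposter"
    return "inconclusive: acquire more biometrics (second stage)"

print(decide([68.0, 72.0]))   # strong genuine evidence from two fingerprint scores
print(decide([52.0]))         # borderline single score triggers the second stage
```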
The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...
A Perspective on Computational Human Performance Models as Design Tools
NASA Technical Reports Server (NTRS)
Jones, Patricia M.
2010-01-01
The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.
Parallel Software Model Checking
2015-01-08
checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its ... focus on formal verification. Generalized PDR: Generalized Property Driven Reachability (GPDR) is an algorithm for solving HORN-SMT reachability...
NASA Technical Reports Server (NTRS)
Landano, M. R.; Easter, R. W.
1984-01-01
Aspects of Space Station automated systems testing and verification are discussed, taking into account several program requirements. It is found that these requirements lead to a number of issues and uncertainties that require study and resolution during the Space Station definition phase. Most, if not all, of the considered uncertainties have implications for the overall testing and verification strategy adopted by the Space Station Program. A description is given of the Galileo Orbiter fault protection design/verification approach. Attention is given to a mission description, an Orbiter description, the design approach and process, the fault protection design verification approach/process, and problems of 'stress' testing.
INF and IAEA: A comparative analysis of verification strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheinman, L.; Kratzer, M.
1992-07-01
This is the final report of a study on the relevance and possible lessons of Intermediate Range Nuclear Force (INF) verification to the International Atomic Energy Agency (IAEA) international safeguards activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, David A.
2012-08-16
Oak Ridge Associated Universities (ORAU) conducted in-process inspections and independent verification (IV) surveys in support of DOE's remedial efforts in Zone 1 of East Tennessee Technology Park (ETTP) in Oak Ridge, Tennessee. Inspections concluded that the remediation contractor's soil removal and survey objectives were satisfied and the dynamic verification strategy (DVS) was implemented as designed. Independent verification (IV) activities included gamma walkover surveys and soil sample collection/analysis over multiple exposure units (EUs).
Cleaning and Cleanliness Verification Techniques for Mars Returned Sample Handling
NASA Technical Reports Server (NTRS)
Mickelson, E. T.; Lindstrom, D. J.; Allton, J. H.; Hittle, J. D.
2002-01-01
Precision cleaning and cleanliness verification techniques are examined as a subset of a comprehensive contamination control strategy for a Mars sample return mission. Additional information is contained in the original extended abstract.
Very fast road database verification using textured 3D city models obtained from airborne imagery
NASA Astrophysics Data System (ADS)
Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie
2014-10-01
Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV-images. This algorithm contains two processes, which exchange input and output, but basically run independently from each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. Depending on the time issue and availability of a geo-database for buildings, the urban terrain reconstruction procedure has semantic models of buildings, trees, and ground as output. Buildings and ground are textured by means of available images. This facilitates the orientation in the model and the interactive verification of the road objects that were initially classified as unknown. The three main modules of the texturing algorithm are: pose estimation (if the videos are not geo-referenced), occlusion analysis, and texture synthesis.
Benchmarking on Tsunami Currents with ComMIT
NASA Astrophysics Data System (ADS)
Sharghi vand, N.; Kanoglu, U.
2015-12-01
There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models with respect to tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), which is a user-friendly interface, developed by NCTR, to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316). The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).
NASA Astrophysics Data System (ADS)
Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia
2016-06-01
A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in presence of target motion. The purpose of the study is to investigate two different 4D PET motion compensation strategies towards the recovery of the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra-reconstruction smoothing.
Geoengineering as a design problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kravitz, Ben; MacMartin, Douglas G.; Wang, Hailong
2016-01-01
Understanding the climate impacts of solar geoengineering is essential for evaluating its benefits and risks. Most previous simulations have prescribed a particular strategy and evaluated its modeled effects. Here we turn this approach around by first choosing example climate objectives and then designing a strategy to meet those objectives in climate models. There are four essential criteria for designing a strategy: (i) an explicit specification of the objectives, (ii) defining what climate forcing agents to modify so the objectives are met, (iii) a method for managing uncertainties, and (iv) independent verification of the strategy in an evaluation model. We demonstrate this design perspective through two multi-objective examples. First, changes in Arctic temperature and the position of tropical precipitation due to CO2 increases are offset by adjusting high-latitude insolation in each hemisphere independently. Second, three different latitude-dependent patterns of insolation are modified to offset CO2-induced changes in global mean temperature, interhemispheric temperature asymmetry, and the Equator-to-pole temperature gradient. In both examples, the "design" and "evaluation" models are state-of-the-art fully coupled atmosphere–ocean general circulation models.
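To first order, the design step of choosing control amplitudes to meet several objectives can be viewed as inverting a small linearized response matrix. The sketch below assumes hypothetical sensitivities and CO2-induced changes, not values from the paper or from any climate model.

```python
import numpy as np

# Hypothetical linearized sensitivities (not values from the paper):
# rows = objectives (Arctic temperature change, tropical precipitation shift),
# columns = control amplitudes (NH high-latitude insolation, SH high-latitude insolation).
A = np.array([[-0.8, -0.2],
              [ 0.1, -0.1]])

# CO2-induced changes in the two objectives that the strategy should offset.
delta_co2 = np.array([2.0, 0.5])

# First-order design: choose insolation adjustments u so that A @ u + delta_co2 = 0.
u = np.linalg.solve(A, -delta_co2)
print("insolation adjustments (W m^-2, hypothetical):", u)
```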
Verification of the CFD simulation system SAUNA for complex aircraft configurations
NASA Astrophysics Data System (ADS)
Shaw, Jonathon A.; Peace, Andrew J.; May, Nicholas E.; Pocock, Mark F.
1994-04-01
This paper is concerned with the verification for complex aircraft configurations of an advanced CFD simulation system known by the acronym SAUNA. A brief description of the complete system is given, including its unique use of differing grid generation strategies (structured, unstructured or both) depending on the geometric complexity of the addressed configuration. The majority of the paper focuses on the application of SAUNA to a variety of configurations from the military aircraft, civil aircraft and missile areas. Mesh generation issues are discussed for each geometry and experimental data are used to assess the accuracy of the inviscid (Euler) model used. It is shown that flexibility and accuracy are combined in an efficient manner, thus demonstrating the value of SAUNA in aerodynamic design.
NASA Astrophysics Data System (ADS)
Ament, F.; Weusthoff, T.; Arpagaus, M.; Rotach, M.
2009-04-01
The main aim of the WWRP Forecast Demonstration Project MAP D-PHASE is to demonstrate the ability of today's models to forecast heavy precipitation and flood events in the Alpine region. To this end, an end-to-end, real-time forecasting system was installed and operated during the D-PHASE Operations Period from June to November 2007. This system includes 30 numerical weather prediction models (deterministic as well as ensemble systems) operated by weather services and research institutes, which issue alerts if predicted precipitation accumulations exceed critical thresholds. In addition to the real-time alerts, all relevant model fields of these simulations are stored in a central data archive. This comprehensive data set allows a detailed assessment of today's quantitative precipitation forecast (QPF) performance in the Alpine region. We will present results of QPF verification against Swiss radar and rain gauge data both from a qualitative point of view, in terms of alerts, and from a quantitative perspective, in terms of precipitation rate. Various influencing factors such as lead time, accumulation time, selection of warning thresholds, and bias corrections will be discussed. In addition to traditional verification of area-average precipitation amounts, the ability of the models to predict the correct precipitation statistics without requiring a point-to-point match will be assessed using modern fuzzy verification techniques. Both analyses reveal significant advantages of deep-convection-resolving models compared with coarser models using parameterized convection. An intercomparison of the model forecasts themselves reveals remarkably high variability between different models, making it worthwhile to evaluate the potential of a multi-model ensemble. Various multi-model ensemble strategies will be tested by combining D-PHASE models into virtual ensemble systems.
Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools
NASA Technical Reports Server (NTRS)
Bis, Rachael; Maul, William A.
2015-01-01
Functional fault models (FFMs) are a directed graph representation of the failure-effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error prone, owing to the intensive and customized effort needed to verify each individual component model, and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
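Because an FFM is a directed graph of failure-effect propagation paths, one natural automated check is that every modeled failure mode can reach at least one observable effect. The sketch below uses a hypothetical miniature FFM and a generic breadth-first search; it is not the NASA tool described in the paper.

```python
from collections import deque

# Hypothetical functional fault model: directed edges are failure-effect
# propagation paths between failure modes, intermediate effects and sensed effects.
ffm = {
    "valve_stuck": ["low_flow"],
    "low_flow": ["pump_cavitation", "low_pressure_reading"],
    "pump_cavitation": ["vibration_alarm"],
    "low_pressure_reading": [],
    "vibration_alarm": [],
}

def reachable_effects(graph, failure_mode):
    """Breadth-first search of all effects reachable from a failure mode."""
    seen, queue = set(), deque([failure_mode])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Automated verification rule: every failure mode must reach an observable effect.
observables = {"low_pressure_reading", "vibration_alarm"}
failure_modes = ["valve_stuck"]          # root causes to check (hypothetical)
for mode in failure_modes:
    assert reachable_effects(ffm, mode) & observables, f"{mode} has no detectable effect"
print("all failure modes reach an observable effect")
```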
Model Based Verification of Cyber Range Event Environments
2015-12-10
Damodaran, Suresh K. (MIT Lincoln Laboratory, 244 Wood St., Lexington, MA, USA)
...apply model based verification to cyber range event environment configurations, allowing for the early detection of errors in event environment... Environment Representation (CCER) ontology. We also provide an overview of a methodology to specify verification rules and the corresponding error...
Self-Verification of Ability through Biased Performance Memory.
ERIC Educational Resources Information Center
Karabenick, Stuart A.; LeBlanc, Daniel
Evidence points to a pervasive tendency for persons to behave to maintain their existing cognitive structures. One strategy by which this self-verification is made more probable involves information processing. Through attention, encoding and retrieval, and the interpretation of events, persons process information so that self-confirmatory…
CD volume design and verification
NASA Technical Reports Server (NTRS)
Li, Y. P.; Hughes, J. S.
1993-01-01
In this paper, we describe a prototype for CD-ROM volume design and verification. This prototype allows users to create their own model of CD volumes by modifying a prototypical model. Rule-based verification of the test volumes can then be performed against the volume definition. This working prototype has proven the concept of model-driven, rule-based design and verification for large quantities of data. The model defined for the CD-ROM volumes becomes a data model as well as an executable specification.
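A rule-based volume check of this kind can be sketched as a set of predicates applied to a volume model. The example below uses a toy volume dictionary and illustrative constraints loosely inspired by ISO 9660 Level 1 limits; it is not the prototype's actual model or rule set.

```python
# Toy CD volume model and rule-based checks (illustrative constraints only).
volume = {
    "volume_id": "PDS_DATA_01",
    "files": [
        {"path": "DATA/IMG0001.DAT", "size": 1_200_000},
        {"path": "DOC/README.TXT", "size": 4_096},
    ],
}

def check_volume(vol, max_depth=8, name_len=12):
    """Apply simple rules to the volume model and return any violations."""
    errors = []
    if len(vol["volume_id"]) > 32:
        errors.append("volume identifier too long")
    for f in vol["files"]:
        parts = f["path"].split("/")
        if len(parts) > max_depth:
            errors.append(f"{f['path']}: directory tree too deep")
        if any(len(p) > name_len for p in parts):
            errors.append(f"{f['path']}: component name too long")
    return errors

print(check_volume(volume) or "volume conforms to the rules")
```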
Qiu, Shi; Yang, Wen-Zhi; Yao, Chang-Liang; Qiu, Zhi-Dong; Shi, Xiao-Jian; Zhang, Jing-Xian; Hou, Jin-Jun; Wang, Qiu-Rong; Wu, Wan-Ying; Guo, De-An
2016-07-01
A key segment in the authentication of herbal medicines is the establishment of robust biomarkers that embody the intrinsic metabolite differences independent of the growing environment or processing techniques. We present a strategy based on nontargeted metabolomics and "Commercial-homophyletic" comparison-induced biomarker verification with new bioinformatic vehicles, to improve the efficiency and reliability of herbal medicine authentication. The chemical differentiation of five different parts (root, leaf, flower bud, berry, and seed) of Panax ginseng is illustrated as a case study. First, an optimized ultra-performance liquid chromatography/quadrupole time-of-flight-MS(E) (UPLC/QTOF-MS(E)) approach was established for global metabolite profiling. Second, UNIFI™ combined with a search of an in-house library was employed to automatically characterize the metabolites. Third, pattern-recognition multivariate statistical analysis of the MS(E) data of different parts of commercial and homophyletic samples was performed separately to explore potential biomarkers. Fourth, potential biomarkers deduced from commercial and homophyletic root and leaf samples were cross-compared to infer robust biomarkers. Fifth, discriminating models based on an artificial neural network (ANN) were established to identify different parts of P. ginseng. Consequently, 164 compounds were characterized, and 11 robust biomarkers enabling the differentiation among root, leaf, flower bud, and berry were discovered by removing those that were structurally unstable or possibly processing-related. The ANN models using the robust biomarkers discriminated the four different parts exactly, as well as root adulterated with leaf. In conclusion, biomarker verification using homophyletic samples contributes to the discovery of robust biomarkers. The integrated strategy facilitates authentication of herbal medicines in a more efficient and more intelligent manner. Copyright © 2016 Elsevier B.V. All rights reserved.
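The cross-comparison step amounts to intersecting the candidate biomarker sets discovered independently from the commercial and homophyletic cohorts and then removing unstable or processing-related markers. A minimal sketch with hypothetical marker names:

```python
# Hypothetical candidate biomarkers from the two sample cohorts (names illustrative).
commercial_candidates = {"ginsenoside_Rb1", "ginsenoside_Re", "marker_X", "marker_Y"}
homophyletic_candidates = {"ginsenoside_Rb1", "ginsenoside_Re", "marker_Z"}

# Markers flagged as processing-related or structurally unstable are removed.
unstable_or_processing = {"marker_Y"}

robust_biomarkers = (commercial_candidates & homophyletic_candidates) - unstable_or_processing
print(robust_biomarkers)   # markers supported by both cohorts
```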
Volumetric Verification of Multiaxis Machine Tool Using Laser Tracker
Aguilar, Juan José
2014-01-01
This paper presents a method of volumetric verification for machine tools with linear and rotary axes using a laser tracker. Rather than a method for a particular machine, it presents a methodology that can be applied to any machine type. The paper presents the schema and kinematic model of a machine with three axes of movement, two linear and one rotary, including the measurement system and the nominal rotation matrix of the rotary axis. Using this, the machine tool volumetric error is obtained, and nonlinear optimization techniques are employed to improve the accuracy of the machine tool. The verification provides a mathematical, rather than physical, compensation in less time than other verification methods, through indirect measurement of the geometric errors of the machine's linear and rotary axes. The paper also presents an extensive study of the appropriateness and drawbacks of the regression function employed, depending on the types of movement of the axes of a given machine. Likewise, strengths and weaknesses of measurement methods and optimization techniques are presented, depending on the space available to place the measurement system. These studies provide the most appropriate strategies to verify each machine tool, taking into consideration its configuration and its available work space. PMID:25202744
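The core of such a verification is fitting the parameters of a kinematic error model so that it reproduces the laser-tracker measurements, then applying the identified errors as a mathematical compensation. The sketch below illustrates the idea for a single linear axis with a synthetic scale-plus-periodic error model and scipy's least_squares; the error model and values are made up, not those of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "laser tracker" data for one linear axis (toy example):
# measured position = commanded position * (1 + scale error) + periodic error + noise.
rng = np.random.default_rng(0)
x_cmd = np.linspace(0.0, 500.0, 60)                 # commanded positions, mm
true_params = [1.5e-4, 0.004, 2 * np.pi / 20.0]     # scale error, amplitude, spatial freq.
x_meas = x_cmd * (1 + true_params[0]) + true_params[1] * np.sin(true_params[2] * x_cmd)
x_meas += rng.normal(scale=5e-4, size=x_cmd.size)

def residuals(p, x, y):
    scale, amp, freq = p
    model = x * (1 + scale) + amp * np.sin(freq * x)
    return model - y

fit = least_squares(residuals, x0=[0.0, 0.001, 0.3], args=(x_cmd, x_meas))
print("identified error parameters:", fit.x)

# Mathematical compensation: subtract the identified error from commanded positions.
compensated = x_cmd - (x_cmd * fit.x[0] + fit.x[1] * np.sin(fit.x[2] * x_cmd))
```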
Inconsistent Investment and Consumption Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronborg, Morten Tolver, E-mail: mtk@atp.dk; Steffensen, Mogens, E-mail: mogens@math.ku.dk
In a traditional Black–Scholes market we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time and state dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.
47 CFR 73.151 - Field strength measurements to establish performance of directional antennas.
Code of Federal Regulations, 2010 CFR
2010-10-01
... verified either by field strength measurement or by computer modeling and sampling system verification. (a... specifically identified by the Commission. (c) Computer modeling and sample system verification of modeled... performance verified by computer modeling and sample system verification. (1) A matrix of impedance...
2009-01-01
Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075
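Model checking of qualitative models like these ultimately amounts to exploring a finite state-transition graph for temporal-logic properties. The sketch below shows a reachability ("EF") check over a hypothetical three-state transition graph; it is not GNA, NUSMV or CADP, and the states are not the carbon starvation model.

```python
# Minimal reachability ("EF phi") check over a qualitative state-transition graph.
# States and transitions are hypothetical, purely for illustration.
transitions = {
    "exponential_growth": ["transition_state"],
    "transition_state": ["stationary_phase", "exponential_growth"],
    "stationary_phase": ["stationary_phase"],
}

def ef_holds(initial, prop, graph):
    """True if some path from `initial` reaches a state satisfying `prop`."""
    stack, visited = [initial], set()
    while stack:
        s = stack.pop()
        if prop(s):
            return True
        if s in visited:
            continue
        visited.add(s)
        stack.extend(graph.get(s, []))
    return False

# Property: the network can eventually reach the stationary phase.
print(ef_holds("exponential_growth", lambda s: s == "stationary_phase", transitions))
```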
Strategies for Validation Testing of Ground Systems
NASA Technical Reports Server (NTRS)
Annis, Tammy; Sowards, Stephanie
2009-01-01
In order to accomplish the full Vision for Space Exploration announced by former President George W. Bush in 2004, NASA will have to develop a new space transportation system and supporting infrastructure. The main portion of this supporting infrastructure will reside at the Kennedy Space Center (KSC) in Florida and will either be newly developed or a modification of existing vehicle processing and launch facilities, including Ground Support Equipment (GSE). This type of large-scale launch site development is unprecedented since the time of the Apollo Program. In order to accomplish this successfully within the limited budget and schedule constraints, a combination of traditional and innovative strategies for Verification and Validation (V&V) has been developed. The core of these strategies consists of a building-block approach to V&V, starting with component V&V and ending with a comprehensive end-to-end validation test of the complete launch site, called a Ground Element Integration Test (GEIT). This paper will outline these strategies and provide the high-level planning for meeting the challenges of implementing V&V on a large-scale development program. KEY WORDS: Systems, Elements, Subsystem, Integration Test, Ground Systems, Ground Support Equipment, Component, End Item, Test and Verification Requirements (TVR), Verification Requirements (VR)
Validation of mesoscale models
NASA Technical Reports Server (NTRS)
Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew
1993-01-01
The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.
VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.
2015-12-01
A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
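A central piece of such solution verification is Richardson extrapolation over systematically refined grids together with the Grid Convergence Index. The sketch below applies the standard constant-refinement-ratio formulas to made-up functional values; it is not VAVUQ code.

```python
import math

def richardson_gci(f_fine, f_medium, f_coarse, r, safety=1.25):
    """Observed order, extrapolated value, and fine-grid GCI for three grids
    with constant refinement ratio r (standard formulas; values below are made up)."""
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)       # Richardson extrapolation
    rel_err = abs((f_medium - f_fine) / f_fine)
    gci_fine = safety * rel_err / (r**p - 1)                  # fine-grid GCI
    return p, f_exact, gci_fine

# Example: a solution functional computed on coarse/medium/fine grids (r = 2).
print(richardson_gci(f_fine=0.9713, f_medium=0.9700, f_coarse=0.9648, r=2))
```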
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
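As a generic illustration of the active subspace idea (not the HIV model or the method exactly as used in [22]), the dominant directions can be estimated by eigendecomposition of the averaged outer product of gradient samples:

```python
import numpy as np

# Active subspace sketch on a toy model f(x) = exp(0.7 x1 + 0.3 x2 - 0.05 x3):
# eigendecompose C = E[grad f grad f^T] estimated from gradient samples.
rng = np.random.default_rng(1)

def grad_f(x):
    w = np.array([0.7, 0.3, -0.05])
    return np.exp(w @ x) * w

X = rng.uniform(-1.0, 1.0, size=(2000, 3))           # input samples
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(G)                                  # Monte Carlo estimate of C

eigvals, eigvecs = np.linalg.eigh(C)                  # ascending order
order = np.argsort(eigvals)[::-1]
print("eigenvalue spectrum:", eigvals[order])
print("dominant (active) direction:", eigvecs[:, order[0]])
```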
Efficient and Scalable Graph Similarity Joins in MapReduce
Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning, and near duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135
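The filtering-verification idea can be sketched as a cheap signature-based filter that prunes graph pairs, followed by exact graph edit distance only for the survivors. The toy example below uses node-label overlap as a heuristic signature filter and networkx's graph_edit_distance for verification; the spectral Bloom filters and the MapReduce distribution of MGSJoin are omitted.

```python
import itertools
import networkx as nx

# Toy labeled graphs; signatures here are simply node-label collections.
def make_graph(edges, labels):
    g = nx.Graph(edges)
    nx.set_node_attributes(g, labels, "label")
    return g

graphs = {
    "g1": make_graph([(0, 1), (1, 2)], {0: "C", 1: "O", 2: "N"}),
    "g2": make_graph([(0, 1), (1, 2)], {0: "C", 1: "O", 2: "O"}),
    "g3": make_graph([(0, 1)], {0: "S", 1: "S"}),
}

def signature(g):
    return sorted(d["label"] for _, d in g.nodes(data=True))

def filter_pass(g1, g2, tau):
    # Heuristic signature filter: prune pairs whose node labels barely overlap.
    s1, s2 = signature(g1), signature(g2)
    common = len(set(s1) & set(s2))
    return (max(len(s1), len(s2)) - common) <= tau

tau = 2
candidates = [(a, b) for a, b in itertools.combinations(graphs, 2)
              if filter_pass(graphs[a], graphs[b], tau)]

# Verification: exact graph edit distance only for surviving candidate pairs.
node_match = lambda n1, n2: n1["label"] == n2["label"]
results = [(a, b, nx.graph_edit_distance(graphs[a], graphs[b], node_match=node_match))
           for a, b in candidates]
print([(a, b, d) for a, b, d in results if d is not None and d <= tau])
```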
Calibration and simulation of two large wastewater treatment plants operated for nutrient removal.
Ferrer, J; Morenilla, J J; Bouzas, A; García-Usach, F
2004-01-01
Control and optimisation of plant processes has become a priority for WWTP managers. The calibration and verification of a mathematical model provides an important tool for the investigation of advanced control strategies that may assist in the design or optimization of WWTPs. This paper describes the calibration of the ASM2d model for two full scale biological nitrogen and phosphorus removal plants in order to characterize the biological process and to upgrade the plants' performance. Results from simulation showed a good correspondence with experimental data demonstrating that the model and the calibrated parameters were able to predict the behaviour of both WWTPs. Once the calibration and simulation process was finished, a study for each WWTP was done with the aim of improving its performance. Modifications focused on reactor configuration and operation strategies were proposed.
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data such as NEE, and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
Systematic Model-in-the-Loop Test of Embedded Control Systems
NASA Astrophysics Data System (ADS)
Krupp, Alexander; Müller, Wolfgang
Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides lack of methodical support for natural language formalization, there does not exist a standardized and accepted means for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.
Review and verification of CARE 3 mathematical model and code
NASA Technical Reports Server (NTRS)
Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.
1983-01-01
The CARE-III mathematical model and code verification performed by Boeing Computer Services is documented. The mathematical model was verified for permanent and intermittent faults. The transient fault model was not addressed. The code verification was performed on CARE-III Version 3. A CARE-III Version 4, which corrects deficiencies identified in Version 3, is being developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-03-01
The certification and verification of the Northrup Model NSC-01-0732 Fresnel lens tracking solar collector are presented. A certification statement is included with signatures and a separate report on the structural analysis of the collector system. System verification against the Interim Performance Criteria are indicated by matrices with verification discussion, analysis, and enclosed test results.
Wright, Kevin B; King, Shawn; Rosenberg, Jenny
2014-01-01
This study investigated the influence of social support and self-verification on loneliness, depression, and stress among 477 college students. The authors propose and test a theoretical model using structural equation modeling. The results indicated empirical support for the model, with self-verification mediating the relation between social support and health outcomes. The results have implications for social support and self-verification research, which are discussed along with directions for future research and limitations of the study.
Memory for Verbally Presented Routes: A Comparison of Strategies Used by Blind and Sighted People.
ERIC Educational Resources Information Center
Easton, R. D.; Bentzen, B. L.
1987-01-01
Congenitally-blind (N=16) and sighted (N=16) young adults listened to descriptions of routes and then finger traced routes through a raised line matrix. Route tracing speed and accuracy revealed that spatial sentence verification interfered with route memory more than abstract/verbal sentence verification for all subjects. (Author/CB)
NASA Technical Reports Server (NTRS)
Hale, Joseph P.
2006-01-01
Models and simulations (M&S) are critical resources in the exploration of space. They support program management, systems engineering, integration, analysis, test, and operations and provide critical information and data supporting key analyses and decisions (technical, cost and schedule). Consequently, there is a clear need to establish a solid understanding of M&S strengths and weaknesses, and the bounds within which they can credibly support decision-making. Their usage requires the implementation of a rigorous approach to verification, validation and accreditation (VV&A) and establishment of formal processes and practices associated with their application. To ensure decision-making is suitably supported by information (data, models, test beds) from activities (studies, exercises) from M&S applications that are understood and characterized, ESMD is establishing formal, tailored VV&A processes and practices. In addition, to ensure the successful application of M&S within ESMD, a formal process for the certification of analysts that use M&S is being implemented. This presentation will highlight NASA's Exploration Systems Mission Directorate (ESMD) management approach for M&S VV&A to ensure decision-makers receive timely information on the model's fidelity, credibility, and quality.
User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Coleman, Kayla; Hooper, Russell W.
2016-10-04
In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers. More specifically, the CASL VUQ Strategy [33] prescribes the use of Predictive Capability Maturity Model (PCMM) assessments [37]. PCMM is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. Exercising a computational model with the methods in Dakota will yield, in part, evidence for a predictive capability maturity model (PCMM) assessment. Table 1.1 summarizes some key predictive maturity related activities (see details in [33]), with examples of how Dakota fits in. This manual offers CASL partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.
Williams, Rebecca S; Derrick, Jason; Liebman, Aliza Kate; LaFleur, Kevin; Ribisl, Kurt M
2018-05-01
Identify the population of internet e-cigarette vendors (IEVs) and conduct content analyses of their age verification, purchase and delivery methods in 2013 and 2014. We used multiple sources to identify IEV websites, primarily complex search algorithms scanning more than 180 million websites. In 2013, we manually screened 32 446 websites, identifying 980 IEVs and selecting the 281 most popular for content analysis. This methodology yielded 31 239 websites for screening in 2014, identifying 3096 IEVs, with 283 selected for content analysis. The proportion of vendors that sold online-only, with no retail store, dropped significantly from 2013 (74.7%) to 2014 (64.3%) (p<0.01), with a corresponding significant decrease in US-based vendors (71.9% in 2013 and 65% in 2014). Most vendors did little to prevent youth access in either year, with 67.6% in 2013 and 63.2% in 2014 employing no age verification or relying exclusively on strategies that cannot effectively verify age. Effective age verification strategies such as online age verification services (7.1% in 2013 and 8.5% in 2014), driving licences (1.8% in 2013 and 7.4% in 2014, p<0.01) or age verification at delivery (6.4% in 2013 and 8.1% in 2014) were rarely advertised on IEV websites. Nearly all vendors advertised accepting credit cards, and about ¾ advertised shipping via the United States Postal Service, similar to the internet cigarette industry prior to federal bans. The number of IEVs grew sharply from 2013 to 2014, with poor age verification practices. New and expanded regulations for online e-cigarette sales are needed, including strict age and identity verification requirements. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong
2018-06-01
The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Because a single energy management strategy cannot cope with the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which comprises a driving pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. The multi-mode strategy can automatically switch to a genetic-algorithm-optimized thermostat strategy under specific driving conditions, according to the differences in the condition recognition results. Simulation experiments were carried out after verifying the model's validity on a dynamometer test bench. Simulation results show that the proposed strategy achieves better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
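As an illustration of the recognizer component only, the sketch below trains a basic LVQ1 classifier on synthetic driving-condition features (mean speed and acceleration variability); the features, classes, prototypes and learning rate are all assumptions, not the configuration used in the paper.

```python
import numpy as np

# LVQ1 sketch for driving-pattern recognition on synthetic features
# (mean speed [km/h], std of acceleration [m/s^2]); classes: 0=urban, 1=highway.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([30, 0.8], [8, 0.2], size=(100, 2)),    # urban-like cycles
               rng.normal([90, 0.3], [10, 0.1], size=(100, 2))])   # highway-like cycles
y = np.array([0] * 100 + [1] * 100)

prototypes = np.array([[40.0, 0.6], [80.0, 0.4]])   # one prototype per class
proto_labels = np.array([0, 1])
lr = 0.05

for epoch in range(20):
    for xi, yi in zip(X, y):
        j = np.argmin(np.linalg.norm(prototypes - xi, axis=1))   # best matching unit
        step = lr * (xi - prototypes[j])
        prototypes[j] += step if proto_labels[j] == yi else -step

def recognize(x):
    return proto_labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]

print(recognize(np.array([35.0, 0.7])), recognize(np.array([95.0, 0.25])))
```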
Research on Correlation between Vehicle Cycle and Engine Cycle in Heavy-duty commercial vehicle
NASA Astrophysics Data System (ADS)
lin, Chen; Zhong, Wang; Shuai, Liu
2017-12-01
In order to study the correlation between the vehicle cycle and the engine cycle in heavy commercial vehicles, a conversion model from vehicle cycle to engine cycle is constructed based on vehicle power system theory and the shift strategy, and is verified against a diesel truck. The results show that the model represents engine operation reasonably and reliably. During high-speed acceleration, differences in the model's gear selection lead to deviations from the actual behaviour. Compared with the drum test, the engine speed distribution obtained from the model is shifted to the right, corresponding to a lower gear. Gear selection therefore has a strong influence on the model.
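The core of such a conversion is mapping each vehicle-speed sample to an engine speed through the selected gear ratio, final drive ratio and wheel radius. A minimal sketch with hypothetical driveline parameters and a simple speed-threshold shift rule (idle behaviour and torque are ignored):

```python
import numpy as np

# Convert a vehicle speed trace into engine speed using hypothetical driveline
# parameters and a simple speed-based shift strategy (illustrative only).
wheel_radius = 0.5                            # m
final_drive = 4.1
gear_ratios = [7.0, 4.5, 2.9, 1.9, 1.3, 1.0]
shift_speeds = [15, 30, 45, 60, 75]           # km/h thresholds for up-shifting

def gear_for_speed(v_kmh):
    return sum(v_kmh > s for s in shift_speeds)      # index into gear_ratios

def engine_speed_rpm(v_kmh):
    v = v_kmh / 3.6                                   # m/s
    ig = gear_ratios[gear_for_speed(v_kmh)]
    wheel_rpm = v / (2 * np.pi * wheel_radius) * 60
    return wheel_rpm * ig * final_drive

vehicle_cycle = np.array([0, 10, 25, 40, 60, 80])     # km/h samples from a test cycle
print([round(engine_speed_rpm(v)) for v in vehicle_cycle])
```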
Djuris, Jelena; Djuric, Zorica
2017-11-30
Mathematical models can be used as an integral part of the quality by design (QbD) concept throughout the product lifecycle for variety of purposes, including appointment of the design space and control strategy, continual improvement and risk assessment. Examples of different mathematical modeling techniques (mechanistic, empirical and hybrid) in the pharmaceutical development and process monitoring or control are provided in the presented review. In the QbD context, mathematical models are predominantly used to support design space and/or control strategies. Considering their impact to the final product quality, models can be divided into the following categories: high, medium and low impact models. Although there are regulatory guidelines on the topic of modeling applications, review of QbD-based submission containing modeling elements revealed concerns regarding the scale-dependency of design spaces and verification of models predictions at commercial scale of manufacturing, especially regarding real-time release (RTR) models. Authors provide critical overview on the good modeling practices and introduce concepts of multiple-unit, adaptive and dynamic design space, multivariate specifications and methods for process uncertainty analysis. RTR specification with mathematical model and different approaches to multivariate statistical process control supporting process analytical technologies are also presented. Copyright © 2017 Elsevier B.V. All rights reserved.
Statistical analysis of NWP rainfall data from Poland.
NASA Astrophysics Data System (ADS)
Starosta, Katarzyna; Linkowska, Joanna
2010-05-01
The goal of this work is to summarize the latest results of precipitation verification in Poland. At IMGW, COSMO_PL version 4.0 has been running with the following configuration: 14 km horizontal grid spacing, initial times at 00 UTC and 12 UTC, and a forecast range of 72 h. The model fields were verified against Polish SYNOP stations using a new verification tool. For accumulated precipitation, the indices FBI, POD, FAR and ETS are calculated from the contingency table. This paper presents a comparison of monthly and seasonal verification of 6 h, 12 h and 24 h accumulated precipitation in 2009. From February 2010, the model will be run at IMGW with 7 km grid spacing, and precipitation verification results for the two different model resolutions will be shown.
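The categorical indices mentioned come directly from the 2x2 contingency table of forecast versus observed threshold exceedances. A small sketch with the standard definitions and made-up counts:

```python
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """Standard contingency-table precipitation scores (FBI, POD, FAR, ETS)."""
    n = hits + false_alarms + misses + correct_negatives
    fbi = (hits + false_alarms) / (hits + misses)                 # frequency bias
    pod = hits / (hits + misses)                                  # probability of detection
    far = false_alarms / (hits + false_alarms)                    # false alarm ratio
    hits_random = (hits + false_alarms) * (hits + misses) / n
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    return {"FBI": fbi, "POD": pod, "FAR": far, "ETS": ets}

# Hypothetical counts for 24 h precipitation above a chosen threshold at SYNOP stations.
print(categorical_scores(hits=42, false_alarms=18, misses=25, correct_negatives=315))
```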
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. 
Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees responses but as all interviewees were represented in the analysis no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.
Managing complexity in simulations of land surface and near-surface processes
Coon, Ethan T.; Moulton, J. David; Painter, Scott L.
2016-01-12
Increasing computing power and the growing role of simulation in Earth systems science have led to an increase in the number and complexity of processes in modern simulators. We present a multiphysics framework that specifies interfaces for coupled processes and automates weak and strong coupling strategies to manage this complexity. Process management is enabled by viewing the system of equations as a tree, where individual equations are associated with leaf nodes and coupling strategies with internal nodes. A dynamically generated dependency graph connects a variable to its dependencies, streamlining and automating model evaluation, easing model development, and ensuring models are modular and flexible. Additionally, the dependency graph is used to ensure that data requirements are consistent between all processes in a given simulation. Here we discuss the design and implementation of these concepts within the Arcos framework, and demonstrate their use for verification testing and hypothesis evaluation in numerical experiments.
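The dependency-graph idea can be sketched as evaluating derived quantities in topological order, so that each variable is computed only after its dependencies. The toy example below uses Python's standard graphlib and hypothetical variable names; it is not the Arcos implementation.

```python
from graphlib import TopologicalSorter

# Toy dependency graph in the spirit of the description (names hypothetical):
# each derived variable lists the variables it depends on.
deps = {
    "saturation": {"pressure"},
    "relative_perm": {"saturation"},
    "water_flux": {"relative_perm", "pressure"},
}

evaluators = {
    "pressure": lambda v: 101325.0,                        # primary variable (given)
    "saturation": lambda v: min(1.0, v["pressure"] / 2e5),
    "relative_perm": lambda v: v["saturation"] ** 3,
    "water_flux": lambda v: -1e-9 * v["relative_perm"] * v["pressure"],
}

values = {}
for name in TopologicalSorter(deps).static_order():        # dependencies evaluated first
    values[name] = evaluators[name](values)
print(values)
```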
The Role of Integrated Modeling in the Design and Verification of the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Mosier, Gary E.; Howard, Joseph M.; Johnston, John D.; Parrish, Keith A.; Hyde, T. Tupper; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. System-level verification of critical optical performance requirements will rely on integrated modeling to a considerable degree. In turn, requirements for accuracy of the models are significant. The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures, effectively precludes validation of the models and verification of optical performance with a single test in 1-g. Rather, a complex series of steps is planned by which the components of the end-to-end models are validated at various levels of subassembly, and the ultimate verification of optical performance is by analysis using the assembled models. This paper describes the critical optical performance requirements driving the integrated modeling activity, shows how the error budget is used to allocate and track contributions to total performance, and presents examples of integrated modeling methods and results that support the preliminary observatory design. Finally, the concepts for model validation and the role of integrated modeling in the ultimate verification of the observatory are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plotkowski, A.; Kirka, M. M.; Babu, S. S.
2017-10-16
A fundamental understanding of spatial and temporal thermal distributions is crucial for predicting solidification and solid-state microstructural development in parts made by additive manufacturing. While sophisticated numerical techniques that are based on finite element or finite volume methods are useful for gaining insight into these phenomena at the length scale of the melt pool (100 - 500 µm), they are ill-suited for predicting engineering trends over full part cross-sections (> 10 x 10 cm) or many layers over long process times (> many days) due to the necessity of fully resolving the heat source characteristics. On the other hand, it is extremely difficult to resolve the highly dynamic nature of the process using purely in-situ characterization techniques. This article proposes a pragmatic alternative based on a semi-analytical approach to predicting the transient heat conduction during the powder bed metal additive manufacturing process. The model calculations were theoretically verified for selective laser melting of AlSi10Mg and electron beam melting of IN718 powders for simple cross-sectional geometries, and the transient results are compared to steady-state predictions from the Rosenthal equation. It is shown that the transient effects of the scan strategy create significant variations in the melt pool geometry and solid-liquid interface velocity, especially as the thermal diffusivity of the material decreases and the pre-heat of the process increases. With positive verification of the strategy, the model was then experimentally validated to simulate two point-melt scan strategies during electron beam melting of IN718, one intended to produce a columnar and one an equiaxed grain structure. Lastly, through comparison of the solidification conditions (i.e. transient and spatial variations of thermal gradient and liquid-solid interface velocity) predicted by the model with phenomenological CET theory, the model accurately predicted the experimental grain structures.
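The steady-state reference used in such comparisons is the Rosenthal moving-point-source solution. A minimal sketch of that baseline with rough, made-up material and process values (not the calibrated properties or settings from the study):

```python
import numpy as np

def rosenthal_temperature(x, y, z, power, speed, k, alpha, t0):
    """Steady-state Rosenthal solution for a moving point heat source.

    x is the coordinate along the travel direction in the source-attached frame.
    Parameter values used below are rough assumptions, not those from the study.
    """
    r = np.sqrt(x**2 + y**2 + z**2)
    return t0 + power / (2 * np.pi * k * r) * np.exp(-speed * (r + x) / (2 * alpha))

# Transverse temperature profile through the source location (IN718-like guesses:
# absorbed power in W, speed in m/s, conductivity in W/m/K, diffusivity in m^2/s,
# preheat temperature in K).
y = np.linspace(1e-6, 300e-6, 600)
temps = rosenthal_temperature(x=0.0, y=y, z=0.0, power=300.0, speed=0.5,
                              k=25.0, alpha=5e-6, t0=1273.0)
liquidus = 1609.0
half_width = y[temps >= liquidus].max() if np.any(temps >= liquidus) else 0.0
print(f"approximate melt pool half-width: {half_width * 1e6:.0f} µm")
```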
Using SysML for verification and validation planning on the Large Synoptic Survey Telescope (LSST)
NASA Astrophysics Data System (ADS)
Selvy, Brian M.; Claver, Charles; Angeli, George
2014-08-01
This paper provides an overview of the tool, language, and methodology used for Verification and Validation Planning on the Large Synoptic Survey Telescope (LSST) Project. LSST has implemented a Model Based Systems Engineering (MBSE) approach as a means of defining all systems engineering planning and definition activities that have historically been captured in paper documents. Specifically, LSST has adopted the Systems Modeling Language (SysML) standard and is utilizing a software tool called Enterprise Architect, developed by Sparx Systems. Much of the historical use of SysML has focused on the early phases of the project life cycle. Our approach is to extend the advantages of MBSE into later stages of the construction project. This paper details the methodology employed to use the tool to document the verification planning phases, including the extension of the language to accommodate the project's needs. The process includes defining the Verification Plan for each requirement, which in turn consists of a Verification Requirement, Success Criteria, Verification Method(s), Verification Level, and Verification Owner. Each Verification Method for each Requirement is defined as a Verification Activity and mapped into Verification Events, which are collections of activities that can be executed concurrently in an efficient and complementary way. Verification Event dependency and sequences are modeled using Activity Diagrams. The methodology employed also ties in to the Project Management Control System (PMCS), which utilizes Primavera P6 software, mapping each Verification Activity as a step in a planned activity. This approach leads to full traceability from initial Requirement to scheduled, costed, and resource loaded PMCS task-based activities, ensuring all requirements will be verified.
Credit Card Fraud Detection: A Realistic Modeling and a Novel Learning Strategy.
Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca
2017-09-14
Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers' habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions is checked by investigators in a timely manner). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that analyze massive streams of credit card transactions every day. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class imbalance and concept drift in a real-world data stream containing more than 75 million transactions, authorized over a time window of three years.
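The class-imbalance aspect described above can be illustrated with a minimal sketch: a weighted classifier trained on a synthetic, heavily imbalanced data set and scored with average precision. This is not the authors' learning strategy (which additionally handles concept drift and verification latency), and the data are randomly generated stand-ins for a real transaction stream.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic, heavily imbalanced "transactions" (about 0.2% positives).
X, y = make_classification(n_samples=50_000, n_features=20,
                           weights=[0.998], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Class weighting is one simple answer to imbalance.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Area under the precision-recall curve is more informative than accuracy here.
print("average precision:", average_precision_score(y_te, scores))
```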
NASA Astrophysics Data System (ADS)
Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard
2006-05-01
A GMM based audio visual speaker verification system is described and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are accomplished on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM based classifier. Fusion of both audio and video modalities for audio visual speaker verification is compared with face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the PDAtabase newly developed within the scope of the SecurePhone project.
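A hedged sketch of the GMM-based verification scoring described above: a client model and a background model are fitted, and a test segment is accepted or rejected on their log-likelihood ratio. The features, component counts, and threshold below are placeholders, and the sketch is not the BECARS classifier itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder feature vectors (DCT or cepstral features would be used in
# practice); these are random stand-ins for illustration only.
client_feats = rng.normal(0.5, 1.0, size=(500, 20))
background_feats = rng.normal(0.0, 1.0, size=(5000, 20))
test_feats = rng.normal(0.5, 1.0, size=(80, 20))

ubm = GaussianMixture(n_components=8, random_state=0).fit(background_feats)
client = GaussianMixture(n_components=8, random_state=0).fit(client_feats)

# Verification score: average log-likelihood ratio of the test segment
# under the client model versus the background model.
llr = client.score(test_feats) - ubm.score(test_feats)
threshold = 0.0    # placeholder; tuned on development data in practice
print("accept" if llr > threshold else "reject", llr)
```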
Propel: Tools and Methods for Practical Source Code Model Checking
NASA Technical Reports Server (NTRS)
Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem
2003-01-01
The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset are called Propel. It builds upon the Java PathFinder (JPF) tool, an explicit-state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaptation of existing best design practices that has the desired side-effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel capabilities on customer applications.
Temporal Specification and Verification of Real-Time Systems.
1991-08-30
of concrete real-time systems can be modeled adequately. Specification: We present two conservative extensions of temporal logic that allow for the...logic. We present both model-checking algorithms for the automatic verification of finite-state real-time systems and proof methods for the deductive verification of real-time systems.
FORMED: Bringing Formal Methods to the Engineering Desktop
2016-02-01
integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling...input-output contract satisfaction and absence of null pointer dereferences. Subject terms: Formal Methods, Software Verification, Model-Based...Domain-specific languages (DSLs) drive both implementation and formal verification
78 FR 58492 - Generator Verification Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-24
... Control Functions), MOD-027-1 (Verification of Models and Data for Turbine/Governor and Load Control or...), MOD-027-1 (Verification of Models and Data for Turbine/Governor and Load Control or Active Power... Category B and C contingencies, as required by wind generators in Order No. 661, or that those generators...
Tian, Yu; Kang, Xiaodong; Li, Yunyi; Li, Wei; Zhang, Aiqun; Yu, Jiangchen; Li, Yiping
2013-01-01
This article presents a strategy for identifying the source location of a chemical plume in near-shore oceanic environments where the plume is developed under the influence of turbulence, tides and waves. This strategy includes two modules: source declaration (or identification) and source verification embedded in a subsumption architecture. Algorithms for source identification are derived from the moth-inspired plume tracing strategies based on a chemical sensor. The in-water test missions, conducted in November 2002 at San Clemente Island (California, USA), in June 2003 at Duck (North Carolina, USA), and in October 2010 at Dalian Bay (China), successfully identified the source locations after autonomous underwater vehicles tracked the rhodamine dye plumes with a significant meander over 100 meters. The objective of the verification module is to verify the declared plume source using a visual sensor. Because images taken in near-shore oceanic environments are very vague and colors in the images are not well-defined, we adopt a fuzzy color extractor to segment the color components and recognize the chemical plume and its source by measuring color similarity. The source verification module is tested by images taken during the CPT missions. PMID:23507823
Transmutation Fuel Performance Code Thermal Model Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregory K. Miller; Pavel G. Medvedev
2007-09-01
The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort for verification of the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, code input, and calculation results.
NASA Astrophysics Data System (ADS)
Zhafirah Muhammad, Nurul; Harun, A.; Hambali, N. A. M. A.; Murad, S. A. Z.; Mohyar, S. N.; Isa, M. N.; Jambek, AB
2017-11-01
Increased demand for Internet of Things (IoT) applications has driven a move towards higher-complexity integrated circuits supporting SoC designs. This increase in complexity calls for correspondingly sophisticated validation strategies, and researchers have responded with a range of methodologies, including dynamic verification, formal verification and hybrid techniques. It is also important to discover bugs early in the SoC verification process in order to reduce effort and achieve a fast time to market for the system. This paper therefore focuses on a verification methodology that can be applied at the Register Transfer Level (RTL) of an SoC based on the AMBA bus design. In addition, the Open Verification Methodology (OVM) offers an easier route to RTL validation, not as a replacement for the traditional method but as a means of shortening time to market. OVM is accordingly proposed in this paper as the verification method for larger designs, helping to avoid bottlenecks in the validation platform.
Integrating Fingerprint Verification into the Smart Card-Based Healthcare Information System
NASA Astrophysics Data System (ADS)
Moon, Daesung; Chung, Yongwha; Pan, Sung Bum; Park, Jin-Won
2009-12-01
As VLSI technology has improved, smart cards employing 32-bit processors have been released, and more personal information, such as medical and financial data, can be stored in the card. Thus, it becomes important to protect personal information stored in the card. Verification of the card holder's identity using a fingerprint has advantages over the present practices of Personal Identification Numbers (PINs) and passwords. However, the computational workload of fingerprint verification is much heavier than that of the typical PIN-based solution. In this paper, we consider three strategies to implement fingerprint verification in a smart card environment and how to distribute the modules of fingerprint verification between the smart card and the card reader. We first evaluate the number of instructions of each step of a typical fingerprint verification algorithm, and estimate the execution time of several cryptographic algorithms to guarantee the security/privacy of the fingerprint data transmitted in the smart card with the client-server environment. Based on the evaluation results, we analyze each scenario with respect to the security level and the real-time execution requirements in order to implement fingerprint verification in the smart card with the client-server environment.
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
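The core of such a benchmark comparison can be sketched in a few lines: solve a simple transport problem numerically, evaluate the closed-form analytical solution, and assert an error tolerance. The example below uses 1-D transient conduction with a step boundary condition; it is a generic illustration of the approach, not a PFLOTRAN test case, and the tolerance is an assumed value.

```python
import numpy as np
from scipy.special import erf

# 1-D transient conduction into a semi-infinite domain after a step change in
# boundary temperature; the closed-form solution involves the error function.
L, nx, alpha, t_end = 1.0, 101, 1e-2, 5.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / alpha           # stable explicit time step
T = np.zeros(nx); T[0] = 1.0       # step boundary condition at x = 0

t = 0.0
while t < t_end:
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    T[0], T[-1] = 1.0, 0.0
    t += dt

exact = 1.0 - erf(x / (2.0 * np.sqrt(alpha * t)))
err = np.max(np.abs(T - exact))
print("max abs error:", err)
assert err < 5e-3, "benchmark tolerance exceeded"
```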
Formally verifying Ada programs which use real number types
NASA Technical Reports Server (NTRS)
Sutherland, David
1986-01-01
Formal verification is applied to programs which use real number arithmetic operations (mathematical programs). Formal verification of a program P consists of creating a mathematical model of F, stating the desired properties of P in a formal logical language, and proving that the mathematical model has the desired properties using a formal proof calculus. The development and verification of the mathematical model are discussed.
Dynamic testing for shuttle design verification
NASA Technical Reports Server (NTRS)
Green, C. E.; Leadbetter, S. A.; Rheinfurth, M. H.
1972-01-01
Space shuttle design verification requires dynamic data from full scale structural component and assembly tests. Wind tunnel and other scaled model tests are also required early in the development program to support the analytical models used in design verification. Presented is a design philosophy based on mathematical modeling of the structural system strongly supported by a comprehensive test program; some of the types of required tests are outlined.
NASA Astrophysics Data System (ADS)
Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.
2013-10-01
In recent years, the rapid development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture tends to provide a centralized solution to end users, while all the required resources are offered by large enterprises or special agencies. Thus, it is a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can conveniently package and deploy their models into the cloud, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies, namely a unified cloud interface strategy, a cloud-service-based sharing platform, and a cloud-service-based computing platform, are discussed in detail, and related experiments are conducted for further verification.
NASA Astrophysics Data System (ADS)
Strunz, Richard; Herrmann, Jeffrey W.
2011-12-01
The hot fire test strategy for liquid rocket engines has always been a concern of space industry and agencies alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as an example to provide a quantitative answer to the claim that a reduced-thrust-scale demonstrator is cost beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the prior information definition inside the Bayesian framework. The model is also applied to assess the impact of an increase of the demonstrated reliability level on schedule and affordability.
Space shuttle flying qualities and criteria assessment
NASA Technical Reports Server (NTRS)
Myers, T. T.; Johnston, D. E.; Mcruer, Duane T.
1987-01-01
Work accomplished under a series of study tasks for the Flying Qualities and Flight Control Systems Design Criteria Experiment (OFQ) of the Shuttle Orbiter Experiments Program (OEX) is summarized. The tasks involved review of applicability of existing flying quality and flight control system specification and criteria for the Shuttle; identification of potentially crucial flying quality deficiencies; dynamic modeling of the Shuttle Orbiter pilot/vehicle system in the terminal flight phases; devising a nonintrusive experimental program for extraction and identification of vehicle dynamics, pilot control strategy, and approach and landing performance metrics, and preparation of an OEX approach to produce a data archive and optimize use of the data to develop flying qualities for future space shuttle craft in general. Analytic modeling of the Orbiter's unconventional closed-loop dynamics in landing, modeling pilot control strategies, verification of vehicle dynamics and pilot control strategy from flight data, review of various existent or proposed aircraft flying quality parameters and criteria in comparison with the unique dynamic characteristics and control aspects of the Shuttle in landing; and finally a summary of conclusions and recommendations for developing flying quality criteria and design guides for future Shuttle craft.
HDL to verification logic translator
NASA Technical Reports Server (NTRS)
Gambles, J. W.; Windley, P. J.
1992-01-01
The ever-increasing number of transistors possible in VLSI circuits compounds the difficulty of ensuring correct designs. As the number of possible test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools that are in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher level behavioral models.
NASA Technical Reports Server (NTRS)
Windley, P. J.
1991-01-01
In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic as opposed to low-level Boolean equivalence verification such as that done using BDDs and Model Checking. Specification and verification, sometimes called formal methods, are one tool for increasing computer dependability in the face of an exponentially increasing testing effort.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.
1980-01-01
The rigid lid model was developed to predict three-dimensional temperature and velocity distributions in lakes. This model was verified at various sites (Lake Belews, Biscayne Bay, etc.) and the verification at Lake Keowee was the last in this series of verification runs. The verification at Lake Keowee included the following: (1) selecting the domain of interest, grid systems, and comparing the preliminary results with archival data; (2) obtaining actual ground truth and infrared scanner data both for summer and winter; and (3) using the model to predict the measured data for the above periods and comparing the predicted results with the actual data. The model results compared well with measured data. Thus, the model can be used as an effective predictive tool for future sites.
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted in the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through application of computer simulation packages employing the 'component mode synthesis' techniques. Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which includes running experimental tests on flexible multibody test articles. From these tests, data was to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
Formal verification of automated teller machine systems using SPIN
NASA Astrophysics Data System (ADS)
Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam
2017-08-01
Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of the Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formula. This model checker accepts models written in Process Meta Language (PROMELA), and its specifications are specified in LTL formulas.
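A minimal illustration of the underlying idea, exhaustive explicit-state exploration of a model against a property, is sketched below in plain Python rather than PROMELA. The ATM-like states, transitions, and safety invariant are invented for illustration and do not correspond to the model verified in the paper.

```python
from collections import deque

# Tiny hand-written model of an ATM controller: a state is a pair
# (location, pin_verified). Purely illustrative; not the PROMELA model.
def successors(state):
    loc, ok = state
    if loc == "idle":
        return [("card_inserted", False)]
    if loc == "card_inserted":
        return [("pin_entry", False)]
    if loc == "pin_entry":
        return [("menu", True), ("idle", False)]      # correct / wrong PIN
    if loc == "menu":
        return [("dispensing", ok), ("idle", False)]
    if loc == "dispensing":
        return [("idle", False)]
    return []

def invariant(state):
    loc, ok = state
    return not (loc == "dispensing" and not ok)       # never dispense unverified

# Exhaustive breadth-first search over the reachable state space.
init = ("idle", False)
queue, seen = deque([init]), {init}
violation = None
while queue:
    s = queue.popleft()
    if not invariant(s):
        violation = s
        break
    for nxt in successors(s):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

print("violation:" if violation else "invariant holds;",
      violation or f"{len(seen)} states explored")
```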
A Practitioners Perspective on Verification
NASA Astrophysics Data System (ADS)
Steenburgh, R. A.
2017-12-01
NOAA's Space Weather Prediction Center offers a wide range of products and services to meet the needs of an equally wide range of customers. A robust verification program is essential to the informed use of model guidance and other tools by both forecasters and end users alike. In this talk, we present current SWPC practices and results, and examine emerging requirements and potential approaches to satisfy them. We explore the varying verification needs of forecasters and end users, as well as the role of subjective and objective verification. Finally, we describe a vehicle used in the meteorological community to unify approaches to model verification and facilitate intercomparison.
Mathematical Modeling of Ni/H2 and Li-Ion Batteries
NASA Technical Reports Server (NTRS)
Weidner, John W.; White, Ralph E.; Dougal, Roger A.
2001-01-01
The modelling effort outlined in this viewgraph presentation encompasses the following topics: 1) Electrochemical Deposition of Nickel Hydroxide; 2) Deposition rates of thin films; 3) Impregnation of porous electrodes; 4) Experimental Characterization of Nickel Hydroxide; 5) Diffusion coefficients of protons; 6) Self-discharge rates (i.e., oxygen-evolution kinetics); 7) Hysteresis between charge and discharge; 8) Capacity loss on cycling; 9) Experimental Verification of the Ni/H2 Battery Model; 10) Mathematical Modeling of Li-Ion Batteries; 11) Experimental Verification of the Li-Ion Battery Model; 12) Integrated Power System Models for Satellites; and 13) Experimental Verification of the Integrated-Systems Model.
Trojanowicz, Karol; Wójcik, Włodzimierz
2011-01-01
The article presents a case-study on the calibration and verification of mathematical models of organic carbon removal kinetics in biofilm. The chosen Harremöes and Wanner & Reichert models were calibrated with a set of model parameters obtained both during dedicated studies conducted at pilot- and lab-scales for petrochemical wastewater conditions and from the literature. Next, the models were successfully verified through studies carried out utilizing a pilot ASFBBR type bioreactor installed in an oil-refinery wastewater treatment plant. During verification the pilot biofilm reactor worked under varying surface organic loading rates (SOL), dissolved oxygen concentrations and temperatures. The verification proved that the models can be applied in practice to petrochemical wastewater treatment engineering for e.g. biofilm bioreactor dimensioning.
Verification testing of the Brome Agri Sales Ltd. Maximizer Separator, Model MAX 1016 (Maximizer) was conducted at the Lake Wheeler Road Field Laboratory Swine Educational Unit in Raleigh, North Carolina. The Maximizer is an inclined screen solids separator that can be used to s...
NASA Astrophysics Data System (ADS)
Kim, Cheol-kyun; Kim, Jungchan; Choi, Jaeseung; Yang, Hyunjo; Yim, Donggyu; Kim, Jinwoong
2007-03-01
As the minimum transistor length gets smaller, the variation and uniformity of transistor length seriously affect device performance. The importance of optical proximity correction (OPC) and resolution enhancement technology (RET) therefore cannot be overemphasized. However, the OPC process is regarded by some as a necessary evil for device performance. In fact, every group, including process and design, is interested in the whole-chip CD variation trend and CD uniformity, which represent the real wafer. Recently, design-based metrology systems have become capable of detecting differences between the design database and wafer SEM images, and they are able to extract information on whole-chip CD variation. According to the results, OPC abnormalities were identified and design feedback items were also disclosed. Other approaches are pursued by EDA companies, such as model-based OPC verification. Model-based verification is performed for the full chip area using a well-calibrated model. The objective of model-based verification is the prediction of potential weak points on the wafer and fast feedback to OPC and design before reticle fabrication. In order to achieve a robust design and sufficient device margin, an appropriate combination of a design-based metrology system and model-based verification tools is very important. Therefore, we evaluated a design-based metrology system and a matched model-based verification system for the optimum combination of the two systems. In our study, a large amount of data from wafer results is classified and analyzed by statistical methods and sorted into OPC feedback and design feedback items. Additionally, a novel DFM flow is proposed using a combination of design-based metrology and model-based verification tools.
Exploration of Uncertainty in Glacier Modelling
NASA Technical Reports Server (NTRS)
Thompson, David E.
1999-01-01
There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.
RF model of the distribution system as a communication channel, phase 2. Volume 2: Task reports
NASA Technical Reports Server (NTRS)
Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.
1982-01-01
Based on the established feasibility of predicting, via a model, the propagation of Power Line Frequency on radial type distribution feeders, verification studies comparing model predictions against measurements were undertaken using more complicated feeder circuits and situations. Detailed accounts of the major tasks are presented. These include: (1) verification of model; (2) extension, implementation, and verification of perturbation theory; (3) parameter sensitivity; (4) transformer modeling; and (5) compensation of power distribution systems for enhancement of power line carrier communication reliability.
Krasteva, Vessela; Jekova, Irena; Schmid, Ramun
2018-01-01
This study aims to validate the 12-lead electrocardiogram (ECG) as a biometric modality based on two straightforward binary QRS template matching characteristics. Different perspectives of the human verification problem are considered, regarding the optimal lead selection and stability over sample size, gender, age, heart rate (HR). A clinical 12-lead resting ECG database, including a population of 460 subjects with two-session recordings (>1 year apart), is used. Cost-effective strategies for extraction of personalized QRS patterns (100 ms) and binary template matching estimate similarity in the time scale (matching time) and dissimilarity in the amplitude scale (mismatch area). The two-class person verification task, taking the decision to validate or to reject the subject identity, is managed by linear discriminant analysis (LDA). Non-redundant LDA models for different lead configurations (I, II, III, aVR, aVL, aVF, V1-V6) are trained on the first half of 230 subjects by stepwise feature selection until maximization of the area under the receiver operating characteristic curve (ROC AUC). The operating point on the training ROC at equal error rate (EER) is tested on the independent dataset (second half of 230 subjects) to report unbiased validation of test-ROC AUC and true verification rate (TVR = 100-EER). The test results are further evaluated in groups by sample size, gender, age, HR. The optimal QRS pattern projection for single-lead ECG biometric modality is found in the frontal plane sector (60°-0°) with best (Test-AUC/TVR) for lead II (0.941/86.8%) and slight accuracy drop for -aVR (-0.017/-1.4%), I (-0.01/-1.5%). Chest ECG leads have degrading accuracy from V1 (0.885/80.6%) to V6 (0.799/71.8%). The multi-lead ECG improves verification: 6-chest (0.97/90.9%), 6-limb (0.986/94.3%), 12-leads (0.995/97.5%). The QRS pattern matching model shows stable performance for verification of 10 to 230 individuals; insignificant degradation of TVR in women (1.2-3.6%), adults ≥70 years (3.7%), younger <40 years (1.9%), HR<60bpm (1.2%), HR>90bpm (3.9%), no degradation for HR change (0 to >20bpm).
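A loose sketch of the two template-matching characteristics and the LDA decision described above is given below. The feature definitions are approximations for illustration, not the paper's exact formulas, and the QRS patterns are synthetic stand-ins rather than ECG data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def matching_features(template, probe):
    """Two simple template-matching features for 100 ms QRS patterns:
    fraction of samples where the binarized shapes agree ('matching time')
    and the accumulated absolute amplitude difference ('mismatch area')."""
    t_bin, p_bin = template > template.mean(), probe > probe.mean()
    matching_time = np.mean(t_bin == p_bin)
    mismatch_area = np.trapz(np.abs(template - probe))
    return matching_time, mismatch_area

rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, np.pi, 50))             # crude QRS-like shape
genuine = [base + 0.05 * rng.normal(size=50) for _ in range(100)]
impostor = [np.roll(base, 7) + 0.2 * rng.normal(size=50) for _ in range(100)]

X = np.array([matching_features(base, p) for p in genuine + impostor])
y = np.array([1] * 100 + [0] * 100)                  # 1 = same subject

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```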
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jun Soo; Choi, Yong Joon; Smith, Curtis Lee
2016-09-01
This document addresses two subjects involved with the RELAP-7 Software Verification and Validation Plan (SVVP): (i) the principles and plan to assure the independence of RELAP-7 assessment through the code development process, and (ii) the work performed to establish the RELAP-7 assessment plan, i.e., the assessment strategy, literature review, and identification of RELAP-7 requirements. Then, the Requirements Traceability Matrices (RTMs) proposed in a previous document (INL-EXT-15-36684) are updated. These RTMs provide an efficient way to evaluate the RELAP-7 development status as well as the maturity of RELAP-7 assessment through the development process.
NASA Technical Reports Server (NTRS)
Powell, John D.
2003-01-01
This document discusses the verification of the Secure Socket Layer (SSL) communication protocol as a demonstration of the Model Based Verification (MBV) portion of the verification instrument set being developed under the Reducing Software Security Risk (RSSR) Through an Integrated Approach research initiative. Code Q of the National Aeronautics and Space Administration (NASA) funds this project. The NASA Goddard Independent Verification and Validation (IV&V) facility manages this research program at the NASA agency level and the Assurance Technology Program Office (ATPO) manages the research locally at the Jet Propulsion Laboratory (California Institute of Technology) where the research is being carried out.
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
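A simplified sketch of propensity-score-based correction of verification bias is shown below: verification propensities are estimated separately for positive and negative tests, verified subjects are grouped into propensity strata, and stratum-based weights are used when estimating sensitivity and specificity. The data-generating model, stratum count, and weighting scheme are illustrative assumptions, not the estimator exactly as proposed in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
x = rng.normal(size=n)                              # an observed covariate
d = rng.binomial(1, 1 / (1 + np.exp(-x)))           # true disease status
t = rng.binomial(1, np.where(d == 1, 0.85, 0.10))   # diagnostic test result
# Verification depends on the test result and the covariate (MAR), so the
# naive estimate computed on verified subjects only is biased.
p_verify = 1 / (1 + np.exp(-(2 * t + 0.5 * x - 1.5)))
v = rng.binomial(1, p_verify)
df = pd.DataFrame({"x": x, "d": d, "t": t, "v": v})

# Propensity of verification, estimated separately by test result.
df["ps"] = 0.0
for result in (0, 1):
    sub = df["t"] == result
    model = LogisticRegression().fit(df.loc[sub, ["x"]], df.loc[sub, "v"])
    df.loc[sub, "ps"] = model.predict_proba(df.loc[sub, ["x"]])[:, 1]

# Stratify on the propensity score and weight each verified subject by the
# inverse of the mean propensity in its stratum.
df["stratum"] = pd.qcut(df["ps"], 5, labels=False, duplicates="drop")
ver = df[df["v"] == 1].copy()
ver["w"] = 1.0 / ver.groupby("stratum")["ps"].transform("mean")

sens = np.average(ver.loc[ver["d"] == 1, "t"], weights=ver.loc[ver["d"] == 1, "w"])
spec = np.average(1 - ver.loc[ver["d"] == 0, "t"], weights=ver.loc[ver["d"] == 0, "w"])
naive_sens = ver.loc[ver["d"] == 1, "t"].mean()
print(f"corrected sensitivity {sens:.3f} vs naive {naive_sens:.3f} (generated with 0.85)")
print(f"corrected specificity {spec:.3f} (generated with 0.90)")
```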
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for the code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate the plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
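The solution-verification step based on Richardson extrapolation can be illustrated with a small sketch: given outputs from three systematically refined grids, compute the observed order of accuracy and an estimate of the discretization error. The input values below are manufactured for illustration and are unrelated to GBS.

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio=2.0):
    """Observed order of accuracy from three systematically refined grids,
    as used in Richardson-extrapolation-based solution verification."""
    return np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(refinement_ratio)

def richardson_extrapolate(f_medium, f_fine, p, refinement_ratio=2.0):
    """Estimate of the grid-converged value and of the discretization error
    on the fine grid."""
    f_exact = f_fine + (f_fine - f_medium) / (refinement_ratio**p - 1.0)
    return f_exact, f_fine - f_exact

# Illustrative data: a quantity converging with second-order accuracy toward 1.0
h = np.array([0.04, 0.02, 0.01])
f = 1.0 + 3.0 * h**2                 # stands in for simulation outputs
p = observed_order(f[0], f[1], f[2])
f_ext, err = richardson_extrapolate(f[1], f[2], p)
print(f"observed order: {p:.2f}, extrapolated value: {f_ext:.5f}, fine-grid error: {err:.2e}")
```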
SU-F-T-494: A Multi-Institutional Study of Independent Dose Verification Using Golden Beam Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Itano, M; Yamazaki, T; Tachibana, R
Purpose: In general, beam data for an individual linac are measured for the independent dose verification software program, and the verification is performed as a secondary check. In this study, independent dose verification using golden beam data was compared to that using the individual linac's beam data. Methods: Six institutions participated and three different sets of beam data were prepared. One was individually measured data (Original Beam Data, OBD). The others were generated from all measurements for the same linac model (Model-GBD) and for all linac models (All-GBD). The three different beam data sets were registered to the independent verification software program at each institute. Subsequently, patients' plans for eight sites (brain, head and neck, lung, esophagus, breast, abdomen, pelvis and bone) were analyzed using the verification program to compare doses calculated using the three different beam data sets. Results: 1116 plans were collected from the six institutes. Compared to using the OBD, the variation using the Model-GBD-based calculation and the All-GBD-based calculation was 0.0 ± 0.3% and 0.0 ± 0.6%, respectively. The maximum variations were 1.2% and 2.3%, respectively. The plans with variation over 1% had reference points located away from the central axis, with or without a physical wedge. Conclusion: The confidence limit (2SD) using the Model-GBD and the All-GBD was within 0.6% and 1.2%, respectively. Thus, the use of golden beam data may be feasible for independent verification. In addition, verification using golden beam data provides quality assurance of planning from an audit perspective. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
Verification of a VRF Heat Pump Computer Model in EnergyPlus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigusse, Bereket; Raustad, Richard
2013-06-15
This paper provides verification results of the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides quantitative comparison of full and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual range quadratic performance curves as a function of part-load-ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual range equation fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to those of equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
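For concreteness, the bi-quadratic curve form referred to above can be evaluated as follows. The coefficient values, rated capacity, and operating temperatures in this sketch are placeholders, not manufacturer data or the curves used in the verification.

```python
def biquadratic(t_indoor, t_outdoor, c):
    """Bi-quadratic performance-curve form used for capacity and EIR modifiers:
    f(Ti, To) = c0 + c1*Ti + c2*Ti^2 + c3*To + c4*To^2 + c5*Ti*To."""
    return (c[0] + c[1]*t_indoor + c[2]*t_indoor**2 +
            c[3]*t_outdoor + c[4]*t_outdoor**2 + c[5]*t_indoor*t_outdoor)

# Placeholder coefficients (not taken from any manufacturer's data set).
cap_coeffs = [0.90, 0.012, -1.0e-4, 0.004, -8.0e-5, 2.0e-5]

rated_capacity_kw = 28.0
t_wb_indoor, t_db_outdoor = 19.4, 35.0      # degC, cooling-mode example
modifier = biquadratic(t_wb_indoor, t_db_outdoor, cap_coeffs)
print(f"capacity modifier {modifier:.3f} -> available capacity "
      f"{rated_capacity_kw * modifier:.1f} kW")
```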
Razzaq, Misbah; Ahmad, Jamil
2015-01-01
Internet worms are analogous to biological viruses since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. This is why in this study we suggest a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chain. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through Model Checking, we gained insight into the functionality of compartmental models. Model Checking results validate simulation ones well, which fully support the proposed framework. PMID:26713449
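The compartmental structure discussed above can be illustrated with a deterministic SEIR sketch, a simpler counterpart to the stochastic Petri net / CTMC analysis in the study. The rates and population size are illustrative assumptions, and the delayed and quarantined compartments of the proposed SEIDQR(S/I) model are omitted.

```python
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    # Susceptible-Exposed-Infectious-Recovered dynamics for worm propagation.
    s, e, i, r = y
    n = s + e + i + r
    return [-beta * s * i / n,                 # new exposures
            beta * s * i / n - sigma * e,      # incubation
            sigma * e - gamma * i,             # removal / patching
            gamma * i]

# Illustrative rates and initial population (hosts), not fitted values.
beta, sigma, gamma = 0.6, 0.3, 0.1
y0 = [9990, 0, 10, 0]
sol = solve_ivp(seir, (0, 120), y0, args=(beta, sigma, gamma), dense_output=True)
print("peak infectious hosts:", int(sol.y[2].max()))
```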
Verification testing of the Triton Systems, LLC Solid Bowl Centrifuge Model TS-5000 (TS-5000) was conducted at the Lake Wheeler Road Field Laboratory Swine Educational Unit in Raleigh, North Carolina. The TS-5000 was 48" in diameter and 30" deep, with a bowl capacity of 16 ft3. ...
Authoring and verification of clinical guidelines: a model driven approach.
Pérez, Beatriz; Porres, Ivan
2010-08-01
The goal of this research is to provide a framework to enable authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, combined with (2) Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications and the temporal-logic statements to be checked against these specifications, making the verification process faster and more cost-effective. In particular, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically process them to generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements, and verifies whether the guideline fulfils such properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, starting from the UML statechart representing a guideline, allows the verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines, resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined based on a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process.
Verification of Autonomous Systems for Space Applications
NASA Technical Reports Server (NTRS)
Brat, G.; Denney, E.; Giannakopoulou, D.; Frank, J.; Jonsson, A.
2006-01-01
Autonomous software, especially if it is based on models, can play an important role in future space applications. For example, it can help streamline ground operations, or assist in autonomous rendezvous and docking operations, or even help recover from problems (e.g., planners can be used to explore the space of recovery actions for a power subsystem and implement a solution without (or with minimal) human intervention). In general, the exploration capabilities of model-based systems give them great flexibility. Unfortunately, this also makes them unpredictable to our human eyes, both in terms of their execution and their verification. The traditional verification techniques are inadequate for these systems since they are mostly based on testing, which implies a very limited exploration of their behavioral space. In our work, we explore how advanced V&V techniques, such as static analysis, model checking, and compositional verification, can be used to gain trust in model-based systems. We also describe how synthesis can be used in the context of system reconfiguration and in the context of verification.
Cleared for Launch - Lessons Learned from the OSIRIS-REx System Requirements Verification Program
NASA Technical Reports Server (NTRS)
Stevens, Craig; Adams, Angela; Williams, Bradley; Goodloe, Colby
2017-01-01
Requirements verification of a large flight system is a challenge. It is especially challenging for engineers taking on their first role in space systems engineering. This paper describes our approach to verification of the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) system requirements. It also captures lessons learned along the way from developing systems engineers embroiled in this process. We begin with an overview of the mission and science objectives as well as the project requirements verification program strategy. A description of the requirements flow down is presented including our implementation for managing the thousands of program and element level requirements and associated verification data. We discuss both successes and methods to improve the managing of this data across multiple organizational interfaces. Our approach to verifying system requirements at multiple levels of assembly is presented using examples from our work at instrument, spacecraft, and ground segment levels. We include a discussion of system end-to-end testing limitations and their impacts to the verification program. Finally, we describe lessons learned that are applicable to all emerging space systems engineers using our unique perspectives across multiple organizations of a large NASA program.
Verification testing of the Waterloo Biofilter Systems (WBS), Inc. Waterloo Biofilter® Model 4-Bedroom system was conducted over a thirteen month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located at Otis Air National Guard Base in Bourne, Mas...
Verification testing of the SeptiTech Model 400 System was conducted over a twelve month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located at the Otis Air National Guard Base in Borne, MA. Sanitary Sewerage from the base residential housing was u...
Verification testing of the Aquapoint, Inc. (AQP) BioclereTM Model 16/12 was conducted over a thirteen month period at the Massachusetts Alternative Septic System Test Center (MASSTC), located at Otis Air National Guard Base in Bourne, Massachusetts. Sanitary sewerage from the ba...
Use of metaknowledge in the verification of knowledge-based systems
NASA Technical Reports Server (NTRS)
Morell, Larry J.
1989-01-01
Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection to each form of incompleteness.
Spacecraft attitude calibration/verification baseline study
NASA Technical Reports Server (NTRS)
Chen, L. C.
1981-01-01
A baseline study for a generalized spacecraft attitude calibration/verification system is presented. It can be used to define software specifications for three major functions required by a mission: the pre-launch parameter observability and data collection strategy study; the in-flight sensor calibration; and the post-calibration attitude accuracy verification. Analytical considerations are given for both single-axis and three-axis spacecrafts. The three-axis attitudes considered include the inertial-pointing attitudes, the reference-pointing attitudes, and attitudes undergoing specific maneuvers. The attitude sensors and hardware considered include the Earth horizon sensors, the plane-field Sun sensors, the coarse and fine two-axis digital Sun sensors, the three-axis magnetometers, the fixed-head star trackers, and the inertial reference gyros.
Toward Automatic Verification of Goal-Oriented Flow Simulations
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2014-01-01
We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
The SeaHorn Verification Framework
NASA Technical Reports Server (NTRS)
Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.
2015-01-01
In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
Verification, Validation and Sensitivity Studies in Computational Biomechanics
Anderson, Andrew E.; Ellis, Benjamin J.; Weiss, Jeffrey A.
2012-01-01
Computational techniques and software for the analysis of problems in mechanics have naturally moved from their origins in the traditional engineering disciplines to the study of cell, tissue and organ biomechanics. Increasingly complex models have been developed to describe and predict the mechanical behavior of such biological systems. While the availability of advanced computational tools has led to exciting research advances in the field, the utility of these models is often the subject of criticism due to inadequate model verification and validation. The objective of this review is to present the concepts of verification, validation and sensitivity studies with regard to the construction, analysis and interpretation of models in computational biomechanics. Specific examples from the field are discussed. It is hoped that this review will serve as a guide to the use of verification and validation principles in the field of computational biomechanics, thereby improving the peer acceptance of studies that use computational modeling techniques. PMID:17558646
NASA Astrophysics Data System (ADS)
Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.
2017-06-01
To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
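Illustrative aside (invented field names and tolerances, not LIVVkit's actual API): the bit-for-bit style of regression check mentioned above amounts to comparing a model output field against a stored reference and reporting either exact agreement or the size and extent of any differences.

    import numpy as np

    def bit_for_bit(test, ref, rel_tol=0.0):
        """Compare a test field against a reference field. rel_tol = 0.0 demands
        exact (bit-for-bit) agreement; a positive value allows a relative tolerance."""
        if test.shape != ref.shape:
            return {"status": "FAIL", "reason": "shape mismatch"}
        if np.array_equal(test, ref):
            return {"status": "PASS", "max_abs_diff": 0.0, "n_diff": 0}
        diff = np.abs(test - ref)
        scale = np.maximum(np.abs(ref), np.finfo(ref.dtype).tiny)
        ok = bool(np.all(diff / scale <= rel_tol))
        return {"status": "PASS (within tolerance)" if ok else "FAIL",
                "max_abs_diff": float(diff.max()),
                "n_diff": int(np.count_nonzero(diff))}

    # Invented ice-thickness field and a slightly perturbed copy.
    ref = np.linspace(100.0, 3000.0, 50).reshape(5, 10)
    test = ref.copy()
    test[2, 3] += 1e-7
    print(bit_for_bit(test, ref))                  # exact check -> FAIL
    print(bit_for_bit(test, ref, rel_tol=1e-9))    # tolerant check -> PASS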
NASA Technical Reports Server (NTRS)
1979-01-01
Structural analysis and certification of the collector system are presented. System verification against the interim performance criteria is presented and indicated by matrices. The verification discussion, analysis, and test results are also given.
Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program
NASA Technical Reports Server (NTRS)
Manobianco, John; Nutter, Paul
1997-01-01
The Applied Meteorology Unit (AMU) conducted a year-long evaluation of NCEP's 29-km mesoscale Eta (meso-eta) weather prediction model in order to identify added value to forecast operations in support of the United States space program. The evaluation was stratified over warm and cool seasons and considered both objective and subjective verification methodologies. Objective verification results generally indicate that meso-eta model point forecasts at selected stations exhibit minimal error growth in terms of RMS errors and are reasonably unbiased. Conversely, results from the subjective verification demonstrate that model forecasts of developing weather events such as thunderstorms, sea breezes, and cold fronts, are not always as accurate as implied by the seasonal error statistics. Sea-breeze case studies reveal that the model generates a dynamically-consistent thermally direct circulation over the Florida peninsula, although at a larger scale than observed. Thunderstorm verification reveals that the meso-eta model is capable of predicting areas of organized convection, particularly during the late afternoon hours, but is not capable of forecasting individual thunderstorms. Verification of cold fronts during the cool season reveals that the model is capable of forecasting a majority of cold frontal passages through east central Florida to within ±1 h of the observed frontal passage.
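Illustrative aside (station and values invented): the objective point-forecast verification described above reduces to simple paired error statistics such as bias and RMS error.

    import math

    def bias_and_rmse(forecasts, observations):
        """Mean error (bias) and root-mean-square error of paired forecast/observation values."""
        errors = [f - o for f, o in zip(forecasts, observations)]
        bias = sum(errors) / len(errors)
        rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
        return bias, rmse

    # Invented 2-m temperature pairs (deg C) for one station.
    fcst = [28.1, 29.4, 30.2, 27.8, 26.5]
    obs  = [27.6, 29.9, 30.0, 28.4, 26.1]
    b, r = bias_and_rmse(fcst, obs)
    print(f"bias = {b:+.2f} C, RMSE = {r:.2f} C")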
Summary of the 2014 Sandia V&V Challenge Workshop
Schroeder, Benjamin B.; Hu, Kenneth T.; Mullins, Joshua Grady; ...
2016-02-19
A discussion of the five responses to the 2014 Sandia Verification and Validation (V&V) Challenge Problem, presented within this special issue, is provided hereafter. Overviews of the challenge problem workshop, workshop participants, and the problem statement are also included. Brief summations of the teams' responses to the challenge problem are provided. Issues that arose throughout the responses that are deemed applicable to the general verification, validation, and uncertainty quantification (VVUQ) community are the main focal point of this paper. The discussion is organized around a big-picture comparison of data and model usage, VVUQ activities, and the differentiating conceptual themes behind the teams' VVUQ strategies. Significant differences are noted in the teams' approaches toward all VVUQ activities, and those deemed most relevant are discussed. Beyond the specific details of VVUQ implementations, thematic concepts are found to create differences among the approaches; some of the major themes are discussed. Lastly, an encapsulation of the key contributions, the lessons learned, and advice for the future is presented.
2015-03-05
launched on its rocket (estimated completion date of May 2015). The Air Force will require verification that SpaceX can meet payload integration...design and accelerate integration capability at Space Exploration Technologies Corporation (SpaceX) launch sites. The Air Force does not intend to...accelerate integration capabilities at SpaceX launch sites because of the studies it directed, but will require verification that SpaceX can meet
Mehraei, Mani; Bashirov, Rza; Tüzmen, Şükrü
2016-10-01
Recent molecular studies provide important clues into the treatment of β-thalassemia, sickle-cell anaemia and other β-globin disorders, revealing that increased production of fetal hemoglobin, which is normally suppressed in adulthood, can ameliorate the severity of these diseases. In this paper, we present a novel approach to drug prediction for β-globin disorders. Our approach is centered upon quantitative modeling of interactions in the human fetal-to-adult hemoglobin switch network using hybrid functional Petri nets. In accordance with the reverse pharmacology approach, we pose a hypothesis regarding modulation of specific protein targets that induce γ-globin and consequently fetal hemoglobin. Comparison of simulation results for the proposed strategy with those obtained for already existing drugs shows that our strategy is optimal, as it leads to the highest level of γ-globin induction and thereby has potential beneficial therapeutic effects on β-globin disorders. Simulation results enable verification of model coherence, demonstrating that it is consistent with qPCR data available for known strategies and/or drugs.
Program Model Checking as a New Trend
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Visser, Willem; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper introduces a special section of STTT (International Journal on Software Tools for Technology Transfer) containing a selection of papers that were presented at the 7th International SPIN workshop, Stanford, August 30 - September 1, 2000. The workshop was named SPIN Model Checking and Software Verification, with an emphasis on model checking of programs. The paper outlines the motivation for stressing software verification, rather than only design and model verification, by presenting the work done in the Automated Software Engineering group at NASA Ames Research Center within the last 5 years. This includes work in software model checking, testing-like technologies and static analysis.
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Thieberger, P.; Wegner, H. E.
1985-01-01
Single-Event Upset (SEU) response of a bipolar low-power Schottky-diode-clamped TTL static RAM has been observed using Br ions in the 100-240 MeV energy range and O ions in the 20-100 MeV range. These data complete the experimental verification of circuit-simulation SEU modeling for this device. The threshold for onset of SEU has been observed by the variation of energy, ion species and angle of incidence. The results obtained from the computer circuit-simulation modeling and experimental model verification demonstrate a viable methodology for modeling SEU in bipolar integrated circuits.
Multibody modeling and verification
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1989-01-01
A summary of a ten-week project on flexible multibody modeling, verification and control is presented. Emphasis was on the need for experimental verification. A literature survey was conducted to gather information on the existence of experimental work related to flexible multibody systems. The first portion of the assigned task encompassed the modeling aspects of flexible multibodies that can undergo large angular displacements. Research in the area of modeling aspects was also surveyed, with special attention given to the component mode approach. Resulting from this is a research plan on various modeling aspects to be investigated over the next year. The relationship between the large angular displacements, boundary conditions, mode selection, and system modes is of particular interest. The other portion of the assigned task was the generation of a test plan for experimental verification of analytical and/or computer analysis techniques used for flexible multibody systems. Based on current and expected frequency ranges of flexible multibody systems to be used in space applications, an initial test article was selected and designed. A preliminary TREETOPS computer analysis was run to ensure frequency content in the low frequency range, 0.1 to 50 Hz. The initial specifications of experimental measurement and instrumentation components were also generated. Resulting from this effort is the initial multi-phase plan for a Ground Test Facility of Flexible Multibody Systems for Modeling Verification and Control. The plan focuses on the Multibody Modeling and Verification (MMV) Laboratory. General requirements of the Unobtrusive Sensor and Effector (USE) and the Robot Enhancement (RE) laboratories were considered during the laboratory development.
ENVIRONMENTAL TECHNOLOGY VERIFICATION OF URBAN RUNOFF MODELS
This paper will present the verification process and available results for the XP-SWMM modeling system, produced by XP-Software, conducted under the USEPA's ETV Program. Wet weather flow (WWF) models are used throughout the US for the evaluation of storm and combined sewer systems. M...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
The purposes of the verification project are to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.
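Illustrative aside (error values invented): the convergence analysis at the heart of code verification usually reduces to measuring an observed order of accuracy from errors on successively refined grids and comparing it with the formal order of the scheme.

    import math

    def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
        """Observed order of accuracy from errors on two grids related by a
        uniform refinement ratio r:  p = ln(e_coarse / e_fine) / ln(r)."""
        return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

    # Invented discretization errors (e.g., L2 error against a manufactured
    # solution) on grids with spacing h, h/2, h/4.
    errors = [4.0e-3, 1.1e-3, 2.9e-4]
    for e_c, e_f in zip(errors, errors[1:]):
        print(f"observed order ~ {observed_order(e_c, e_f):.2f}")
    # Values near the formal order (e.g., 2 for a second-order method)
    # support a correct implementation.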
The U.S. Environmental Protection Agency has created the Environmental Technology Verification Program to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the ETV Program...
WRAP-RIB antenna technology development
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Garcia, N. F.; Iwamoto, H.
1985-01-01
The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large enough that they address the same basic problems of design, fabrication, assembly and test as the full-scale systems, which were selected to be 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib materials characterizations and manufacturing imperfections assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. This concept was considered for a number of potential applications that include mobile communications, VLBI, and aircraft surveillance. In fact, baseline system configurations were developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.
Verification of operational solar flare forecast: Case of Regional Warning Center Japan
NASA Astrophysics Data System (ADS)
Kubo, Yûki; Den, Mitsue; Ishii, Mamoru
2017-08-01
In this article, we discuss a verification study of an operational solar flare forecast in the Regional Warning Center (RWC) Japan. The RWC Japan has been issuing four-categorical deterministic solar flare forecasts for a long time. In this forecast verification study, we used solar flare forecast data accumulated over 16 years (from 2000 to 2015). We compiled the forecast data together with solar flare data obtained with the Geostationary Operational Environmental Satellites (GOES). Using the compiled data sets, we estimated some conventional scalar verification measures with 95% confidence intervals. We also estimated a multi-categorical scalar verification measure. These scalar verification measures were compared with those obtained by the persistence method and recurrence method. As solar activity varied during the 16 years, we also applied verification analyses to four subsets of forecast-observation pair data with different solar activity levels. We cannot conclude definitively that there are significant performance differences between the forecasts of RWC Japan and the persistence method, although a slightly significant difference is found for some event definitions. We propose to use a scalar verification measure to assess the judgment skill of the operational solar flare forecast. Finally, we propose a verification strategy for deterministic operational solar flare forecasting. For dichotomous forecasts, the proposed set of verification measures comprises frequency bias for bias; proportion correct and critical success index for accuracy; probability of detection for discrimination; false alarm ratio for reliability; Peirce skill score for forecast skill; and the symmetric extremal dependence index for association. For multi-categorical forecasts, we propose the marginal distributions of forecast and observation for bias; proportion correct for accuracy; the correlation coefficient and joint probability distribution for association; the likelihood distribution for discrimination; the calibration distribution for reliability and resolution; and the Gandin-Murphy-Gerrity score and judgment skill score for skill.
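Illustrative aside (counts invented): for the dichotomous case, all of the measures proposed above follow from a 2x2 contingency table of hits, false alarms, misses, and correct negatives.

    def dichotomous_scores(hits, false_alarms, misses, correct_negatives):
        """Common 2x2 contingency-table verification measures."""
        a, b, c, d = hits, false_alarms, misses, correct_negatives
        n = a + b + c + d
        return {
            "frequency bias":         (a + b) / (a + c),
            "proportion correct":     (a + d) / n,
            "critical success index": a / (a + b + c),
            "prob. of detection":     a / (a + c),
            "false alarm ratio":      b / (a + b),
            "Peirce skill score":     a / (a + c) - b / (b + d),  # POD - POFD
        }

    # Invented tally of flare / no-flare forecasts against GOES events.
    scores = dichotomous_scores(hits=120, false_alarms=80, misses=60, correct_negatives=740)
    for name, value in scores.items():
        print(f"{name:24s} {value:.3f}")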
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
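Illustrative aside (paired values invented, and this is only one plausible formulation rather than the exact USGS definition): the percent standard error of estimate used above to compare simulated and measured runoff can be sketched as the RMS residual expressed relative to the mean measured value.

    import math

    def percent_standard_error(simulated, measured):
        """RMS of the residuals, expressed as a percentage of the mean measured value
        (an illustrative stand-in for the study's standard error of estimate)."""
        n = len(measured)
        ss = sum((s - m) ** 2 for s, m in zip(simulated, measured))
        return 100.0 * math.sqrt(ss / n) / (sum(measured) / n)

    # Invented storm runoff volumes (inches) for five calibration storms.
    measured  = [0.12, 0.45, 0.30, 0.08, 0.22]
    simulated = [0.15, 0.38, 0.33, 0.10, 0.19]
    print(f"standard error of estimate ~ {percent_standard_error(simulated, measured):.0f}%")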
Statement Verification: A Stochastic Model of Judgment and Response.
ERIC Educational Resources Information Center
Wallsten, Thomas S.; Gonzalez-Vallejo, Claudia
1994-01-01
A stochastic judgment model (SJM) is presented as a framework for addressing issues in statement verification and probability judgment. Results of 5 experiments with 264 undergraduates support the validity of the model and provide new information that is interpreted in terms of the SJM. (SLD)
On improving the performance of nonphotochemical quenching in CP29 light-harvesting antenna complex
NASA Astrophysics Data System (ADS)
Berman, Gennady P.; Nesterov, Alexander I.; Sayre, Richard T.; Still, Susanne
2016-03-01
We model and simulate the performance of charge-transfer in nonphotochemical quenching (NPQ) in the CP29 light-harvesting antenna-complex associated with photosystem II (PSII). The model consists of five discrete excitonic energy states and two sinks, responsible for the potentially damaging processes and charge-transfer channels, respectively. We demonstrate that by varying (i) the parameters of the chlorophyll-based dimer, (ii) the resonant properties of the protein-solvent environment interaction, and (iii) the energy transfer rates to the sinks, one can significantly improve the performance of the NPQ. Our analysis suggests strategies for improving the performance of the NPQ in response to environmental changes, and may stimulate experimental verification.
Land Ice Verification and Validation Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-07-15
To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of tests where differences occur.
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...
2017-03-23
To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
Automated Deployment of Advanced Controls and Analytics in Buildings
NASA Astrophysics Data System (ADS)
Pritoni, Marco
Buildings use 40% of primary energy in the US. Recent studies show that developing energy analytics and enhancing control strategies can significantly improve their energy performance. However, the deployment of advanced control software applications has been mostly limited to academic studies. Larger-scale implementations are prevented by the significant engineering time and customization required, due to significant differences among buildings. This study demonstrates how physics-inspired data-driven models can be used to develop portable analytics and control applications for buildings. Specifically, I demonstrate application of these models in all phases of the deployment of advanced controls and analytics in buildings: in the first phase, "Site Preparation and Interface with Legacy Systems" I used models to discover or map relationships among building components, automatically gathering metadata (information about data points) necessary to run the applications. During the second phase: "Application Deployment and Commissioning", models automatically learn system parameters, used for advanced controls and analytics. In the third phase: "Continuous Monitoring and Verification" I utilized models to automatically measure the energy performance of a building that has implemented advanced control strategies. In the conclusions, I discuss future challenges and suggest potential strategies for these innovative control systems to be widely deployed in the market. This dissertation provides useful new tools in terms of procedures, algorithms, and models to facilitate the automation of deployment of advanced controls and analytics and accelerate their wide adoption in buildings.
Security Verification Techniques Applied to PatchLink COTS Software
NASA Technical Reports Server (NTRS)
Gilliam, David P.; Powell, John D.; Bishop, Matt; Andrew, Chris; Jog, Sameer
2006-01-01
Verification of the security of software artifacts is a challenging task. An integrated approach that combines verification techniques can increase the confidence in the security of software artifacts. Such an approach has been developed by the Jet Propulsion Laboratory (JPL) and the University of California at Davis (UC Davis). Two security verification instruments were developed and then piloted on PatchLink's UNIX Agent, a Commercial-Off-The-Shelf (COTS) software product, to assess the value of the instruments and the approach. The two instruments are the Flexible Modeling Framework (FMF) -- a model-based verification instrument (JPL), and a Property-Based Tester (UC Davis). Security properties were formally specified for the COTS artifact and then verified using these instruments. The results were then reviewed to determine the effectiveness of the approach and the security of the COTS product.
Beer, Lynn A; Wang, Huan; Tang, Hsin-Yao; Cao, Zhijun; Chang-Wong, Tony; Tanyi, Janos L; Zhang, Rugang; Liu, Qin; Speicher, David W
2013-01-01
The most cancer-specific biomarkers in blood are likely to be proteins shed directly by the tumor rather than less specific inflammatory or other host responses. The use of xenograft mouse models together with in-depth proteome analysis for identification of human proteins in the mouse blood is an under-utilized strategy that can clearly identify proteins shed by the tumor. In the current study, 268 human proteins shed into mouse blood from human OVCAR-3 serous tumors were identified based upon human vs. mouse species differences using a four-dimensional plasma proteome fractionation strategy. A multi-step prioritization and verification strategy was subsequently developed to efficiently select some of the most promising biomarkers from this large number of candidates. A key step was parallel analysis of human proteins detected in the tumor supernatant, because substantially greater sequence coverage for many of the human proteins initially detected in the xenograft mouse plasma confirmed assignments as tumor-derived human proteins. Verification of candidate biomarkers in patient sera was facilitated by in-depth, label-free quantitative comparisons of serum pools from patients with ovarian cancer and benign ovarian tumors. The only proteins that advanced to multiple reaction monitoring (MRM) assay development were those that exhibited increases in ovarian cancer patients compared with benign tumor controls. MRM assays were facilely developed for all 11 novel biomarker candidates selected by this process and analysis of larger pools of patient sera suggested that all 11 proteins are promising candidate biomarkers that should be further evaluated on individual patient blood samples.
RF model of the distribution system as a communication channel, phase 2. Volume 1: Summary Report
NASA Technical Reports Server (NTRS)
Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.
1982-01-01
The design, implementation, and verification of a computerized model for predicting the steady-state sinusoidal response of radial (tree) configured distribution feeders was undertaken. That work demonstrated the feasibility and validity based on verification measurements made on a limited size portion of an actual live feeder. On that basis a follow-on effort concerned with (1) extending the verification based on a greater variety of situations and network size, (2) extending the model capabilities for reverse direction propagation, (3) investigating parameter sensitivities, (4) improving transformer models, and (5) investigating procedures/fixes for ameliorating propagation trouble spots was conducted. Results are summarized.
NASA Technical Reports Server (NTRS)
Rushby, John
1991-01-01
The formal specification and mechanically checked verification for a model of fault-masking and transient-recovery among the replicated computers of digital flight-control systems are presented. The verification establishes, subject to certain carefully stated assumptions, that faults among the component computers are masked so that commands sent to the actuators are the same as those that would be sent by a single computer that suffers no failures.
Evaluation and Quality Control for the Copernicus Seasonal Forecast Systems
NASA Astrophysics Data System (ADS)
Manubens, N.; Hunter, A.; Bedia, J.; Bretonnière, P. A.; Bhend, J.; Doblas-Reyes, F. J.
2017-12-01
The EU funded Copernicus Climate Change Service (C3S) will provide authoritative information about past, current and future climate for a wide range of users, from climate scientists to stakeholders from a wide range of sectors including insurance, energy or transport. It has been recognized that providing information about the products' quality and provenance is paramount to establish trust in the service and allow users to make best use of the available information. This presentation outlines the work being conducted within the Quality Assurance for Multi-model Seasonal Forecast Products project (QA4Seas). The aim of QA4Seas is to develop a strategy for the evaluation and quality control (EQC) of the multi-model seasonal forecasts provided by C3S. First, we present the set of guidelines the data providers must comply with, ensuring the data is fully traceable and harmonized across data sets. Second, we discuss the ongoing work on defining a provenance and metadata model that is able to encode such information, and that can be extended to describe the steps followed to obtain the final verification products such as maps and time series of forecast quality measures. The metadata model is based on the Resource Description Framework W3C standard, being thus extensible and reusable. It benefits from widely adopted vocabularies to describe data provenance and workflows, as well as from expert consensus and community-support for the development of the verification and downscaling specific ontologies. Third, we describe the open source software being developed to generate fully reproducible and certifiable seasonal forecast products, which also attaches provenance and metadata information to the verification measures and enables the user to visually inspect the quality of the C3S products. QA4Seas is seeking collaboration with similar initiatives, as well as extending the discussion to interested parties outside the C3S community to share experiences and establish global common guidelines or best practices regarding data provenance.
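Illustrative aside (URIs, dataset names, and workflow step invented; this is not the QA4Seas metadata model itself): provenance of a verification product can be recorded as a handful of PROV-O triples and serialized as Turtle without any special tooling.

    # Emit PROV-O provenance triples (Turtle syntax) for a hypothetical skill map.
    PREFIXES = (
        "@prefix prov: <http://www.w3.org/ns/prov#> .\n"
        "@prefix ex:   <http://example.org/qa4seas/> .\n"
    )

    triples = [
        ("ex:skillMap_2017", "a", "prov:Entity"),
        ("ex:skillMap_2017", "prov:wasGeneratedBy", "ex:verificationRun_42"),
        ("ex:verificationRun_42", "a", "prov:Activity"),
        ("ex:verificationRun_42", "prov:used", "ex:hindcastDataset"),
        ("ex:verificationRun_42", "prov:used", "ex:referenceObservations"),
        ("ex:verificationRun_42", "prov:wasAssociatedWith", "ex:verificationScript"),
    ]

    def to_turtle(prefixes, triples):
        """Serialize (subject, predicate, object) tuples as Turtle text."""
        return prefixes + "\n".join(f"{s} {p} {o} ." for s, p, o in triples) + "\n"

    print(to_turtle(PREFIXES, triples))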
This report is a product of the U.S. EPA's Environmental Technoloy Verification (ETV) Program and is focused on the Smart Sonics Ultrasonic Aqueous Cleaning Systems. The verification is based on three main objectives. (1) The Smart Sonic Aqueous Cleaning Systems, Model 2000 and...
Student-Teacher Linkage Verification: Model Process and Recommendations
ERIC Educational Resources Information Center
Watson, Jeffery; Graham, Matthew; Thorn, Christopher A.
2012-01-01
As momentum grows for tracking the role of individual educators in student performance, school districts across the country are implementing projects that involve linking teachers to their students. Programs that link teachers to student outcomes require a verification process for student-teacher linkages. Linkage verification improves accuracy by…
Considerations in STS payload environmental verification
NASA Technical Reports Server (NTRS)
Keegan, W. B.
1978-01-01
Considerations regarding the Space Transportation System (STS) payload environmental verification are reviewed. It is noted that emphasis is placed on testing at the subassembly level and that the basic objective of structural dynamic payload verification is to ensure reliability in a cost-effective manner. Structural analyses consist of: (1) stress analysis for critical loading conditions, (2) modal analysis for launch and orbital configurations, (3) flight loads analysis, (4) test simulation analysis to verify models, (5) kinematic analysis of deployment/retraction sequences, and (6) structural-thermal-optical program analysis. In addition to these approaches, payload verification programs are being developed in the thermal-vacuum area. These include exposure to extreme temperatures, temperature cycling, thermal-balance testing and thermal-vacuum testing.
Magnetic cleanliness verification approach on tethered satellite
NASA Technical Reports Server (NTRS)
Messidoro, Piero; Braghin, Massimo; Grande, Maurizio
1990-01-01
Magnetic cleanliness testing was performed on the Tethered Satellite as the last step of an articulated verification campaign aimed at demonstrating the capability of the satellite to support its TEMAG (TEthered MAgnetometer) experiment. Tests at unit level and analytical predictions/correlations using a dedicated mathematical model (GANEW program) are also part of the verification activities. Details of the tests are presented, and the results of the verification are described together with recommendations for later programs.
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.
2018-01-01
Verification and validation (V&V) is a highly challenging undertaking for SLS structural dynamics models due to the magnitude and complexity of SLS assemblies and subassemblies. Responses to challenges associated with V&V of Space Launch System (SLS) structural dynamics models are presented in Volume I of this paper. Four methodologies addressing specific requirements for V&V are discussed: (1) Residual Mode Augmentation (RMA), (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), (3) Mode Consolidation (MC), and (4) Experimental Mode Verification (EMV). This document contains the appendices to Volume I.
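Illustrative aside (a generic textbook sketch with an invented spring-chain model, not the SLS models themselves): the Guyan-type reduction named above condenses a stiffness matrix onto a retained set of "master" degrees of freedom.

    import numpy as np

    def guyan_reduce(K, master_idx):
        """Static (Guyan) condensation of a stiffness matrix onto retained DOFs:
        K_red = Kmm - Kms Kss^{-1} Ksm."""
        slave_idx = np.setdiff1d(np.arange(K.shape[0]), master_idx)
        Kmm = K[np.ix_(master_idx, master_idx)]
        Kms = K[np.ix_(master_idx, slave_idx)]
        Ksm = K[np.ix_(slave_idx, master_idx)]
        Kss = K[np.ix_(slave_idx, slave_idx)]
        return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

    # Invented 4-DOF spring-chain stiffness matrix; retain DOFs 0 and 3.
    k = 1000.0
    K = k * np.array([[ 2, -1,  0,  0],
                      [-1,  2, -1,  0],
                      [ 0, -1,  2, -1],
                      [ 0,  0, -1,  2]], dtype=float)
    print(guyan_reduce(K, np.array([0, 3])))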
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2009-01-01
In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
Description of a Website Resource for Turbulence Modeling Verification and Validation
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Smith, Brian R.; Huang, George P.
2010-01-01
The activities of the Turbulence Model Benchmarking Working Group - which is a subcommittee of the American Institute of Aeronautics and Astronautics (AIAA) Fluid Dynamics Technical Committee - are described. The group's main purpose is to establish a web-based repository for Reynolds-averaged Navier-Stokes turbulence model documentation, including verification and validation cases. This turbulence modeling resource has been established based on feedback from a survey on what is needed to achieve consistency and repeatability in turbulence model implementation and usage, and to document and disseminate information on new turbulence models or improvements to existing models. The various components of the website are described in detail: description of turbulence models, turbulence model readiness rating system, verification cases, validation cases, validation databases, and turbulence manufactured solutions. An outline of future plans of the working group is also provided.
Drumm, Daniel W; Greentree, Andrew D
2017-11-07
Finding a fluorescent target in a biological environment is a common and pressing microscopy problem. This task is formally analogous to the canonical search problem. In ideal (noise-free, truthful) search problems, the well-known binary search is optimal. The case of half-lies, where one of two responses to a search query may be deceptive, introduces a richer, Rényi-Ulam problem and is particularly relevant to practical microscopy. We analyse microscopy in the contexts of Rényi-Ulam games and half-lies, developing a new family of heuristics. We show the cost of insisting on verification by positive result in search algorithms; for the zero-half-lie case bisectioning with verification incurs a 50% penalty in the average number of queries required. The optimal partitioning of search spaces directly following verification in the presence of random half-lies is determined. Trisectioning with verification is shown to be the most efficient heuristic of the family in a majority of cases.
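Illustrative aside (noise-free case only, with an invented search size): the query-count penalty of "verification by positive result" can be seen by comparing plain bisection with a bisection in which every positive answer is confirmed by repeating the query; in this zero-half-lie setting the extra confirmations cost roughly 50% more queries on average.

    def bisect_queries(n, target, verify_positive=False):
        """Count oracle queries to locate `target` in range(n) by bisection.
        The oracle truthfully answers 'is target <= mid?'. If verify_positive is
        True, every positive answer is repeated once to confirm it."""
        lo, hi, queries = 0, n - 1, 0
        while lo < hi:
            mid = (lo + hi) // 2
            queries += 1
            answer = target <= mid
            if answer and verify_positive:
                queries += 1          # repeat the query to verify the positive result
            if answer:
                hi = mid
            else:
                lo = mid + 1
        return queries

    n = 1024
    plain    = sum(bisect_queries(n, t) for t in range(n)) / n
    verified = sum(bisect_queries(n, t, verify_positive=True) for t in range(n)) / n
    print(f"plain bisection   : {plain:.1f} queries on average")
    print(f"verified bisection: {verified:.1f} queries on average")
    print(f"penalty           : {(verified / plain - 1) * 100:.0f}%")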
2005-05-01
Table-of-contents excerpt (extraction residue): verification of the BondJo bonded-joint model for homogeneous isotropic and orthotropic examples from the Delale and Erdogan publication, including adhesive stress comparisons between BondJo, Ansys solid-model FEA, and Delale and Erdogan plate theory.
Systematic study of source mask optimization and verification flows
NASA Astrophysics Data System (ADS)
Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi
2012-06-01
Source mask optimization (SMO) has emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flows and verification metrics in the field, confounding the end user of the technique. A systematic study of the different flows, and of their possible unification, has been missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, understand their commonality and differences, and provide a generic guideline for RET selection via SMO. The paper discusses three types of variation that commonly arise in SMO, namely pattern preparation and selection, availability of a relevant OPC recipe for a freeform source, and finally the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with the quality metrics of DOF and MEEF is examined.
1981-03-01
overcome the shortcomings of this system. A phase III study develops the breakup model of the Space Shuttle cluster at various times into flight. The remainder of this fragment is table-of-contents residue covering the rocket model, combustion chamber operation, and results.
Recent literature on structural modeling, identification, and analysis
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.
1990-01-01
The literature on the mathematical modeling of large space structures is first reviewed, with attention given to continuum models, model order reduction, substructuring, and computational techniques. System identification and mode verification are then discussed with reference to the verification of mathematical models of large space structures. In connection with analysis, the paper surveys recent research on eigensolvers and dynamic response solvers for large-order finite-element-based models.
Typology of Strategies of Personality Meaning-Making during Professional Education
ERIC Educational Resources Information Center
Shchipanova, Dina Ye.; Lebedeva, Ekaterina V.; Sukhinin, Valentin P.; Valieva, Elizaveta N.
2016-01-01
The importance of the studied issue stems from the fact that the high dynamics of labour-market processes require individuals to work constantly on self-determination and on finding significance in their professional activity. The purpose of the research is the theoretical development and empirical verification of the types of strategies of…
Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, S R; Bihari, B L; Salari, K
As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.
NASA Astrophysics Data System (ADS)
Shouquan Cheng, Chad; Li, Qian; Li, Guilong
2010-05-01
The synoptic weather typing approach has become popular in evaluating the impacts of climate change on a variety of environmental problems. One of the reasons is its ability to categorize a complex set of meteorological variables as a coherent index, which can facilitate analyses of local climate change impacts. The weather typing method has been successfully applied in Environment Canada for several research projects to analyze climatic change impacts on a number of extreme weather events, such as freezing rain, heavy rainfall, high-/low-flow events, air pollution, and human health. These studies comprise three major parts: (1) historical simulation modeling to verify the extreme weather events, (2) statistical downscaling to provide station-scale future hourly/daily climate data, and (3) projections of changes in frequency and intensity of future extreme weather events in this century. To achieve these goals, in addition to synoptic weather typing, modeling conceptualizations in meteorology and hydrology and a number of linear/nonlinear regression techniques were applied. Furthermore, a formal model result verification process has been built into each of the three parts of the projects. The results of the verification, based on historical observations of the outcome variables predicted by the models, showed very good agreement. The modeling results from these projects indicate that the frequency and intensity of future extreme weather events are projected to increase significantly under a changing climate in this century. This talk will introduce these research projects and outline the modeling exercise and result verification process. The major findings on future projections from the studies will be summarized in the presentation as well. One of the major conclusions is that the procedures used in the studies (including synoptic weather typing) are useful for analyzing climate change impacts on future extreme weather events. The projected significant increases in the frequency and intensity of future extreme weather events would be useful to consider when revising engineering infrastructure design standards and developing adaptation strategies and policies.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
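Illustrative aside (a generic self-timed marked-graph simulator with an invented three-node graph and invented latencies, not the ATAMM tool itself): in a marked graph a node fires once every input edge holds a token, computes for its latency, and then deposits a token on each output edge, which is the execution model underlying the performance bounds discussed above.

    # node -> (latency in time units, list of successor nodes)
    GRAPH = {
        "read":    (1, ["filter"]),
        "filter":  (3, ["control"]),
        "control": (2, ["read"]),      # feedback edge closes the cycle
    }
    TOKENS = {("control", "read"): 1}  # initial marking enables 'read'

    def all_edges(graph):
        return [(n, s) for n, (_, succs) in graph.items() for s in succs]

    def simulate(graph, tokens, horizon=30):
        """Discrete-time simulation; returns (time, node) firing-completion events."""
        busy, events = {}, []                        # busy: node -> finish time
        for t in range(horizon):
            for node, finish in list(busy.items()):  # finish firings, deposit tokens
                if finish == t:
                    for succ in graph[node][1]:
                        tokens[(node, succ)] = tokens.get((node, succ), 0) + 1
                    events.append((t, node))
                    del busy[node]
            for node, (latency, _) in graph.items():  # start any enabled, idle node
                inputs = [e for e in all_edges(graph) if e[1] == node]
                if node not in busy and inputs and all(tokens.get(e, 0) > 0 for e in inputs):
                    for e in inputs:
                        tokens[e] -= 1
                    busy[node] = t + latency
        return events

    for time, node in simulate(GRAPH, dict(TOKENS)):
        print(f"t={time:2d}  {node} completed")   # settles into a period of 6 time units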
Model Checking Satellite Operational Procedures
NASA Astrophysics Data System (ADS)
Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri
2011-08-01
We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a complex system such as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach, we present experimental results on a simple meaningful scenario. Our results show that we can save up to 90% of verification time.
Practical Application of Model Checking in Software Verification
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Skakkebaek, Jens Ulrik
1999-01-01
This paper presents our experiences in applying JAVA PATHFINDER (JPF), a recently developed JAVA-to-SPIN translator, to the finding of synchronization bugs in a Chinese Chess game server application written in JAVA. We give an overview of JPF and the subset of JAVA that it supports, and describe the abstraction and verification of the game server. Finally, we analyze the results of the effort. We argue that abstraction by under-approximation is necessary for abstracting sufficiently smaller models for verification purposes; that user guidance is crucial for effective abstraction; and that current model checkers do not conveniently support the computational models of software in general and JAVA in particular.
Jitter Test Program and On-Orbit Mitigation Strategies for Solar Dynamic Observatory
NASA Technical Reports Server (NTRS)
Liu, Kuo-Chia; Kenney, Thomas; Maghami, Peiman; Mule, Pete; Blaurock, Carl; Haile, William B.
2007-01-01
The Solar Dynamic Observatory (SDO) aims to study the Sun's influence on the Earth, the source, storage, and release of the solar energy, and the interior structure of the Sun. During science observations, the jitter stability at the instrument focal plane must be maintained to less than a fraction of an arcsecond for two of the SDO instruments. To meet these stringent requirements, a significant amount of analysis and test effort has been devoted to predicting the jitter induced from various disturbance sources. This paper presents an overview of the SDO jitter analysis approach and test effort performed to date. It emphasizes the disturbance modeling, verification, calibration, and validation of the high gain antenna stepping mechanism and the reaction wheels, which are the two largest jitter contributors. This paper also describes on-orbit mitigation strategies to protect the system from analysis model uncertainties. Lessons learned from the SDO jitter analyses and test programs are included in the paper to share the knowledge gained with the community.
Interpreter composition issues in the formal verification of a processor-memory module
NASA Technical Reports Server (NTRS)
Fura, David A.; Cohen, Gerald C.
1994-01-01
This report describes interpreter composition techniques suitable for the formal specification and verification of a processor-memory module using the HOL theorem proving system. The processor-memory module is a multichip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. Modeling and verification methods were developed that permit provably secure composition at the transaction-level of specification, significantly reducing the complexity of the hierarchical verification of the system.
Hejl, H.R.
1989-01-01
The precipitation-runoff modeling system was applied to the 8.21 sq-mi drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and from 0.20 to 0.03 in/h for the two general soil groups in the calibrations. Simulated runoff volumes using 7 of the 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharge had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard errors of estimate for volumes and peaks. (USGS)
A strategy for automatically generating programs in the lucid programming language
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1987-01-01
A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The programs generated are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid programming language is described, and the automatic program generation strategy is presented and applied to several example problems.
The Hawaiian Electric Companies | Energy Systems Integration Facility
Verification of Voltage Regulation Operating Strategies: NREL has studied how the Hawaiian Electric Companies can best manage voltage regulation functions from distributed technologies.
Investigation of a Verification and Validation Tool with a Turbofan Aircraft Engine Application
NASA Technical Reports Server (NTRS)
Uth, Peter; Narang-Siddarth, Anshu; Wong, Edmond
2018-01-01
The development of more advanced control architectures for turbofan aircraft engines can yield gains in performance and efficiency over the lifetime of an engine. However, the implementation of these increasingly complex controllers is contingent on their ability to provide safe, reliable engine operation. Therefore, having the means to verify the safety of new control algorithms is crucial. As a step towards this goal, CoCoSim, a publicly available verification tool for Simulink, is used to analyze C-MAPSS40k, a 40,000 lbf class turbo-fan engine model developed at NASA for testing new control algorithms. Due to current limitations of the verification software, several modifications are made to C-MAPSS40k to achieve compatibility with CoCoSim. Some of these modifications sacrifice fidelity to the original model. Several safety and performance requirements typical for turbofan engines are identified and constructed into a verification framework. Preliminary results using an industry standard baseline controller for these requirements are presented. While verification capabilities are demonstrated, a truly comprehensive analysis will require further development of the verification tool.
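Illustrative aside (the signal name, limit, and settling bound are invented and are not actual C-MAPSS40k or CoCoSim requirements): safety and performance requirements of the kind mentioned above can be prototyped as simple monitors over simulated response traces before being formalized for a verification tool.

    def check_step_response(trace, command, overshoot_limit, settle_tol, settle_steps):
        """Check two invented requirements over a sampled response trace:
        (1) the response never exceeds command * (1 + overshoot_limit);
        (2) the final settle_steps samples stay within +/- settle_tol of the command."""
        no_overshoot = all(x <= command * (1.0 + overshoot_limit) for x in trace)
        tail = trace[-settle_steps:]
        settled = all(abs(x - command) <= settle_tol * command for x in tail)
        return {"no_overshoot": no_overshoot, "settled": settled}

    # Invented fan-speed response (rpm) to a step command of 3000 rpm.
    trace = [0, 900, 1800, 2600, 3050, 3120, 3080, 3020, 3005, 2998, 3001, 3000]
    print(check_step_response(trace, command=3000.0,
                              overshoot_limit=0.05, settle_tol=0.01, settle_steps=4))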
Madon, Stephanie; Guyll, Max; Buller, Ashley A.; Scherr, Kyle C.; Willard, Jennifer; Spoth, Richard
2010-01-01
This research examined whether self-fulfilling prophecy effects are mediated by self-verification, informational conformity, and modeling processes. The authors examined these mediational processes across multiple time frames with longitudinal data obtained from two samples of mother-child dyads (N1 = 487; N2 = 287). Children's alcohol use was the outcome variable. The results provided consistent support for the mediational process of self-verification. In both samples and across several years of adolescence, there was a significant indirect effect of mothers' beliefs on children's alcohol use through children's self-assessed likelihood of drinking alcohol in the future. Comparatively less support was found for informational conformity and modeling processes as mediators of mothers' self-fulfilling effects. The potential for self-fulfilling prophecies to produce long-lasting changes in targets' behavior via self-verification processes is discussed. PMID:18665708
Madon, Stephanie; Guyll, Max; Buller, Ashley A; Scherr, Kyle C; Willard, Jennifer; Spoth, Richard
2008-08-01
This research examined whether self-fulfilling prophecy effects are mediated by self-verification, informational conformity, and modeling processes. The authors examined these mediational processes across multiple time frames with longitudinal data obtained from two samples of mother-child dyads (N1 = 486; N2 = 287), with children's alcohol use as the outcome variable. The results provided consistent support for the mediational process of self-verification. In both samples and across several years of adolescence, there was a significant indirect effect of mothers' beliefs on children's alcohol use through children's self-assessed likelihood of drinking alcohol in the future. Comparatively less support was found for informational conformity and modeling processes as mediators of mothers' self-fulfilling effects. The potential for self-fulfilling prophecies to produce long-lasting changes in targets' behavior via self-verification processes is discussed. (c) 2008 APA, all rights reserved
Valentiner, David P; Skowronski, John J; McGrath, Patrick B; Smith, Sarah A; Renner, Kerry A
2011-10-01
A self-verification model of social anxiety views negative social self-esteem as a core feature of social anxiety. This core feature is proposed to be maintained through self-verification processes, such as by leading individuals with negative social self-esteem to prefer negative social feedback. This model is tested in two studies. In Study 1, questionnaires were administered to a college sample (N = 317). In Study 2, questionnaires were administered to anxiety disordered patients (N = 62) before and after treatment. Study 1 developed measures of preference for negative social feedback and social self-esteem, and provided evidence of their incremental validity in a college sample. Study 2 found that these two variables are not strongly related to fears of evaluation, are relatively unaffected by a treatment that targets such fears, and predict residual social anxiety following treatment. Overall, these studies provide preliminary evidence for a self-verification model of social anxiety.
A Methodology for Evaluating Artifacts Produced by a Formal Verification Process
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Miner, Paul S.; Person, Suzette
2011-01-01
The goal of this study is to produce a methodology for evaluating the claims and arguments employed in, and the evidence produced by formal verification activities. To illustrate the process, we conduct a full assessment of a representative case study for the Enabling Technology Development and Demonstration (ETDD) program. We assess the model checking and satisfiabilty solving techniques as applied to a suite of abstract models of fault tolerant algorithms which were selected to be deployed in Orion, namely the TTEthernet startup services specified and verified in the Symbolic Analysis Laboratory (SAL) by TTTech. To this end, we introduce the Modeling and Verification Evaluation Score (MVES), a metric that is intended to estimate the amount of trust that can be placed on the evidence that is obtained. The results of the evaluation process and the MVES can then be used by non-experts and evaluators in assessing the credibility of the verification results.
Space transportation system payload interface verification
NASA Technical Reports Server (NTRS)
Everline, R. T.
1977-01-01
The paper considers STS payload-interface verification requirements and the capability provided by STS to support verification. The intent is to standardize as many interfaces as possible, not only through the design, development, test and evaluation (DDT and E) phase of the major payload carriers but also into the operational phase. The verification process is discussed in terms of its various elements, such as the Space Shuttle DDT and E (including the orbital flight test program) and the major payload carriers DDT and E (including the first flights). Five tools derived from the Space Shuttle DDT and E are available to support the verification process: mathematical (structural and thermal) models, the Shuttle Avionics Integration Laboratory, the Shuttle Manipulator Development Facility, and interface-verification equipment (cargo-integration test equipment).
Genome-Scale Screen for DNA Methylation-Based Detection Markers for Ovarian Cancer
Houshdaran, Sahar; Shen, Hui; Widschwendter, Martin; Daxenbichler, Günter; Long, Tiffany; Marth, Christian; Laird-Offringa, Ite A.; Press, Michael F.; Dubeau, Louis; Siegmund, Kimberly D.; Wu, Anna H.; Groshen, Susan; Chandavarkar, Uma; Roman, Lynda D.; Berchuck, Andrew; Pearce, Celeste L.; Laird, Peter W.
2011-01-01
Background: The identification of sensitive biomarkers for the detection of ovarian cancer is of high clinical relevance for early detection and/or monitoring of disease recurrence. We developed a systematic multi-step biomarker discovery and verification strategy to identify candidate DNA methylation markers for the blood-based detection of ovarian cancer. Methodology/Principal Findings: We used the Illumina Infinium platform to analyze the DNA methylation status of 27,578 CpG sites in 41 ovarian tumors. We employed a marker selection strategy that emphasized sensitivity by requiring consistency of methylation across tumors, while achieving specificity by excluding markers with methylation in control leukocyte or serum DNA. Our verification strategy involved testing the ability of identified markers to monitor disease burden in serially collected serum samples from ovarian cancer patients who had undergone surgical tumor resection compared to CA-125 levels. We identified one marker, IFFO1 promoter methylation (IFFO1-M), that is frequently methylated in ovarian tumors and that is rarely detected in the blood of normal controls. When tested in 127 serially collected sera from ovarian cancer patients, IFFO1-M showed post-resection kinetics significantly correlated with serum CA-125 measurements in six out of 16 patients. Conclusions/Significance: We implemented an effective marker screening and verification strategy, leading to the identification of IFFO1-M as a blood-based candidate marker for sensitive detection of ovarian cancer. Serum levels of IFFO1-M displayed post-resection kinetics consistent with a reflection of disease burden. We anticipate that IFFO1-M and other candidate markers emerging from this marker development pipeline may provide disease detection capabilities that complement existing biomarkers. PMID:22163280
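As a rough illustration of the marker-selection logic described above (consistency of methylation across tumors, absence of methylation in control leukocyte or serum DNA), the following sketch filters a beta-value matrix with hypothetical thresholds; the cutoffs and array sizes are assumptions, not the study's actual criteria.

```python
# A minimal sketch of the selection filter, assuming beta values in [0, 1] and
# hypothetical thresholds; the real screen used 27,578 CpGs and 41 tumors.
import numpy as np

def select_markers(tumor_beta, control_beta,
                   tumor_min=0.3, tumor_frac=0.9, control_max=0.05):
    """Return indices of markers consistently methylated in tumors (sensitivity)
    and essentially unmethylated in all controls (specificity)."""
    consistent = (tumor_beta >= tumor_min).mean(axis=1) >= tumor_frac
    clean_in_controls = (control_beta <= control_max).all(axis=1)
    return np.where(consistent & clean_in_controls)[0]

rng = np.random.default_rng(1)
tumors = rng.random((1000, 41))          # toy beta-value matrices
controls = rng.random((1000, 20)) * 0.1
print(select_markers(tumors, controls)[:10])
```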
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.
1982-01-01
The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free-surface model also provides surface height variations with time.
Consistent model driven architecture
NASA Astrophysics Data System (ADS)
Niepostyn, Stanisław J.
2015-09-01
The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently the verification of consistency of these diagrams is needed in order to identify errors in requirements at the early stage of the development process. The verification of consistency is difficult due to a semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to generate automatically complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check practicability (feasibility) of software architecture models.
On improving the performance of nonphotochemical quenching in CP29 light-harvesting antenna complex
Berman, Gennady Petrovich; Nesterov, Alexander I.; Sayre, Richard Thomas; ...
2016-02-02
In this study, we model and simulate the performance of charge-transfer in nonphotochemical quenching (NPQ) in the CP29 light-harvesting antenna-complex associated with photosystem II (PSII). The model consists of five discrete excitonic energy states and two sinks, responsible for the potentially damaging processes and charge-transfer channels, respectively. We demonstrate that by varying (i) the parameters of the chlorophyll-based dimer, (ii) the resonant properties of the protein-solvent environment interaction, and (iii) the energy transfer rates to the sinks, one can significantly improve the performance of the NPQ. In conclusion, our analysis suggests strategies for improving the performance of the NPQ in response to environmental changes, and may stimulate experimental verification.
Physico-Chemical Dynamics of Nanoparticle Formation during Laser Decontamination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, M.D.
2005-06-01
Laser-ablation based decontamination is a new and effective approach for simultaneous removal and characterization of contaminants from surfaces (e.g., building interior and exterior walls, ground floors, etc.). The scientific objectives of this research are to: (1) characterize particulate matter generated during the laser-ablation based decontamination, (2) develop a technique for simultaneous cleaning and spectroscopic verification, and (3) develop an empirical model for predicting particle generation for the size range from 10 nm to tens of micrometers. This research project provides fundamental data obtained through a systematic study on the particle generation mechanism, and also provides a working model for prediction of particle generation such that an effective operational strategy can be devised to facilitate worker protection.
Survey of Verification and Validation Techniques for Small Satellite Software Development
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
2015-01-01
The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state explosion problem and to make them more usable by small-satellite software engineers to be regularly applied to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or those errors that are produced after launch through the effects of ionizing radiation.
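The survey's last point, run-time monitoring combined with fault-tolerant design, can be pictured with a small sketch: a monitor checks an invariant on every control cycle and invokes a recovery action when it fails. The invariant, the 6000 rpm limit, and the safe-mode response below are hypothetical, not taken from any surveyed mission.

```python
# A minimal run-time monitoring sketch, assuming a hypothetical reaction-wheel
# speed invariant and a print-only "safe mode" recovery action.
class RuntimeMonitor:
    def __init__(self, invariant, on_violation):
        self.invariant = invariant
        self.on_violation = on_violation

    def step(self, state):
        # Called once per control cycle with the current telemetry snapshot.
        if not self.invariant(state):
            self.on_violation(state)

def wheel_speed_ok(state):
    return abs(state["wheel_speed_rpm"]) <= 6000.0   # hypothetical limit

def safe_mode(state):
    print("invariant violated, commanding safe mode:", state)

monitor = RuntimeMonitor(wheel_speed_ok, safe_mode)
for rpm in (1000.0, 5900.0, 6200.0):                 # simulated telemetry samples
    monitor.step({"wheel_speed_rpm": rpm})
```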
The Environmental Technology Verification report discusses the technology and performance of Laser Touch model LT-B512 targeting device manufactured by Laser Touch and Technologies, LLC, for manual spray painting operations. The relative transfer efficiency (TE) improved an avera...
Chai, Linguo; Cai, Baigen; ShangGuan, Wei; Wang, Jian; Wang, Huashen
2017-08-23
To enhance the reality of Connected and Autonomous Vehicles (CAVs) kinematic simulation scenarios and to guarantee the accuracy and reliability of the verification, a four-layer CAVs kinematic simulation framework, composed of a road network layer, a vehicle operating layer, an uncertainties modelling layer, and a demonstrating layer, is proposed in this paper. Properties of the intersections are defined to describe the road network. A target-position-based vehicle position updating method is designed to simulate such vehicle behaviors as lane changing and turning. Vehicle kinematic models are implemented to maintain the status of the vehicles when they are moving towards the target position. Priorities for individual vehicle control are authorized for different layers. Operation mechanisms of CAV uncertainties, which are defined as position error and communication delay in this paper, are implemented in the simulation to enhance its reality. A simulation platform is developed based on the proposed methodology. A comparison of simulated and theoretical vehicle delay has been analyzed to prove the validity and the credibility of the platform. The scenario of rear-end collision avoidance is conducted to verify the uncertainties operating mechanisms, and a slot-based intersections (SIs) control strategy is realized and verified in the simulation platform to demonstrate the platform's support for CAV kinematic simulation and verification.
Chai, Linguo; Cai, Baigen; ShangGuan, Wei; Wang, Jian; Wang, Huashen
2017-01-01
To enhance the reality of Connected and Autonomous Vehicles (CAVs) kinematic simulation scenarios and to guarantee the accuracy and reliability of the verification, a four-layer CAVs kinematic simulation framework, composed of a road network layer, a vehicle operating layer, an uncertainties modelling layer, and a demonstrating layer, is proposed in this paper. Properties of the intersections are defined to describe the road network. A target-position-based vehicle position updating method is designed to simulate such vehicle behaviors as lane changing and turning. Vehicle kinematic models are implemented to maintain the status of the vehicles when they are moving towards the target position. Priorities for individual vehicle control are authorized for different layers. Operation mechanisms of CAV uncertainties, which are defined as position error and communication delay in this paper, are implemented in the simulation to enhance its reality. A simulation platform is developed based on the proposed methodology. A comparison of simulated and theoretical vehicle delay has been analyzed to prove the validity and the credibility of the platform. The scenario of rear-end collision avoidance is conducted to verify the uncertainties operating mechanisms, and a slot-based intersections (SIs) control strategy is realized and verified in the simulation platform to demonstrate the platform's support for CAV kinematic simulation and verification. PMID:28832518
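The two uncertainty mechanisms named in the abstracts above, position error and communication delay, can be sketched on a simple target-position-based update as follows; the point-mass kinematics, noise level, and delay length are illustrative assumptions rather than the paper's models.

```python
# A minimal sketch, assuming a point-mass longitudinal model, Gaussian position
# error, and a fixed communication delay measured in simulation steps.
import random
from collections import deque

class SimVehicle:
    def __init__(self, x=0.0, speed=15.0, pos_sigma=0.3, delay_steps=3):
        self.x = x                               # true position along the lane (m)
        self.speed = speed                       # speed cap for the update (m/s)
        self.pos_sigma = pos_sigma               # std. dev. of reported position (m)
        self.delay_steps = delay_steps
        self.buffer = deque(maxlen=delay_steps)  # reports not yet released

    def step(self, target_x, dt=0.1):
        # Target-position-based update: move toward the target, capped by speed * dt.
        move = max(-self.speed * dt, min(target_x - self.x, self.speed * dt))
        self.x += move
        # Uncertainties: noisy report, released only after the delay has elapsed.
        self.buffer.append(self.x + random.gauss(0.0, self.pos_sigma))
        return self.buffer[0] if len(self.buffer) == self.delay_steps else None

vehicle = SimVehicle()
reported = None
for _ in range(20):
    reported = vehicle.step(target_x=100.0)
print("true x:", round(vehicle.x, 2), "reported x (noisy, delayed):", round(reported, 2))
```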
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.
1982-01-01
The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Divito, Ben L.; Holloway, C. Michael
1994-01-01
In this paper the design and formal verification of the lower levels of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, are presented. The RCP uses NMR-style redundancy to mask faults and internal majority voting to flush the effects of transient faults. Two new layers of the RCP hierarchy are introduced: the Minimal Voting refinement (DA_minv) of the Distributed Asynchronous (DA) model and the Local Executive (LE) Model. Both the DA_minv model and the LE model are specified formally and have been verified using the Ehdm verification system. All specifications and proofs are available electronically via the Internet using anonymous FTP or World Wide Web (WWW) access.
NASA Technical Reports Server (NTRS)
Gravitz, Robert M.; Hale, Joseph
2006-01-01
NASA's Exploration Systems Mission Directorate (ESMD) is implementing a management approach for modeling and simulation (M&S) that will provide decision-makers information on the model's fidelity, credibility, and quality. This information will allow the decision-maker to understand the risks involved in using a model's results in the decision-making process. This presentation will discuss NASA's approach for verification and validation (V&V) of its models or simulations supporting space exploration. This presentation will describe NASA's V&V process and the associated M&S verification and validation (V&V) activities required to support the decision-making process. The M&S V&V Plan and V&V Report templates for ESMD will also be illustrated.
Cho, Gun-Sang; Kim, Dae-Sung; Yi, Eun-Surk
2015-12-01
The purpose of this study is to verify a relationship model between the Korean new elderly class's recovery resilience and productive aging. As of 2013, this study sampled preliminary elderly people in Gyeonggi-do and other provinces nationwide. Data from a total of 484 effective subjects were analyzed. The collected data were processed using IBM SPSS 20.0 and AMOS 20.0 and underwent descriptive statistical analysis, confirmatory factor analysis, and structural model verification. The path coefficient associated with model fitness was examined. The standardized path coefficient between recovery resilience and productive aging is β=0.975 (t=14.790), revealing a statistically significant positive effect. Thus, the proposed basic model of a direct path from recovery resilience to productive aging was found to fit the data.
Cho, Gun-Sang; Kim, Dae-Sung; Yi, Eun-Surk
2015-01-01
The purpose of this study is to verify a relationship model between the Korean new elderly class’s recovery resilience and productive aging. As of 2013, this study sampled preliminary elderly people in Gyeonggi-do and other provinces nationwide. Data from a total of 484 effective subjects were analyzed. The collected data were processed using IBM SPSS 20.0 and AMOS 20.0 and underwent descriptive statistical analysis, confirmatory factor analysis, and structural model verification. The path coefficient associated with model fitness was examined. The standardized path coefficient between recovery resilience and productive aging is β=0.975 (t=14.790), revealing a statistically significant positive effect. Thus, the proposed basic model of a direct path from recovery resilience to productive aging was found to fit the data. PMID:26730383
ETV VR/VS SUNSET LABORATORY MODEL 4 OC-EC FIELD ANALYZER
This is a verification report statement that describes the verification test which was conducted over a period of approximately 30 days (April 5 to May 7, 2013) and involved the continuous operation of duplicate Model 4 OC-EC analyzers at the Battelle Columbus Operations Special ...
Verification testing of the Vortechnics, Inc. Vortechs® System, Model 1000 was conducted on a 0.25 acre portion of an elevated highway near downtown Milwaukee, Wisconsin. The Vortechs is designed to remove settleable and floatable pollutants from stormwater runoff. The Vortechs® ...
Experimental verification of a model of a two-link flexible, lightweight manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Huggins, James David
1988-01-01
Experimental verification is presented for an assumed modes model of a large, two-link, flexible manipulator designed and constructed in the School of Mechanical Engineering at Georgia Institute of Technology. The structure was designed to have typical characteristics of a lightweight manipulator.
Beer, Lynn A.; Wang, Huan; Tang, Hsin-Yao; Cao, Zhijun; Chang-Wong, Tony; Tanyi, Janos L.; Zhang, Rugang; Liu, Qin; Speicher, David W.
2013-01-01
The most cancer-specific biomarkers in blood are likely to be proteins shed directly by the tumor rather than less specific inflammatory or other host responses. The use of xenograft mouse models together with in-depth proteome analysis for identification of human proteins in the mouse blood is an under-utilized strategy that can clearly identify proteins shed by the tumor. In the current study, 268 human proteins shed into mouse blood from human OVCAR-3 serous tumors were identified based upon human vs. mouse species differences using a four-dimensional plasma proteome fractionation strategy. A multi-step prioritization and verification strategy was subsequently developed to efficiently select some of the most promising biomarkers from this large number of candidates. A key step was parallel analysis of human proteins detected in the tumor supernatant, because substantially greater sequence coverage for many of the human proteins initially detected in the xenograft mouse plasma confirmed assignments as tumor-derived human proteins. Verification of candidate biomarkers in patient sera was facilitated by in-depth, label-free quantitative comparisons of serum pools from patients with ovarian cancer and benign ovarian tumors. The only proteins that advanced to multiple reaction monitoring (MRM) assay development were those that exhibited increases in ovarian cancer patients compared with benign tumor controls. MRM assays were facilely developed for all 11 novel biomarker candidates selected by this process and analysis of larger pools of patient sera suggested that all 11 proteins are promising candidate biomarkers that should be further evaluated on individual patient blood samples. PMID:23544127
Assessment of Galileo modal test results for mathematical model verification
NASA Technical Reports Server (NTRS)
Trubert, M.
1984-01-01
The modal test program for the Galileo Spacecraft was completed at the Jet Propulsion Laboratory in the summer of 1983. The multiple sine dwell method was used for the baseline test. The Galileo Spacecraft is a rather complex 2433 kg structure made of a central core on which seven major appendages representing 30 percent of the total mass are attached, resulting in a high modal density structure. The test revealed a strong nonlinearity in several major modes. This nonlinearity discovered in the course of the test necessitated running additional tests at the unusually high response levels of up to about 21 g. The high levels of response were required to obtain a model verification valid at the level of loads for which the spacecraft was designed. Because of the high modal density and the nonlinearity, correlation between the dynamic mathematical model and the test results becomes a difficult task. Significant changes in the pre-test analytical model are necessary to establish confidence in the upgraded analytical model used for the final load verification. This verification, using a test verified model, is required by NASA to fly the Galileo Spacecraft on the Shuttle/Centaur launch vehicle in 1986.
Design Authority in the Test Programme Definition: The Alenia Spazio Experience
NASA Astrophysics Data System (ADS)
Messidoro, P.; Sacchi, E.; Beruto, E.; Fleming, P.; Marucchi Chierro, P.-P.
2004-08-01
In addition, since the Verification and Test Programme represents a significant part of the spacecraft development life cycle in terms of cost and time, the discussion very often aims to optimize the verification campaign by deleting or limiting some testing activities. The increased market pressure to reduce project schedule and cost gives rise to a dialectic process inside the project teams, involving program management and design authorities, in order to optimize the verification and testing programme. The paper introduces the Alenia Spazio experience in this context, drawn from real project life on different products and missions (science, TLC, EO, manned, transportation, military, commercial, recurrent and one-of-a-kind). Usually the applicable verification and testing standards (e.g. ECSS-E-10 part 2 "Verification" and ECSS-E-10 part 3 "Testing" [1]) are tailored to the specific project on the basis of its peculiar mission constraints. The Model Philosophy and the associated verification and test programme are defined following an iterative process which suitably combines several aspects (including, for example, test requirements and facilities), as shown in Fig. 1 (from ECSS-E-10), which summarizes model philosophy and Verification and Test Programme definition. The considered cases are mainly oriented to thermal and mechanical verification, where the benefits of possible test programme optimizations are more significant. Considering the thermal qualification and acceptance testing (i.e. Thermal Balance and Thermal Vacuum), the lessons learned from the development of several satellites are presented together with the corresponding recommended approaches. In particular, cases are indicated in which a proper Thermal Balance Test is mandatory and others, in the presence of a more recurrent design, where qualification by analysis could be envisaged. The importance of a proper Thermal Vacuum exposure for workmanship verification is also highlighted. Similar considerations are summarized for the mechanical testing, with particular emphasis on the importance of Modal Survey, Static and Sine Vibration Tests in the qualification stage, in combination with the effectiveness of the Vibro-Acoustic Test in acceptance. The apparent relative importance of the Sine Vibration Test for workmanship verification in specific circumstances is also highlighted. The verification of the project requirements is planned through a combination of suitable verification methods (in particular Analysis and Test) at the different verification levels (from System down to Equipment), in the proper verification stages (e.g. in Qualification and Acceptance).
The Verification-based Analysis of Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1996-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Zimmerman, Lindsay P; Goel, Satyender; Sathar, Shazia; Gladfelter, Charon E; Onate, Alejandra; Kane, Lindsey L; Sital, Shelly; Phua, Jasmin; Davis, Paris; Margellos-Anast, Helen; Meltzer, David O; Polonsky, Tamar S; Shah, Raj C; Trick, William E; Ahmad, Faraz S; Kho, Abel N
2018-01-01
This article presents and describes our methods in developing a novel strategy for recruitment of underrepresented, community-based participants, for pragmatic research studies leveraging routinely collected electronic health record (EHR) data. We designed a new approach for recruiting eligible patients from the community, while also leveraging affiliated health systems to extract clinical data for community participants. The strategy involves methods for data collection, linkage, and tracking. In this workflow, potential participants are identified in the community and surveyed regarding eligibility. These data are then encrypted and deidentified via a hashing algorithm for linkage of the community participant back to a record at a clinical site. The linkage allows for eligibility verification and automated follow-up. Longitudinal data are collected by querying the EHR data and surveying the community participant directly. We discuss this strategy within the context of two national research projects, a clinical trial and an observational cohort study. The community-based recruitment strategy is a novel, low-touch, clinical trial enrollment method to engage a diverse set of participants. Direct outreach to community participants, while utilizing EHR data for clinical information and follow-up, allows for efficient recruitment and follow-up strategies. This new strategy for recruitment links data reported from community participants to clinical data in the EHR and allows for eligibility verification and automated follow-up. The workflow has the potential to improve recruitment efficiency and engage traditionally underrepresented individuals in research. Schattauer GmbH Stuttgart.
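The linkage step described above (encrypting and deidentifying survey data via a hashing algorithm so a community participant can be matched to a clinical record) might look roughly like the following keyed-hash sketch; the normalization rules, fields, and secret handling are assumptions, not the project's actual algorithm, and production systems use vetted privacy-preserving record linkage protocols.

```python
# A minimal sketch of hash-based, deidentified record linkage. The shared
# secret, field choices, and normalization are hypothetical.
import hashlib
import hmac

SHARED_SECRET = b"site-specific secret distributed out of band"  # hypothetical

def linkage_token(first, last, dob_iso, secret=SHARED_SECRET):
    """Build a deterministic token from normalized identifiers."""
    normalized = f"{first.strip().lower()}|{last.strip().lower()}|{dob_iso}"
    return hmac.new(secret, normalized.encode("utf-8"), hashlib.sha256).hexdigest()

# The same token computed at the community site and at the clinical site
# matches without either side exchanging identifiers in the clear.
print(linkage_token("Maria", "Lopez", "1956-07-04"))
```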
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, A; Han, B; Bush, K
Purpose: Dosimetric verification of VMAT/SBRT is currently performed on one or two planes in a phantom with either film or array detectors. A robust and easy-to-use 3D dosimetric tool has been sought since the advent of conformal radiation therapy. Here we present such a strategy: an independent 3D VMAT/SBRT plan verification system based on the combined use of EPID and cloud-based Monte Carlo (MC) dose calculation. Methods: The 3D dosimetric verification proceeds in two steps. First, the plan was delivered with a high resolution portable EPID mounted on the gantry, and the EPID-captured gantry-angle-resolved VMAT/SBRT field images were converted into fluence by using the EPID pixel response function derived from MC simulations. The fluence was resampled and used as the input for an in-house developed Amazon cloud-based MC software to reconstruct the 3D dose distribution. The accuracy of the developed 3D dosimetric tool was assessed using a Delta4 phantom with various field sizes (square, circular, rectangular, and irregular MLC fields) and different patient cases. The method was applied to validate VMAT/SBRT plans using WFF and FFF photon beams (Varian TrueBeam STX). Results: It was found that the proposed method yielded results consistent with the Delta4 measurements. For points on the two detector planes, good agreement within 1.5% was found for all the testing fields. Patient VMAT/SBRT plan studies revealed a similar level of accuracy: an average γ-index passing rate of 99.2 ± 0.6% (3 mm/3%), 97.4 ± 2.4% (2 mm/2%), and 72.6 ± 8.4% (1 mm/1%). Conclusion: A valuable 3D dosimetric verification strategy has been developed for VMAT/SBRT plan validation. The technique provides a viable solution for a number of intractable dosimetry problems, such as small fields and plans with high dose gradients.
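The reported γ-index passing rates follow from the standard gamma criterion, which combines a dose-difference tolerance and a distance-to-agreement tolerance. A simplified 1D global gamma evaluation is sketched below; clinical tools interpolate finer grids and handle full 3D volumes, so this is only meant to make the 3 mm/3% style criterion concrete.

```python
# A minimal 1D global gamma-index sketch; grids, normalization, and search
# strategy are simplified relative to clinical dosimetry software.
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_crit=0.03, dist_crit_mm=3.0):
    """ref_dose, eval_dose: doses on the same 1D grid x (mm). Global gamma,
    dose criterion expressed as a fraction of the reference maximum."""
    dd = dose_crit * ref_dose.max()
    gamma = np.empty_like(ref_dose)
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dist2 = ((x - xi) / dist_crit_mm) ** 2
        dose2 = ((eval_dose - di) / dd) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma

x = np.linspace(-50, 50, 201)                       # mm
ref = np.exp(-(x / 30.0) ** 2) * 100.0              # reference profile (arbitrary units)
evl = ref * 1.02                                    # evaluated profile, 2% hot everywhere
g = gamma_1d(ref, evl, x)
print("gamma passing rate: %.1f%%" % (100.0 * np.mean(g <= 1.0)))
```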
Research on the response of the water sources to the climatic change in Shiyang River Basin
NASA Astrophysics Data System (ADS)
Jin, Y. Z.; Zeng, J. J.; Hu, X. Q.; Sun, D. Y.; Song, Z. F.; Zhang, Y. L.; Lu, S. C.; Cui, Y. Q.
2017-08-01
Future climate change will directly affect watershed management planning and administrative strategies in the Shiyang River Basin. To explore the influence of climate change on runoff, this paper takes the Shiyang River as the study area and establishes a SWAT basin hydrological model based on data such as DEM, land use, soil, and climate and hydrology records. The SUFI2 algorithm embedded in the SWAT-CUP software is adopted for model calibration. The results show that the SWAT model simulates the runoff process of the Nanying River well: the runoff Nash-Sutcliffe efficiency coefficients for the verification and simulation periods are 0.76 and 0.72, respectively. The relative error between simulation and measurement and the model efficiency coefficient are both within acceptable ranges, which means that the SWAT hydrological model can properly be applied to runoff simulation in the Shiyang River Basin. Meanwhile, analysis of the response of water resources to climate change in the Shiyang River Basin indicates that the impact of climate change on runoff is considerable under the different climate scenarios: annual runoff decreases greatly as precipitation falls and temperature rises, the influence of precipitation on annual runoff is greater than that of temperature, and annual runoff differs markedly across climate scenarios. Overall, this paper aims to provide technical support for water resource development, utilization assessment, and optimal allocation.
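The 0.76 and 0.72 values quoted above are Nash-Sutcliffe efficiency coefficients, NSE = 1 - Σ(O - S)² / Σ(O - Ō)², where O is observed and S is simulated runoff. A small sketch of that computation on made-up runoff series follows; the numbers are illustrative only.

```python
# A minimal sketch of the Nash-Sutcliffe efficiency used to judge runoff
# simulations; the example series are invented, not Shiyang River data.
import numpy as np

def nse(observed, simulated):
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([12.0, 18.0, 25.0, 40.0, 33.0, 21.0, 15.0])   # runoff, m^3/s
sim = np.array([10.0, 20.0, 27.0, 36.0, 30.0, 23.0, 14.0])
print("NSE =", round(nse(obs, sim), 3))   # 1.0 is a perfect match
```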
Formal Verification for a Next-Generation Space Shuttle
NASA Technical Reports Server (NTRS)
Nelson, Stacy D.; Pecheur, Charles; Koga, Dennis (Technical Monitor)
2002-01-01
This paper discusses the verification and validation (V&V) of advanced software used for integrated vehicle health monitoring (IVHM), in the context of NASA's next-generation space shuttle. We survey the current V&V practice and standards used in selected NASA projects, review applicable formal verification techniques, and discuss their integration into existing development practice and standards. We also describe two verification tools, JMPL2SMV and Livingstone PathFinder, that can be used to thoroughly verify diagnosis applications that use model-based reasoning, such as the Livingstone system.
Bridge Health Monitoring Using a Machine Learning Strategy
DOT National Transportation Integrated Search
2017-01-01
The goal of this project was to cast the SHM problem within a statistical pattern recognition framework. Techniques borrowed from speaker recognition, particularly speaker verification, were used as this discipline deals with problems very similar to...
Evaluation and Research for Technology: Not Just Playing Around.
ERIC Educational Resources Information Center
Baker, Eva L.; O'Neil, Harold F., Jr.
2003-01-01
Discusses some of the challenges of technology-based training and education, the role of quality verification and evaluation, and strategies to integrate evaluation into the everyday design of technology-based systems for education and training. (SLD)
Automated biowaste sampling system urine subsystem operating model, part 1
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Mangialardi, J. K.; Rosen, F.
1973-01-01
The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort, which was accomplished through the detailed design, fabrication, and verification testing of an operating model of the subsystem.
NASA Astrophysics Data System (ADS)
Tyagi, N.; Curran, B. H.; Roberson, P. L.; Moran, J. M.; Acosta, E.; Fraass, B. A.
2008-02-01
IMRT often requires delivering small fields which may suffer from electronic disequilibrium effects. The presence of heterogeneities, particularly low-density tissues in patients, complicates such situations. In this study, we report on verification of the DPM MC code for IMRT treatment planning in heterogeneous media, using a previously developed model of the Varian 120-leaf MLC. The purpose of this study is twofold: (a) design a comprehensive list of experiments in heterogeneous media for verification of any dose calculation algorithm and (b) verify our MLC model in these heterogeneous type geometries that mimic an actual patient geometry for IMRT treatment. The measurements have been done using an IMRT head and neck phantom (CIRS phantom) and slab phantom geometries. Verification of the MLC model has been carried out using point doses measured with an A14 slim line (SL) ion chamber inside a tissue-equivalent and a bone-equivalent material using the CIRS phantom. Planar doses using lung and bone equivalent slabs have been measured and compared using EDR films (Kodak, Rochester, NY).
Assume-Guarantee Verification of Source Code with Design-Level Assumptions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.
2004-01-01
Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
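The assume-guarantee rule behind this decomposition states that if component M1 satisfies property P under assumption A, and component M2 guarantees A, then the composition M1 || M2 satisfies P. The toy sketch below illustrates the shape of the rule with components crudely modeled as finite trace sets; real assume-guarantee tools, as in the paper, work on state machines and generate A automatically, so everything here is illustrative.

```python
# A toy illustration of the assume-guarantee rule: components as finite trace
# sets over a shared alphabet, composition as trace intersection.
def satisfies(traces, prop):
    return all(prop(t) for t in traces)

M1 = {("req", "ack"), ("req", "nak"), ("req", "req")}   # environment may misbehave
M2 = {("req", "ack"), ("req", "nak")}                   # never sends two reqs in a row
A = lambda t: all(not (a == b == "req") for a, b in zip(t, t[1:]))  # assumption
P = lambda t: t.count("ack") <= 1                                   # property

under_assumption = {t for t in M1 if A(t)}
premise1 = satisfies(under_assumption, P)   # <A> M1 <P>
premise2 = satisfies(M2, A)                 # <true> M2 <A>
composed = M1 & M2                          # toy composition
print(premise1, premise2, satisfies(composed, P))   # True True True
```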
A Methodology for Formal Hardware Verification, with Application to Microprocessors.
1993-08-29
concurrent programming languages. Proceedings of the NATO Advanced Study Institute on Logics and Models of Concurrent Systems (Colle-sur-Loup, France, 8-19...restricted class of formulas. Bose and Fisher [26] developed a symbolic model checker based on a Cosmos switch-level model. Their modeling approach...verification using SDVS - the method and a case study. 17th Annual Microprogramming Workshop (New Orleans, LA, 30 October-2 November 1984). Published as
Automated Analysis of Stateflow Models
NASA Technical Reports Server (NTRS)
Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsaia, Temesghen; Thirioux, Xavier
2017-01-01
Stateflow is a widely used modeling framework for embedded and cyber-physical systems where control software interacts with physical processes. In this work, we present a fully automated safety verification framework for Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open source toolbox that can be integrated into the existing Mathworks Simulink Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial scale Stateflow models.
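A hierarchical state machine of the kind the compilation targets can be sketched compactly: a substate handles an event if it has a matching transition, otherwise the event bubbles up to its parent, mirroring how nested Stateflow states behave. This is an illustrative representation only, not the toolbox's actual intermediate form, and all state and event names are hypothetical.

```python
# A minimal hierarchical-state-machine sketch: innermost state handles an
# event first, then the event bubbles up the parent chain.
class State:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.transitions = name, parent, {}

    def on(self, event, target):
        self.transitions[event] = target
        return self

class Machine:
    def __init__(self, initial):
        self.current = initial

    def dispatch(self, event):
        state = self.current
        while state is not None:
            if event in state.transitions:
                self.current = state.transitions[event]
                return self.current.name
            state = state.parent        # bubble up to the enclosing state
        return self.current.name        # unhandled event is ignored

operating = State("Operating")
idle = State("Idle", parent=operating)
active = State("Active", parent=operating)
fault = State("Fault")
idle.on("start", active); active.on("stop", idle); operating.on("error", fault)

m = Machine(idle)
print(m.dispatch("start"), m.dispatch("error"))   # Active Fault
```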
Towards Verification and Validation for Increased Autonomy
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra
2017-01-01
This presentation goes over the work we have performed over the last few years on verification and validation of the next-generation onboard collision avoidance system, ACAS X, for commercial aircraft. It describes our work on probabilistic verification and synthesis of the model that ACAS X is based on, and goes on to the validation of that model with respect to actual simulation and flight data. The presentation then moves on to identify the characteristics of ACAS X that are related to autonomy and to discuss the challenges that autonomy poses for verification and validation. All work presented has already been published.
Integrated Disposal Facility FY 2016: ILAW Verification and Validation of the eSTOMP Simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Bacon, Diana H.; Fang, Yilin
2016-05-13
This document describes two sets of simulations carried out to further verify and validate the eSTOMP simulator. In this report, a distinction is made between verification and validation, and the focus is on verifying eSTOMP through a series of published benchmarks on cementitious wastes, and validating eSTOMP based on a lysimeter experiment for the glassified waste. These activities are carried out within the context of a scientific view of validation that asserts that models can only be invalidated, and that model validation (and verification) is a subjective assessment.
A process improvement model for software verification and validation
NASA Technical Reports Server (NTRS)
Callahan, John; Sabolish, George
1994-01-01
We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.
A process improvement model for software verification and validation
NASA Technical Reports Server (NTRS)
Callahan, John; Sabolish, George
1994-01-01
We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.
Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis
NASA Technical Reports Server (NTRS)
Montgomery, Todd L.
1995-01-01
This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently. This has allowed the verification to maintain a high fidelity between the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives into the product development.
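Totally ordered delivery, the guarantee the abstract highlights as achievable without great performance cost, can be illustrated with a small sequencer-based sketch in which receivers deliver messages strictly in global sequence order even when they arrive out of order. RMP itself distributes the ordering role among sites rather than relying on a fixed sequencer, so this is only a conceptual illustration of the delivery guarantee, not the protocol.

```python
# A minimal fixed-sequencer sketch of totally ordered multicast delivery.
import heapq

class Sequencer:
    def __init__(self):
        self.next_seq = 0

    def stamp(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq, msg

class Receiver:
    def __init__(self):
        self.expected, self.pending, self.delivered = 0, [], []

    def receive(self, seq, msg):                 # messages may arrive out of order
        heapq.heappush(self.pending, (seq, msg))
        while self.pending and self.pending[0][0] == self.expected:
            self.delivered.append(heapq.heappop(self.pending)[1])
            self.expected += 1

sequencer, r = Sequencer(), Receiver()
stamped = [sequencer.stamp(m) for m in ("a", "b", "c")]
for s in (stamped[2], stamped[0], stamped[1]):   # simulate reordering in transit
    r.receive(*s)
print(r.delivered)                               # ['a', 'b', 'c']
```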
Post-OPC verification using a full-chip pattern-based simulation verification method
NASA Astrophysics Data System (ADS)
Hung, Chi-Yuan; Wang, Ching-Heng; Ma, Cliff; Zhang, Gary
2005-11-01
In this paper, we evaluated and investigated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e. 0.13um, 0.11um and 90nm, are used in the investigation. Although our OPC technology has proven robust for most cases, the variety of tape-outs with complicated design styles and technologies makes it difficult to develop a "complete or bullet-proof" OPC algorithm that would cover every possible layout pattern. In the evaluation, among dozens of databases, errors were found in some OPC databases by model-based post-OPC checking; such errors could be costly in manufacturing - reticle, wafer processing, and, more importantly, production delay. From such full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinch or bridge), poor CD distribution, and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development as well as post-OPC verification after production release of the OPC. Lastly, we discuss the differentiation of the new pattern-based and conventional edge-based verification tools and summarize the advantages of our new tool and methodology: 1) Accuracy: superior inspection algorithms, down to 1nm accuracy with the new "pattern-based" approach; 2) High-speed performance: pattern-centric algorithms to give the best full-chip inspection efficiency; 3) Powerful analysis capability: flexible error distribution, grouping, interactive viewing, and hierarchical pattern extraction to narrow down to unique patterns/cells.
NASA Technical Reports Server (NTRS)
Cleveland, Paul E.; Parrish, Keith A.
2005-01-01
A thorough and unique thermal verification and model validation plan has been developed for NASA's James Webb Space Telescope. The JWST observatory consists of a large deployed aperture optical telescope passively cooled to below 50 Kelvin along with a suite of several instruments passively and actively cooled to below 37 Kelvin and 7 Kelvin, respectively. Passive cooling to these extremely low temperatures is made feasible by the use of a large deployed high efficiency sunshield and an orbit location at the L2 Lagrange point. Another enabling feature is the scale or size of the observatory that allows for large radiator sizes that are compatible with the expected power dissipation of the instruments and large format Mercury Cadmium Telluride (HgCdTe) detector arrays. This passive cooling concept is simple, reliable, and mission enabling when compared to the alternatives of mechanical coolers and stored cryogens. However, these same large scale observatory features, which make passive cooling viable, also prevent the typical flight configuration fully-deployed thermal balance test that is the keystone to most space missions' thermal verification plans. JWST is simply too large in its deployed configuration to be properly thermal balance tested in the facilities that currently exist. This reality, when combined with a mission thermal concept with little to no flight heritage, has necessitated a unique and alternative approach to thermal system verification and model validation. This paper describes the thermal verification and model validation plan that has been developed for JWST. The plan relies on judicious use of cryogenic and thermal design margin, a completely independent thermal modeling cross check utilizing different analysis teams and software packages, and finally, a comprehensive set of thermal tests that occur at different levels of JWST assembly. After a brief description of the JWST mission and thermal architecture, a detailed description of the three aspects of the thermal verification and model validation plan is presented.
Expert system verification and validation survey, delivery 4
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.
Expert system verification and validation survey. Delivery 2: Survey results
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and industry applications. This is the first task of the series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.
Expert system verification and validation survey. Delivery 5: Revised
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.
Expert system verification and validation survey. Delivery 3: Recommendations
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.
Verification testing of the SUNTEC LPX200 UV Disinfection System to develop the UV delivered dose flow relationship was conducted at the Parsippany-Troy Hills wastewater treatment plant test site in Parsippany, New Jersey. Two lamp modules were mounted parallel in a 6.5-meter lon...
Verification testing of the Bio-Microbics RetroFAST® 0.375 System to determine the reduction of nitrogen in residential wastewater was conducted over a twelve-month period at the Mamquam Wastewater Technology Test Facility, located at the Mamquam Wastewater Treatment Plant. The R...
Impact of an antiretroviral stewardship strategy on medication error rates.
Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E
2018-05-02
The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented that included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error (p < 0.001) and those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients, including prospective audit by staff pharmacists using an antiretroviral medication therapy checklist at the time of order verification, decreased error rates.
Liang, Yun; Kim, Gwe-Ya; Pawlicki, Todd; Mundt, Arno J; Mell, Loren K
2013-03-04
The purpose of this study was to develop dosimetry verification procedures for volumetric-modulated arc therapy (VMAT)-based total marrow irradiation (TMI). VMAT-based TMI plans were generated for three patients: one child and two adults. The planning target volume (PTV) was defined as the bony skeleton, from head to mid-femur, with a 3 mm margin. A planning strategy similar to published studies was adopted. The PTV was divided into head and neck, chest, and pelvic regions, with separate plans, each composed of 2-3 arcs/fields. Multiple isocenters were evenly distributed along the patient's axial direction. The focus of this study is to establish a dosimetry quality assurance procedure involving both two-dimensional (2D) and three-dimensional (3D) volumetric verifications, which is desirable for a large PTV treated with multiple isocenters. The 2D dose verification was performed with film for gamma evaluation, and the absolute point dose was measured with an ion chamber, with attention to hot/cold spots at the junctions between neighboring plans. The 3D volumetric dose verification used commercial dose reconstruction software to reconstruct dose from electronic portal imaging device (EPID) images. The gamma evaluation criteria in both 2D and 3D verification were 5% absolute point dose difference and 3 mm distance to agreement. With film dosimetry, the overall average gamma passing rate was 98.2% and the absolute dose difference was 3.9% in junction areas among the test patients; with volumetric portal dosimetry, the corresponding numbers were 90.7% and 2.4%. A dosimetry verification procedure involving both 2D and 3D measurements was developed for VMAT-based TMI. The initial results are encouraging and warrant further investigation in clinical trials.
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most of the existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides a confidence level in the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can be used as guidance by utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
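The framework's central idea, replacing visual checks with quantitative statistics and an accompanying confidence statement, can be sketched as follows. The specific metrics and the bootstrap interval below are illustrative assumptions, not the statistics prescribed by the paper.

```python
import numpy as np

def load_model_accuracy(measured, simulated, n_boot=2000, seed=0):
    """Quantify load-model accuracy with simple error statistics and a
    bootstrap confidence interval (illustrative sketch, not the paper's method)."""
    measured = np.asarray(measured, float)
    simulated = np.asarray(simulated, float)
    err = simulated - measured

    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err) / np.abs(measured))

    # Bootstrap a 95% confidence interval on the mean absolute error,
    # giving model users a confidence statement rather than a visual check.
    rng = np.random.default_rng(seed)
    boot = [np.mean(np.abs(rng.choice(err, size=err.size, replace=True)))
            for _ in range(n_boot)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return {"rmse": rmse, "mape_%": mape, "mae_95ci": (lo, hi)}
```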
USDA-ARS?s Scientific Manuscript database
Rapid detection and identification of pathogenic microorganisms naturally occurring during food processing are important in developing intervention and verification strategies. In the poultry industry, contamination of poultry meat with foodborne pathogens (especially, Salmonella and Campylobacter) ...
Escobedo, Patricia; Cruz, Tess Boley; Tsai, Kai-Ya; Allem, Jon-Patrick; Soto, Daniel W; Kirkpatrick, Matthew G; Pattarroyo, Monica; Unger, Jennifer B
2017-09-11
Limited information exists about the strategies and methods used on brand marketing websites to transmit pro-tobacco messages to tobacco users and potential users. This study compared age verification methods, themes, interactive activities, and links to social media across tobacco brand websites. It examined 12 tobacco brand websites representing four tobacco product categories: cigarettes, cigar/cigarillos, smokeless tobacco, and e-cigarettes. Website content was analyzed by tobacco product category, and data from all website visits (n = 699) were analyzed. Adult smokers (n = 32) coded websites during a one-year period, indicating whether or not they observed any of 53 marketing themes, seven interactive activities, or five external links to social media sites. Most websites (58%) required online registration before entry, whereas e-cigarette websites used click-through age verification. Compared to cigarette sites, cigar/cigarillo sites were more likely to feature themes related to a "party" lifestyle, and e-cigarette websites were much more likely to feature themes related to harm reduction. Cigarette sites featured greater levels of interactive content compared to other tobacco products. Compared to cigarette sites, cigar/cigarillo sites were more likely to feature activities related to events and music. Compared to cigarette sites, both cigar and e-cigarette sites were more likely to direct visitors to external social media sites. These marketing methods and strategies normalize tobacco use by providing website visitors with positive themes combined with interactive content; this is an area for future research. Moreover, all tobacco products under federal regulatory authority should be required to use more stringent age verification gates. Findings indicate that the Food and Drug Administration (FDA) should require brand websites of all tobacco products under its regulatory authority to use more stringent age verification gates by requiring all visitors to be at least 18 years of age and to register online prior to entry. This is important given that marketing strategies may encourage experimentation with tobacco or deter quit attempts among website visitors. Future research should examine the use of interactive activities and social media on a wide variety of tobacco brand websites, as interactive content is associated with more active information processing. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
Formal verification of software-based medical devices considering medical guidelines.
Daw, Zamira; Cleaveland, Rance; Vetter, Marcus
2014-01-01
Software-based devices have increasingly become an important part of several clinical scenarios. Due to their critical impact on human life, medical devices have very strict safety requirements. It is therefore necessary to apply verification methods to ensure that the safety requirements are met. Verification of software-based devices is commonly limited to the verification of their internal elements without considering the interaction that these elements have with other devices as well as the application environment in which they are used. Medical guidelines define clinical procedures, which contain the necessary information to completely verify medical devices. The objective of this work was to incorporate medical guidelines into the verification process in order to increase the reliability of the software-based medical devices. Medical devices are developed using the model-driven method deterministic models for signal processing of embedded systems (DMOSES). This method uses unified modeling language (UML) models as a basis for the development of medical devices. The UML activity diagram is used to describe medical guidelines as workflows. The functionality of the medical devices is abstracted as a set of actions that is modeled within these workflows. In this paper, the UML models are verified using the UPPAAL model-checker. For this purpose, a formalization approach for the UML models using timed automaton (TA) is presented. A set of requirements is verified by the proposed approach for the navigation-guided biopsy. This shows the capability for identifying errors or optimization points both in the workflow and in the system design of the navigation device. In addition to the above, an open source eclipse plug-in was developed for the automated transformation of UML models into TA models that are automatically verified using UPPAAL. The proposed method enables developers to model medical devices and their clinical environment using clinical workflows as one UML diagram. Additionally, the system design can be formally verified automatically.
Verification testing of the BaySaver Separation System, Model 10K was conducted on a 10 acre drainage basin near downtown Griffin, Georgia. The system consists of two water tight pre-cast concrete manholes and a high-density polyethylene BaySaver Separator Unit. The BaySaver Mod...
Integrated Formal Analysis of Timed-Triggered Ethernet
NASA Technical Reports Server (NTRS)
Dutertre, Bruno; Shankar, Natarajan; Owre, Sam
2012-01-01
We present new results related to the verification of the Timed-Triggered Ethernet (TTE) clock synchronization protocol. This work extends previous verification of TTE based on model checking. We identify a suboptimal design choice in a compression function used in clock synchronization, and propose an improvement. We compare the original design and the improved definition using the SAL model checker.
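For context, clock-synchronization protocols of this kind typically combine the clock readings collected from several synchronization masters into a single fault-tolerant correction value. The sketch below shows a generic Welch-Lynch-style fault-tolerant midpoint, which is only an illustration of the idea; it is not the actual TTE compression function analyzed in the paper.

```python
def fault_tolerant_midpoint(deviations, k):
    """Welch-Lynch-style fault-tolerant midpoint: discard the k largest and k
    smallest readings and average the extremes of what remains.  This is a
    generic clock-correction sketch, not TTE's actual compression function."""
    if len(deviations) <= 2 * k:
        raise ValueError("need more than 2k readings to tolerate k faults")
    trimmed = sorted(deviations)[k:len(deviations) - k]
    return (trimmed[0] + trimmed[-1]) / 2.0

# Example: one wildly faulty reading (+500) is discarded with k = 1.
print(fault_tolerant_midpoint([-3.0, -1.0, 0.0, 2.0, 500.0], k=1))  # -> 0.5
```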
... daily and monthly statistics. The daily and monthly verification processing is broken down into three ... geopotential height and wind using daily statistics from the gdas1 prepbufr files at 00Z, 06Z, 12Z, and 18Z ... Hemisphere, the Southern Hemisphere, and the Tropics. Daily S1 scores from the GFS and NAM models are ...
Land Surface Verification Toolkit (LVT) - A Generalized Framework for Land Surface Model Evaluation
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.; Peters-Lidard, Christa D.; Santanello, Joseph; Harrison, Ken; Liu, Yuqiong; Shaw, Michael
2011-01-01
Model evaluation and verification are key to improving the usage and applicability of simulation models for real-world applications. In this article, the development and capabilities of a formal system for land surface model evaluation, called the Land surface Verification Toolkit (LVT), are described. LVT is designed to provide an integrated environment for systematic land model evaluation and facilitates a range of verification approaches and analysis capabilities. LVT operates across multiple temporal and spatial scales and employs a large suite of in-situ, remotely sensed, and other model and reanalysis datasets in their native formats. In addition to traditional accuracy-based measures, LVT also includes uncertainty and ensemble diagnostics, information theory measures, spatial similarity metrics, and scale decomposition techniques that provide novel ways of performing diagnostic model evaluations. Though LVT was originally designed to support the land surface modeling and data assimilation framework known as the Land Information System (LIS), it also supports hydrological data products from other, non-LIS environments. In addition, the analysis of diagnostics from various computational subsystems of LIS, including data assimilation, optimization, and uncertainty estimation, is supported within LVT. Together, LIS and LVT provide a robust end-to-end environment for enabling the concepts of model-data fusion for hydrological applications. The evolving capabilities of the LVT framework are expected to facilitate rapid model evaluation efforts and aid the definition and refinement of formal evaluation procedures for the land surface modeling community.
Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael
This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) develop software tools to support code verification analysis; 2) document standard definitions of code verification test problems; and 3) perform code verification assessments (focusing on the error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.
Yang, P C; Zhang, S X; Sun, P P; Cai, Y L; Lin, Y; Zou, Y H
2017-07-10
Objective: To construct Markov models that reflect the reality of prevention and treatment interventions against hepatitis B virus (HBV) infection, simulate the natural history of HBV infection in different age groups, and provide evidence for economic evaluations of hepatitis B vaccination and population-based antiviral treatment in China. Methods: According to the theory and techniques of Markov chains, Markov models of the Chinese HBV epidemic were developed based on national data and related literature from home and abroad, including the settings of the Markov model states, allowable transitions, and initial and transition probabilities. Model construction, operation, and verification were conducted using the software TreeAge Pro 2015. Results: Several types of Markov models were constructed to describe the disease progression of HBV infection acquired in the neonatal period, the perinatal period, or adulthood; the progression of chronic hepatitis B after antiviral therapy; hepatitis B prevention and control in adults; chronic hepatitis B antiviral treatment; and the natural progression of chronic hepatitis B in the general population. The model for newborns was fundamental and included ten states: susceptibility to HBV, HBsAg clearance, immune tolerance, immune clearance, low replication, HBeAg-negative CHB, compensated cirrhosis, decompensated cirrhosis, hepatocellular carcinoma (HCC), and death. The susceptible state was excluded in the perinatal-period model, and the immune tolerance state was excluded in the adulthood model. The model for the general population included only two states, survival and death. Among the five types of models, there were 9 initial states assigned initial probabilities and 27 states with transition probabilities. The results of model verification showed that the probability curves were basically consistent with the HBV epidemic situation in China. Conclusion: The Markov models developed can be used in economic evaluations of hepatitis B vaccination and treatment for the elimination of HBV infection in China, although the model structures and parameters carry uncertainty and are dynamic in nature.
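For readers unfamiliar with Markov cohort models, the sketch below shows the basic mechanics: a state vector is repeatedly multiplied by a one-cycle transition matrix. The three states and the probabilities are invented for illustration; the study's models use ten HBV-specific states with probabilities derived from Chinese national data and the literature, implemented in TreeAge Pro.

```python
import numpy as np

# Illustrative three-state Markov cohort sketch (states and one-year transition
# probabilities are made up for demonstration only).
states = ["chronic_HBV", "cirrhosis", "death"]
P = np.array([
    [0.95, 0.04, 0.01],   # from chronic HBV
    [0.00, 0.90, 0.10],   # from cirrhosis
    [0.00, 0.00, 1.00],   # death is absorbing
])

cohort = np.array([1.0, 0.0, 0.0])   # everyone starts in the chronic-HBV state
for year in range(30):                # run the chain for 30 one-year cycles
    cohort = cohort @ P

print(dict(zip(states, np.round(cohort, 3))))
```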
Assessment of Automated Measurement and Verification (M&V) Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Touzani, Samir; Custodio, Claudine
This report documents the application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings.
NASA Astrophysics Data System (ADS)
Illing, Sebastian; Schuster, Mareike; Kadow, Christopher; Kröner, Igor; Richling, Andy; Grieger, Jens; Kruschke, Tim; Lang, Benjamin; Redl, Robert; Schartner, Thomas; Cubasch, Ulrich
2016-04-01
MiKlip is a project for medium-term climate prediction funded by the German Federal Ministry of Education and Research (BMBF) and aims to create a model system that is able to provide reliable decadal climate forecasts. During the first project phase of MiKlip, the sub-project INTEGRATION, located at Freie Universität Berlin, developed a framework for scientific infrastructures (FREVA). More information about FREVA can be found in EGU2016-13060. An instance of this framework is used as the Central Evaluation System (CES) during the MiKlip project. Throughout the first project phase, various sub-projects developed over 25 analysis tools - so-called plugins - for the CES. The main focus of these plugins is on the evaluation and verification of decadal climate prediction data, but most plugins are not limited to this scope and target a wide range of scientific questions. They range from preprocessing tools like the "LeadtimeSelector", which creates lead-time-dependent time series from decadal hindcast sets, through tracking tools like the "Zykpak" plugin, which can objectively locate and track mid-latitude cyclones, to plugins like "MurCSS" or "SPECS", which calculate deterministic and probabilistic skill metrics. We also integrated some analyses from the Model Evaluation Tools (MET), which were developed at NCAR. We will show the theoretical background, technical implementation strategies, and some interesting results of the evaluation of the MiKlip Prototype decadal prediction system for a selected set of these tools.
A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events
NASA Astrophysics Data System (ADS)
Kholodovsky, V.
2017-12-01
Extreme weather and climate events such as heavy precipitation, heat waves, and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often used independently to model those phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification for benchmarking extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying threshold distributions seasonally, monthly, and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.
2016-10-01
... comes when considering numerous scores and statistics during a preliminary evaluation of the applicability of the fuzzy-verification minimum coverage ... The selection of thresholds with which to generate categorical-verification scores and statistics from the application of both traditional and ... of statistically significant numbers of cases; the latter presents a challenge of limited application for assessment of the forecast models' ability ...
QPF verification using different radar-based analyses: a case study
NASA Astrophysics Data System (ADS)
Moré, J.; Sairouni, A.; Rigo, T.; Bravo, M.; Mercader, J.
2009-09-01
Verification of QPF in NWP models has always been challenging, not only for knowing which scores best quantify a particular skill of a model but also for choosing the most appropriate methodology when comparing forecasts with observations. On the one hand, an objective verification technique can provide conclusions that are not in agreement with those obtained by the "eyeball" method; consequently, QPF can provide valuable information to forecasters in spite of having poor scores. On the other hand, there are difficulties in knowing the "truth", so different results can be achieved depending on the procedures used to obtain the precipitation analysis. The aim of this study is to show the importance of combining different precipitation analyses and verification methodologies to obtain a better knowledge of the skill of a forecasting system. In particular, a short-range precipitation forecasting system based on MM5 at 12 km coupled with LAPS is studied for a local convective precipitation event that took place in the NE Iberian Peninsula on 3 October 2008. For this purpose, a variety of verification methods (dichotomous, recalibration, and object-oriented methods) are used to verify this case study. At the same time, different precipitation analyses, obtained by interpolating radar data using different techniques, are used in the verification process.
Abstraction and Assume-Guarantee Reasoning for Automated Software Verification
NASA Technical Reports Server (NTRS)
Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.
2004-01-01
Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework, and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
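For reference, the simplest non-circular rule that such assume-guarantee frameworks instantiate is the standard one below, where the learning algorithm's job is to construct the assumption A automatically; this is the textbook form of the rule, not a formula quoted from the article.

```latex
% If component M1 satisfies property P under assumption A, and component M2
% guarantees A unconditionally, then the composition satisfies P.
\[
\frac{\langle A \rangle\, M_1\, \langle P \rangle
      \qquad
      \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
\]
```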
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsao, Jeffrey Y.; Trucano, Timothy G.; Kleban, Stephen D.
This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?
An evaluation of reading comprehension of expository text in adults with traumatic brain injury.
Sohlberg, McKay Moore; Griffiths, Gina G; Fickas, Stephen
2014-05-01
This project was conducted to obtain information about reading problems of adults with traumatic brain injury (TBI) with mild-to-moderate cognitive impairments and to investigate how these readers respond to reading comprehension strategy prompts integrated into digital versions of text. Participants from 2 groups, adults with TBI (n = 15) and matched controls (n = 15), read 4 different 500-word expository science passages linked to either a strategy prompt condition or a no-strategy prompt condition. The participants' reading comprehension was evaluated using sentence verification and free recall tasks. The TBI and control groups exhibited significant differences on 2 of the 5 reading comprehension measures: paraphrase statements on a sentence verification task and communication units on a free recall task. Unexpected group differences were noted on the participants' prerequisite reading skills. For the within-group comparison, participants showed significantly higher reading comprehension scores on 2 free recall measures: words per communication unit and type-token ratio. There were no significant interactions. The results help to elucidate the nature of reading comprehension in adults with TBI with mild-to-moderate cognitive impairments and endorse further evaluation of reading comprehension strategies as a potential intervention option for these individuals. Future research is needed to better understand how individual differences influence a person's reading and response to intervention.
Land surface Verification Toolkit (LVT)
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.
2017-01-01
LVT is a framework developed to provide an automated, consolidated environment for systematic land surface model evaluation. It includes support for a range of in-situ, remote-sensing, and other model and reanalysis products, and supports the analysis of outputs from various LIS subsystems, including LIS-DA, LIS-OPT, and LIS-UE. Note: The Land Information System Verification Toolkit (LVT) is a NASA software tool designed to enable the evaluation, analysis, and comparison of outputs generated by the Land Information System (LIS). The LVT software is released under the terms and conditions of the NASA Open Source Agreement (NOSA) Version 1.1 or later.
A Verification System for Distributed Objects with Asynchronous Method Calls
NASA Astrophysics Data System (ADS)
Ahrendt, Wolfgang; Dylla, Maximilian
We present a verification system for Creol, an object-oriented modeling language for concurrent distributed applications. The system is an instance of KeY, a framework for object-oriented software verification, which has so far been applied foremost to sequential Java. Building on KeY's characteristic concepts, such as dynamic logic, sequent calculus, explicit substitutions, and the taclet rule language, the system presented in this paper addresses functional correctness of Creol models featuring local cooperative thread parallelism and global communication via asynchronous method calls. The calculus operates heavily on communication histories, which describe the interfaces of Creol units. Two example scenarios demonstrate the usage of the system.
C formal verification with unix communication and concurrency
NASA Technical Reports Server (NTRS)
Hoover, Doug N.
1990-01-01
The results of a NASA SBIR project are presented, in which CSP-Ariel, a verification system for C programs that use Unix system calls for concurrent programming, interprocess communication, and file input and output, was developed. The project builds on ORA's Ariel C verification system by using the formalism of Hoare's book, Communicating Sequential Processes, to model concurrency and communication. The system runs in ORA's Clio theorem proving environment. The use of CSP to model Unix concurrency is outlined, and the CSP semantics of a simple concurrent program is sketched. Plans for further development of CSP-Ariel are discussed. This paper is presented in viewgraph form.
Hardware proofs using EHDM and the RSRE verification methodology
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Sjogren, Jon A.
1988-01-01
Examined is a methodology for hardware verification developed by the Royal Signals and Radar Establishment (RSRE) in the context of SRI International's Enhanced Hierarchical Design Methodology (EHDM) specification/verification system. The methodology utilizes a four-level specification hierarchy with the following levels: functional level, finite automata model, block model, and circuit level. The properties of a level are proved as theorems in the level below it. This methodology is applied to a 6-bit counter problem and is critically examined. The specifications are written in EHDM's specification language, Extended Special, and the proofs are carried out with the EHDM system. Recommendations are made for improving both the RSRE methodology and the EHDM system.
Software Model Checking Without Source Code
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Ivers, James
2009-01-01
We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and programs that use features (such as pointers, structures, and object-orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
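The predicate-abstraction and refinement cycle that AIR builds on is commonly organized as a counterexample-guided abstraction refinement (CEGAR) loop. The skeleton below is a structural sketch under that assumption, with all helper functions left abstract and caller-supplied; it is not AIR's actual implementation.

```python
def cegar(program, property_, abstract, model_check, is_feasible, refine):
    """Skeleton of the counterexample-guided abstraction refinement (CEGAR)
    loop used by predicate-abstraction tools.  The five function arguments
    are caller-supplied placeholders; `refine` must return a set of new
    predicates discovered from a spurious counterexample."""
    predicates = set()                        # start from the empty abstraction
    while True:
        model = abstract(program, predicates)
        cex = model_check(model, property_)
        if cex is None:
            return "property holds"
        if is_feasible(program, cex):
            return ("property violated", cex)   # real counterexample
        predicates |= refine(program, cex)      # spurious: add predicates, retry
```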
NASA Astrophysics Data System (ADS)
Matsumoto, Jun; Okaya, Shunichi; Igoh, Hiroshi; Kawaguchi, Junichiro
2017-04-01
A new propellant feed system, referred to as a self-pressurized feed system, is proposed for liquid rocket engines. The self-pressurized feed system is a type of gas-pressure feed system; however, the pressurization source is stored in the liquid state to reduce tank volume. The liquid pressurization source is heated and gasified through heat exchange with the hot propellant, using a regenerative cooling strategy. The liquid pressurization source is raised to critical pressure by a pressure booster, referred to as a charger, in order to avoid boiling and improve the heat exchange efficiency. The charger is driven by part of the generated pressurization gas, making the self-pressurized feed system a closed loop. The purpose of this study is to propose a propellant feed system that is lighter and simpler than traditional gas-pressure feed systems. The proposed system can be applied to all liquid rocket engines that use a regenerative cooling strategy. The concept and mathematical models of the self-pressurized feed system are presented first. Experimental results for verification are then shown and compared with the mathematical models.
Environmental Testing Campaign and Verification of Satellite Deimos-2 at INTA
NASA Astrophysics Data System (ADS)
Hernandez, Daniel; Vazquez, Mercedes; Anon, Manuel; Olivo, Esperanza; Gallego, Pablo; Morillo, Pablo; Parra, Javier; Capraro; Luengo, Mar; Garcia, Beatriz; Villacorta, Pablo
2014-06-01
In this paper the environmental test campaign and verification of the DEIMOS-2 (DM2) satellite are presented and described. DM2 will be ready for launch in 2014. Firstly, a short description of the satellite is presented, including its physical characteristics and intended optical performance. DEIMOS-2 is a LEO satellite for Earth observation that will provide high-resolution imaging services for agriculture, civil protection, environmental issues, disaster monitoring, climate change, urban planning, cartography, security, and intelligence. Then, the verification and test campaign carried out on the SM and FM models at INTA is described, including mechanical tests for the SM and climatic, mechanical, and electromagnetic compatibility tests for the FM. In addition, this paper includes centre-of-gravity and moment-of-inertia measurements for both models, and other verification activities carried out in order to ensure the satellite's health during launch and its in-orbit performance.
Simulation-Based Verification of Autonomous Controllers via Livingstone PathFinder
NASA Technical Reports Server (NTRS)
Lindsey, A. E.; Pecheur, Charles
2004-01-01
AI software is often used as a means for providing greater autonomy to automated systems, making them capable of coping with harsh and unpredictable environments. Due in part to the enormous space of possible situations that they aim to address, autonomous systems pose a serious challenge to traditional test-based verification approaches. Efficient verification approaches need to be perfected before these systems can reliably control critical applications. This publication describes Livingstone PathFinder (LPF), a verification tool for autonomous control software. LPF applies state space exploration algorithms to an instrumented testbed, consisting of the controller embedded in a simulated operating environment. Although LPF has focused on applications of NASA's Livingstone model-based diagnosis system, the architecture is modular and adaptable to other systems. This article presents different facets of LPF and experimental results from applying the software to a Livingstone model of the main propulsion feed subsystem for a prototype space vehicle.
1989-07-01
TECHNICAL REPORT HL-89-14: Verification of the Hydrodynamic and Sediment Transport Hybrid Modeling System for Cumberland Sound and Kings Bay Navigation Channel, Georgia ... PERSONAL AUTHOR(S): Granat ... Hydrodynamic results from RMA-2V were used in the numerical sediment transport code STUDH in modeling the interaction of the flow transport and ...
NASA Astrophysics Data System (ADS)
Yankovskii, A. P.
2015-05-01
An indirect verification of a structural model describing the creep of a composite medium reinforced by honeycombs and made of nonlinear hereditary phase materials obeying the Rabotnov theory of creep is presented. It is shown that the structural model proposed is trustworthy and can be used in practical calculations. For different kinds of loading, creep curves for a honeycomb core made of a D16T aluminum alloy are calculated.
The Environmental Technology Verification report discusses the technology and performance of the Predator II, Model 8VADTP123C23CC000 air filter for dust and bioaerosol filtration manufactured by Tri-Dim Filter Corporation. The pressure drop across the filter was 138 Pa clean and...
Action-based verification of RTCP-nets with CADP
NASA Astrophysics Data System (ADS)
Biernacki, Jerzy; Biernacka, Agnieszka; Szpyrka, Marcin
2015-12-01
The paper presents an algorithm for translating coverability graphs of RTCP-nets (real-time coloured Petri nets) into the Aldebaran format. The approach provides the possibility of automatic verification of RTCP-nets using the model checking techniques provided by the CADP toolbox. An actual fire alarm control panel system has been modelled and several of its crucial properties have been verified to demonstrate the usability of the approach.
Observing the earth radiation budget from satellites - Past, present, and a look to the future
NASA Technical Reports Server (NTRS)
House, F. B.
1985-01-01
Satellite measurements of the radiative exchange between the planet earth and space have been the objective of many experiments since the beginning of the space age in the late 1950's. The on-going mission of the Earth Radiation Budget (ERB) experiments has been and will be to consider flight hardware, data handling and scientific analysis methods in a single design strategy. Research and development on observational data has produced an analysis model of errors associated with ERB measurement systems on polar satellites. Results show that the variability of reflected solar radiation from changing meteorology dominates measurement uncertainties. As an application, model calculations demonstrate that measurement requirements for the verification of climate models may be satisfied with observations from one polar satellite, provided there is information on diurnal variations of the radiation budget from the ERBE mission.
Monthly and seasonally verification of precipitation in Poland
NASA Astrophysics Data System (ADS)
Starosta, K.; Linkowska, J.
2009-04-01
The national meteorological service of Poland, the Institute of Meteorology and Water Management (IMWM), joined COSMO (the Consortium for Small-Scale Modelling) in July 2004. In Poland, the COSMO_PL model version 3.5 ran until June 2007; since July 2007, the model version 4.0 has been running. The model runs in an operational mode at 14-km grid spacing, twice a day (00 UTC, 12 UTC). A model with 7-km grid spacing is also run for scientific research. Monthly and seasonal verification of 24-hour (06 UTC - 06 UTC) accumulated precipitation is presented in this paper. The precipitation field of COSMO_LM was verified against the rain gauge network (308 points). Verification was performed for every month and all seasons from December 2007 to December 2008, for three forecast days and selected thresholds: 0.5, 1, 2.5, 5, 10, 20, 25, and 30 mm. The following contingency-table indices were calculated: FBI (frequency bias), POD (probability of detection), PON (probability of detection of non-events), FAR (false alarm ratio), TSS (true skill statistic), HSS (Heidke skill score), and ETS (equitable threat score). Percentile ranks and the ROC (relative operating characteristic) are also presented. The ROC is a graph of the hit rate (Y-axis) against the false alarm rate (X-axis) for different decision thresholds.
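For readers who want to recompute such indices, the helper below evaluates the listed scores from the four cells of a 2x2 contingency table using their usual textbook definitions; the function and the example counts are illustrative and not taken from the study.

```python
def contingency_scores(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 contingency-table scores for a single threshold
    (illustrative helper; formulas follow the usual definitions)."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    pod = a / (a + c)                      # probability of detection
    pon = d / (b + d)                      # probability of detecting non-events
    far = b / (a + b)                      # false alarm ratio
    fbi = (a + b) / (a + c)                # frequency bias
    tss = pod - b / (b + d)                # true skill statistic
    a_random = (a + b) * (a + c) / n       # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)   # equitable threat score
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return dict(POD=pod, PON=pon, FAR=far, FBI=fbi, TSS=tss, ETS=ets, HSS=hss)

# Example with made-up counts for one precipitation threshold.
print(contingency_scores(hits=42, false_alarms=18, misses=11, correct_negatives=250))
```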
Verification of NWP Cloud Properties using A-Train Satellite Observations
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Weeks, C.; Wolff, C.; Bullock, R.; Brown, B.
2011-12-01
Recently, the NCAR Model Evaluation Tools (MET) has been enhanced to incorporate satellite observations for the verification of Numerical Weather Prediction (NWP) cloud products. We have developed tools that match fields spatially (both in the vertical and horizontal dimensions) to compare NWP products with satellite observations. These matched fields provide diagnostic evaluation of cloud macro attributes such as vertical distribution of clouds, cloud top height, and the spatial and seasonal distribution of cloud fields. For this research study, we have focused on using CloudSat, CALIPSO, and MODIS observations to evaluate cloud fields for a variety of NWP fields and derived products. We have selected cases ranging from large, mid-latitude synoptic systems to well-organized tropical cyclones. For each case, we matched the observed cloud field with gridded model and/or derived product fields. CloudSat and CALIPSO observations and model fields were matched and compared in the vertical along the orbit track. MODIS data and model fields were matched and compared in the horizontal. We then use MET to compute the verification statistics to quantify the performance of the models in representing the cloud fields. In this presentation we will give a summary of our comparison and show verification results for both synoptic and tropical cyclone cases.
Aßmann, C
2016-06-01
Besides the large effort involved in field work, the provision of valid databases requires statistical and informational infrastructure to enable long-term access to longitudinal data sets on height, weight, and related issues. To foster use of longitudinal data sets within the scientific community, the provision of valid databases has to address data-protection regulations. It is, therefore, of major importance to prevent the identification of individuals from publicly available databases. To reach this goal, one possible strategy is to provide a synthetic database to the public that allows data-analysis strategies to be pretested. The synthetic databases can be established using multiple imputation tools. Given the approval of the strategy, verification is based on the original data. Multiple imputation by chained equations is illustrated as a way to facilitate the provision of synthetic databases, as it allows a wide range of statistical interdependencies to be captured. Missing values, which typically occur in longitudinal databases because of item non-response, can also be addressed via multiple imputation when providing databases. The provision of synthetic databases using multiple imputation techniques is one possible strategy to ensure data protection, increase the visibility of longitudinal databases, and enhance their analytical potential.
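A minimal sketch of the chained-equations idea is shown below using scikit-learn's IterativeImputer; the toy data, the number of imputations, and the use of this particular library are assumptions for illustration rather than the infrastructure described in the article.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy longitudinal table (rows = subjects, columns = repeated height measurements)
# with item non-response coded as NaN.  Purely synthetic numbers for illustration.
data = np.array([
    [110.2, 116.0, np.nan],
    [108.5, np.nan, 120.1],
    [112.0, 118.3, 124.0],
    [np.nan, 115.1, 121.5],
])

# Chained-equations imputation; drawing from the posterior and varying the seed
# yields multiple completed datasets, the basis both for handling non-response
# and for generating synthetic releases.
completed = [
    IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(data)
    for seed in range(5)
]
print(np.round(np.mean(completed, axis=0), 1))   # average of the 5 imputations
```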
Optimized Temporal Monitors for SystemC
NASA Technical Reports Server (NTRS)
Tabakov, Deian; Rozier, Kristin Y.; Vardi, Moshe Y.
2012-01-01
SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model then can be verified against the properties using runtime or formal verification techniques. In this paper we focus on automated generation of runtime monitors from temporal properties. Our focus is on minimizing runtime overhead, rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
Development of an inpatient operational pharmacy productivity model.
Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M
2015-02-01
An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models: one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models and may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
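A back-of-the-envelope sketch of the weighted-verifications idea and the correlation check is given below; the class names, weights, counts, and series are hypothetical numbers for illustration, not the published complexity weights or study data.

```python
import numpy as np

# Hypothetical complexity weights per medication class and one day's verified
# order counts (illustrative values only).
weights = {"oral_solid": 1.0, "iv_admixture": 2.5, "chemotherapy": 4.0}
verified_orders = {"oral_solid": 310, "iv_admixture": 95, "chemotherapy": 12}

weighted_verifications = sum(weights[c] * n for c, n in verified_orders.items())
worked_hours = 48.0                                   # pharmacist hours that day
print("WV productivity:", weighted_verifications / worked_hours, "per hour")

# Comparing a daily WV series against a comparator workload index with a
# Pearson correlation, analogous to the evaluation described above.
wv_series = np.array([540.0, 612.5, 498.0, 575.5, 630.0])
cmi_series = np.array([1.41, 1.55, 1.32, 1.48, 1.60])
r = np.corrcoef(wv_series, cmi_series)[0, 1]
print("Pearson r:", round(r, 3))
```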
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to report the state of the practice in verification and validation (V&V) of expert systems (ESs) in current NASA and industry applications. This is the first task of a series whose ultimate purpose is to ensure that adequate ES V&V tools and techniques are available for Space Station knowledge-based systems development. The strategy for determining the state of the practice is to check how well each of the known ES V&V issues is being addressed and to what extent they have impacted the development of expert systems.
Wang, Guoqiang; Zhang, Honglin; Zhao, Jiyang; Li, Wei; Cao, Jia; Zhu, Chengjian; Li, Shuhua
2016-05-10
Density functional theory (DFT) investigations revealed that 4-cyanopyridine is capable of homolytically cleaving the B-B σ bond of diborane via cooperative coordination to the two boron atoms of the diborane, generating pyridine boryl radicals. Our experimental verification provides supportive evidence for this new B-B activation mode. With this novel activation strategy, we have experimentally realized the catalytic reduction of azo-compounds to hydrazine derivatives, the deoxygenation of sulfoxides to sulfides, and the reduction of quinones with B2(pin)2 under mild conditions. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
STELLAR: fast and exact local alignments
2011-01-01
Background: Large-scale comparison of genomic sequences requires reliable tools for the search of local alignments. Practical local aligners are in general fast, but heuristic, and hence sometimes miss significant matches. Results: We present here the local pairwise aligner STELLAR that has full sensitivity for ε-alignments, i.e. guarantees to report all local alignments of a given minimal length and maximal error rate. The aligner is composed of two steps, filtering and verification. We apply the SWIFT algorithm for lossless filtering, and have developed a new verification strategy that we prove to be exact. Our results on simulated and real genomic data confirm and quantify the conjecture that heuristic tools like BLAST or BLAT miss a large percentage of significant local alignments. Conclusions: STELLAR is very practical and fast on very long sequences, which makes it a suitable new tool for finding local alignments between genomic sequences under the edit distance model. Binaries are freely available for Linux, Windows, and Mac OS X at http://www.seqan.de/projects/stellar. The source code is freely distributed with the SeqAn C++ library version 1.3 and later at http://www.seqan.de. PMID:22151882
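The ε-alignment criterion that STELLAR guarantees can be stated as a simple predicate on an alignment's length and error count. The check below is an illustrative restatement with assumed parameter names and defaults (5% error rate, minimal length 100); it is not code from the SeqAn implementation.

```python
def is_epsilon_alignment(errors, alignment_length, eps=0.05, min_length=100):
    """Check the eps-alignment criterion described above: the alignment must
    reach a minimal length and keep its error rate (edit errors per alignment
    column) at or below eps.  Parameter names and defaults are illustrative."""
    if alignment_length < min_length:
        return False
    return errors / alignment_length <= eps

print(is_epsilon_alignment(errors=4, alignment_length=120))   # True  (3.3% <= 5%)
print(is_epsilon_alignment(errors=9, alignment_length=120))   # False (7.5% > 5%)
```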
Lagomasino, David; Fatoyinbo, Temilola; Lee, SeungKuk; Feliciano, Emanuelle; Trettin, Carl; Simard, Marc
2016-04-01
Canopy height is one of the strongest predictors of biomass and carbon in forested ecosystems. Additionally, mangrove ecosystems represent one of the most concentrated carbon reservoirs, and they are rapidly degrading as a result of deforestation, development, and hydrologic manipulation. Therefore, accurate Canopy Height Models (CHMs) over mangrove forests can provide crucial information for monitoring and verification protocols. We compared four CHMs derived from independent remotely sensed imagery and identified potential errors and biases between measurement types. CHMs were derived from three spaceborne datasets, Very-High Resolution (VHR) stereophotogrammetry, the TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X), and the Shuttle Radar Topography Mission (SRTM), and from lidar data acquired from an airborne platform. Each dataset exhibited different error characteristics that were related to spatial resolution, sensor sensitivities, and reference frames. Canopies over 10 m were accurately predicted by all CHMs, while the distributions of canopy height were best predicted by the VHR CHM. Depending on the guidelines and strategies needed for monitoring and verification activities, coarse-resolution CHMs could be used to track canopy height at regional and global scales, with finer-resolution imagery used to validate and monitor critical areas undergoing rapid changes.
The 2014 Sandia Verification and Validation Challenge: Problem statement
Hu, Kenneth; Orient, George
2016-01-18
This paper presents a case study in utilizing information from experiments, models, and verification and validation (V&V) to support a decision. It consists of a simple system with data and models provided, plus a safety requirement to assess. The goal is to pose a problem that is flexible enough to allow challengers to demonstrate a variety of approaches, but constrained enough to focus attention on a theme. This was accomplished by providing a good deal of background information in addition to the data, models, and code, while directing the participants' activities with specific deliverables. In this challenge, the theme is how to gather and present evidence about the quality of model predictions in order to support a decision. This case study formed the basis of the 2014 Sandia V&V Challenge Workshop and this resulting special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
Lystrom, David J.
1972-01-01
Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
Time-space modal logic for verification of bit-slice circuits
NASA Astrophysics Data System (ADS)
Hiraishi, Hiromi
1996-03-01
The major goal of this paper is to propose a new modal logic aimed at the formal verification of bit-slice circuits. The new logic is called time-space modal logic, and its major feature is that it can handle two transition relations: one for time transitions and the other for space transitions. As a verification algorithm, a symbolic model checking algorithm for the new logic is shown. This could be applicable to the verification of bit-slice microprocessors of infinite bit width and 1D systolic arrays of infinite length. A simple benchmark result shows the effectiveness of the proposed approach.
Assessing Requirements Quality through Requirements Coverage
NASA Technical Reports Server (NTRS)
Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt
2008-01-01
In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as Modified Condition and Decision Coverage (MC/DC), used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
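As a concrete illustration of the MC/DC metric mentioned above, the toy decision below can be covered with N + 1 = 4 tests in which each condition is shown to independently toggle the outcome; the decision and the test vectors are invented for illustration and are not taken from the paper.

```python
def decision(a: bool, b: bool, c: bool) -> bool:
    """Toy decision used to illustrate MC/DC coverage."""
    return a and (b or c)

# Four tests (N + 1 for N = 3 conditions): each condition independently
# affects the outcome while the others are held fixed.
mcdc_tests = [
    (True,  True,  False),   # outcome True  -- pairs with the next case for `a`
    (False, True,  False),   # outcome False -- only `a` changed
    (True,  False, False),   # outcome False -- pairs with the first case for `b`
    (True,  False, True),    # outcome True  -- pairs with the previous case for `c`
]
for a, b, c in mcdc_tests:
    print((a, b, c), "->", decision(a, b, c))
```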
Abstract Model of the SATS Concept of Operations: Initial Results and Recommendations
NASA Technical Reports Server (NTRS)
Dowek, Gilles; Munoz, Cesar; Carreno, Victor A.
2004-01-01
An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The concept of operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).
Skill of Predicting Heavy Rainfall Over India: Improvement in Recent Years Using UKMO Global Model
NASA Astrophysics Data System (ADS)
Sharma, Kuldeep; Ashrit, Raghavendra; Bhatla, R.; Mitra, A. K.; Iyengar, G. R.; Rajagopal, E. N.
2017-11-01
The quantitative precipitation forecast (QPF) performance for heavy rains is still a challenge, even for the most advanced state-of-the-art high-resolution Numerical Weather Prediction (NWP) modeling systems. This study evaluates the performance of the UK Met Office Unified Model (UKMO) over India for the prediction of high rainfall amounts (>2 and >5 cm/day) during the monsoon period (JJAS) from 2007 to 2015 in short-range forecasts up to Day 3. Among the various modeling upgrades and improvements in the parameterizations during this period, the model horizontal resolution improved from 40 km in 2007 to 17 km in 2015. The skill of short-range rainfall forecasts has improved in the UKMO model in recent years, mainly due to increased horizontal and vertical resolution along with improved physics schemes. Categorical verification carried out using four verification metrics, namely probability of detection (POD), false alarm ratio (FAR), frequency bias (Bias), and the Critical Success Index, indicates that QPF has improved by >29% in POD and >24% in FAR. Additionally, verification scores such as the EDS (Extreme Dependency Score), EDI (Extremal Dependence Index), and SEDI (Symmetric EDI) are used, with special emphasis on the verification of extreme and rare rainfall events. These scores also show an improvement of 60% (EDS) and >34% (EDI and SEDI) during the period of study, suggesting an improved skill in predicting heavy rains.
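For reference, the extreme-dependency scores named above are usually defined from the 2x2 contingency table as follows (a = hits, b = false alarms, c = misses, d = correct negatives, n = a + b + c + d); these are the standard textbook forms rather than formulas quoted from the study.

```latex
% Extreme-dependency family of scores, written in terms of the hit rate
% H = a/(a+c), the false alarm rate F = b/(b+d), and the base rate p = (a+c)/n.
\[
\mathrm{EDS} = \frac{2\ln p}{\ln(a/n)} - 1, \qquad
\mathrm{EDI} = \frac{\ln F - \ln H}{\ln F + \ln H}, \qquad
\mathrm{SEDI} = \frac{\ln F - \ln H - \ln(1-F) + \ln(1-H)}
                     {\ln F + \ln H + \ln(1-F) + \ln(1-H)}.
\]
```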
Mackey, Tim K; Miner, Angela; Cuomo, Raphael E
2015-11-01
The electronic cigarette (e-cigarette) market is maturing into a billion-dollar industry. Its expansion includes new channels of access that have not been sufficiently assessed, including Internet sales of e-cigarettes. This study identifies unique e-cigarette Internet vendor characteristics, including geographic location, promotional strategies, use of social networking, presence/absence of age verification, and consumer warning representation. We performed structured Internet search engine queries and used inclusion/exclusion criteria to identify e-cigarette vendors. We then conducted a content analysis of the characteristics of interest. Our examination yielded 57 e-cigarette Internet vendors, including 54.4% (n=31) that sold exclusively online. The vast majority of websites (96.5%, n=55) were located in the U.S. Vendors used a variety of sales promotion strategies to market e-cigarettes, including 70.2% (n=40) that used more than one social network service (SNS) and 42.1% (n=24) that used more than one promotional sales strategy. Most vendors (68.4%, n=39) displayed one or more health warnings on their website, but often displayed them in smaller font or in their terms and conditions. Additionally, 35.1% (n=20) of vendors did not have any detectable age verification process. E-cigarette Internet vendors are actively engaged in various promotional activities to increase the appeal and presence of their products online. In the absence of FDA regulations specific to the Internet, the e-cigarette e-commerce marketplace is likely to grow. This digital environment poses unique challenges requiring targeted policy-making, including robust online age verification, monitoring of SNS marketing, and greater scrutiny of certain forms of marketing promotional practices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
High-entropy alloys in hexagonal close-packed structure
Gao, Michael C.; Zhang, B.; Guo, S. M.; ...
2015-08-28
The microstructures and properties of high-entropy alloys (HEAs) based on the face-centered cubic and body-centered cubic structures have been studied extensively in the literature, but reports on HEAs in the hexagonal close-packed (HCP) structure are very limited. Using an efficient strategy combining phase diagram inspection, CALPHAD modeling, and ab initio molecular dynamics simulations, a variety of new compositions are suggested that may hold great potential for forming single-phase HCP HEAs comprising rare earth elements and transition metals, respectively. Lastly, experimental verification was carried out on CoFeReRu and CoReRuV using X-ray diffraction, scanning electron microscopy, and energy dispersive spectroscopy.
2016-02-02
... understanding is the experimental verification of a new model of light-induced loss spectra, employing continuum-dressed basis states, which agrees in shape and magnitude with all of our ...
TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM) development
NASA Technical Reports Server (NTRS)
Shimamoto, Mike S.
1993-01-01
The development of an anthropomorphic, undersea manipulator system, the TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM) is described. The TOPS system's design philosophy, which results from NRaD's experience in undersea vehicles and manipulator systems development and operations, is presented. The TOPS design approach, task teams, manipulator, and vision system development and results, conclusions, and recommendations are presented.
The verification test of the Separmatic™ DE Pressure Type Filter System Model 12P-2 was conducted at the UNH Water Treatment Technology Assistance Center (WTTAC) in Durham, New Hampshire. The source water was finished water from the Arthur Rollins Treatment Plant that was pretr...
SPR Hydrostatic Column Model Verification and Validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bettin, Giorgia; Lord, David; Rudeen, David Keith
2015-10-01
A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM, its functional requirements, the model structure and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
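As a rough illustration of the hydrostatic-column idea only (not the SNL HCM itself; fluid densities, heights and the interface pressure below are hypothetical), the wellhead pressure can be estimated by subtracting the hydrostatic head of the stacked fluid columns in the well bore from the pressure at the cavern interface:

```python
G = 9.81  # m/s^2

def wellhead_pressure(p_interface_pa, columns):
    """Estimate wellhead pressure from the pressure at the bottom interface.

    columns: list of (density_kg_m3, height_m) tuples ordered bottom to top,
             e.g. crude oil and nitrogen occupying the well bore.
    """
    p = p_interface_pa
    for rho, h in columns:
        p -= rho * G * h  # each column removes its hydrostatic head going upward
    return p

# Hypothetical example: 600 m of oil below 300 m of compressed nitrogen.
p_top = wellhead_pressure(
    p_interface_pa=12.0e6,
    columns=[(850.0, 600.0), (250.0, 300.0)],  # (density, height): oil, N2
)
print(f"wellhead pressure ~ {p_top/1e6:.2f} MPa")
```

Tracking how such a computed wellhead pressure drifts as the nitrogen interface moves is the kind of comparison against measured data that the validation described above performs.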
NASA's Evolutionary Xenon Thruster (NEXT) Component Verification Testing
NASA Technical Reports Server (NTRS)
Herman, Daniel A.; Pinero, Luis R.; Sovey, James S.
2009-01-01
Component testing is a critical facet of the comprehensive thruster life validation strategy devised by NASA's Evolutionary Xenon Thruster (NEXT) program. Component testing to date has consisted of long-duration high-voltage propellant isolator and high-cycle heater life validation testing. The high-voltage propellant isolator, a heritage design, will be operated under different environmental conditions in the NEXT ion thruster, requiring verification testing. The life test of two NEXT isolators was initiated with comparable voltage and pressure conditions and a higher temperature than measured for the NEXT prototype-model thruster. To date the NEXT isolators have accumulated 18,300 h of operation. Measurements indicate a negligible increase in leakage current over the testing duration to date. NEXT 1/2 in. heaters, whose manufacturing and control processes have heritage, were selected for verification testing based upon the change in physical dimensions resulting in a higher operating voltage as well as potential differences in thermal environment. The heater fabrication processes, developed for the International Space Station (ISS) plasma contactor hollow cathode assembly, were utilized with modification of heater dimensions to accommodate a larger cathode. Cyclic testing of five 1/2 in. diameter heaters was initiated to validate these modified fabrication processes while retaining high-reliability heaters. To date two of the heaters have been cycled to 10,000 cycles and suspended to preserve hardware. Three of the heaters have been cycled to failure giving a B10 life of 12,615 cycles, approximately 6,000 more cycles than the established qualification B10 life of the ISS plasma contactor heaters.
Frame synchronization for the Galileo code
NASA Technical Reports Server (NTRS)
Arnold, S.; Swanson, L.
1991-01-01
Results are reported on the performance of the Deep Space Network's frame synchronizer for the (15,1/4) convolutional code after Viterbi decoding. The threshold is found that optimizes the probability of acquiring true sync within four frames using a strategy that requires next frame verification.
76 FR 40753 - NASA Advisory Council; Aeronautics Committee; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-11
... strategy Verification and Validation of Flight Critical Systems planning update NASA Aeronautics systems analysis and strategic planning It is imperative that this meeting be held on this date to accommodate the... aeronautics community and other persons, research and technical information relevant to program planning...
NASA Astrophysics Data System (ADS)
Zhou, Ning; Yang, Jia; Cheng, Zheng; Chen, Bo; Su, Yong Chun; Shu, Zhan; Zou, Jin
2017-06-01
Solar photovoltaic power generation uses solar cell modules to convert sunlight into DC electric energy. In this paper, an equivalent model of a solar photovoltaic power generation system is built in RTDS. The main circuit of the two-stage PV grid-connected system consists of DC-DC and DC-AC stages. Maximum power point tracking (MPPT) of the PV array is performed by adjusting the duty ratio of the DC-DC converter. The proposed constant voltage/constant reactive power (V/Q) control strategy is successfully implemented for the inverter during grid-connected operation. A closed-loop experiment with the islanding protection device of a photovoltaic power plant on RTDS verifies the correctness of the simulation model, and the experimental verification can be applied to this type of device.
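MPPT by duty-ratio adjustment is commonly realized with a perturb-and-observe loop. The sketch below is an assumption on our part (a generic P&O controller with a toy PV power curve), not the authors' RTDS implementation, but it shows the mechanism the abstract refers to:

```python
def pv_power(duty):
    """Toy PV power curve vs. converter duty ratio; peak near duty = 0.62."""
    return max(0.0, 400.0 - 1800.0 * (duty - 0.62) ** 2)

def perturb_and_observe(duty=0.5, step=0.01, iterations=60):
    """Classic P&O MPPT: keep perturbing the duty ratio in the direction
    that increased the measured power, reverse when power drops."""
    p_prev = pv_power(duty)
    direction = +1
    for _ in range(iterations):
        duty = min(0.95, max(0.05, duty + direction * step))
        p = pv_power(duty)
        if p < p_prev:          # power dropped -> we stepped past the maximum
            direction = -direction
        p_prev = p
    return duty, p_prev

d, p = perturb_and_observe()
print(f"duty ratio ~ {d:.2f}, power ~ {p:.1f} W")
```

In a real converter the "measured power" would come from the PV array's voltage and current sensors rather than an analytic curve.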
Scientific, statistical, practical, and regulatory considerations in design space development.
Debevec, Veronika; Srčič, Stanko; Horvat, Matej
2018-03-01
The quality by design (QbD) paradigm guides the pharmaceutical industry towards improved understanding of products and processes, and at the same time facilitates a high degree of manufacturing and regulatory flexibility throughout the establishment of the design space. This review article presents scientific, statistical and regulatory considerations in design space development. All key development milestones, starting with planning, selection of factors, experimental execution, data analysis, model development and assessment, verification, and validation, and ending with design space submission, are presented and discussed. The focus is especially on frequently ignored topics, like management of factors and CQAs that will not be included in experimental design, evaluation of risk of failure on design space edges, or modeling scale-up strategy. Moreover, development of a design space that is independent of manufacturing scale is proposed as the preferred approach.
NASA Astrophysics Data System (ADS)
Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.
Today's software for aerospace systems is typically very complex. This is due to the increasing number of features as well as the high demand for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive work flow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use the SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements for our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.
2007-02-01
Motivation for Modeling and Simulation Work; The Augmented Generic Engine Model (AGEM); Model Verification and Validation (V&V); Assessment of AGEM V&V.
Definition of ground test for verification of large space structure control
NASA Technical Reports Server (NTRS)
Doane, G. B., III; Glaese, J. R.; Tollison, D. K.; Howsman, T. G.; Curtis, S. (Editor); Banks, B.
1984-01-01
Control theory and design, dynamic system modelling, and simulation of test scenarios are the main ideas discussed. The overall effort is the achievement at Marshall Space Flight Center of a successful ground test experiment of a large space structure. A simplified planar model for ground test verification was developed. The elimination from that model of the uncontrollable rigid body modes was also examined. The hardware/software computation speed was also studied.
The Yes-No Question Answering System and Statement Verification.
ERIC Educational Resources Information Center
Akiyama, M. Michael; And Others
1979-01-01
Two experiments investigated the relationship of verification to the answering of yes-no questions. Subjects verified simple statements or answered simple questions. Various proposals concerning the relative difficulty of answering questions and verifying statements were considered, and a model was proposed. (SW)
A Verification Method for MASOES.
Perozo, N; Aguilar Perozo, J; Terán, O; Molina, H
2013-02-01
MASOES is a 3-agent architecture for designing and modeling self-organizing and emergent systems. This architecture describes the elements, relationships, and mechanisms, both at the individual and the collective levels, that favor the analysis of the self-organizing and emergent phenomenon without mathematically modeling the system. In this paper, a method is proposed for verifying MASOES from the point of view of design in order to study the self-organizing and emergent behaviors of the modeled systems. The verification criteria are set according to what is proposed in MASOES for modeling self-organizing and emergent systems and the principles of the wisdom-of-crowds paradigm and fuzzy cognitive map (FCM) theory. The verification method for MASOES has been implemented in a tool called FCM Designer and has been tested to model a community of free software developers that works under the bazaar style as well as a Wikipedia community in order to study their behavior and determine their self-organizing and emergent capacities.
Towards the Formal Verification of a Distributed Real-Time Automotive System
NASA Technical Reports Server (NTRS)
Endres, Erik; Mueller, Christian; Shadrin, Andrey; Tverdyshev, Sergey
2010-01-01
We present the status of a project which aims at building, and formally and pervasively verifying, a distributed automotive system. The target system is a gate-level model which consists of several interconnected electronic control units with independent clocks. This model is verified against the specification as seen by a system programmer. The automotive system is implemented on several FPGA boards. The pervasive verification is carried out using a combination of interactive theorem proving (Isabelle/HOL) and model checking (LTL).
Verification of Gyrokinetic codes: Theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia; Bottino, Alberto; Görler, Tobias; Sonnendrücker, Eric; Told, Daniel; Villard, Laurent
2017-05-01
In fusion plasmas, the strong magnetic field allows the fast gyro-motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, the gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the subsequent transport. Naturally, these codes require thorough verification and validation. Here, we present a new and generic theoretical framework and specific numerical applications to test the faithfulness of the implemented models to theory and to verify the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which has rarely been done and therefore makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools such as variational formulation of dynamics for systematization of basic GK code's equations to access the limits of their applicability. The verification of the numerical scheme is proposed via the benchmark effort. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC) and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations implemented in the ORB5 and GENE codes using the Lagrangian variational formulation. At the computational level, detailed verifications of global electromagnetic test cases developed from the CYCLONE Base Case are considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm and the printed patterns of those features on masks written by VSB eBeam writers start to show a large deviation from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification which became a necessity for OPC a decade ago, we see the same trend in MDP today. Simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask check, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose margin related hotspots can also be detected by setting a correct detection threshold. In this paper, we will demonstrate GPU-acceleration for geometry processing, and give examples of mask check results and performance data. GPU-acceleration is necessary to make simulation-based mask MDP verification acceptable.
NASA Technical Reports Server (NTRS)
Werner, C. R.; Humphreys, B. T.; Mulugeta, L.
2014-01-01
The Advanced Resistive Exercise Device (ARED) is the resistive exercise device used by astronauts on the International Space Station (ISS) to mitigate bone loss and muscle atrophy due to extended exposure to microgravity (micro g). The Digital Astronaut Project (DAP) has developed a multi-body dynamics model of the ARED and associated biomechanics models for use in spaceflight exercise physiology research and operations. In an effort to advance the maturity and credibility of the ARED model, the DAP performed a verification, validation and credibility (VV and C) assessment of the analyses of the model in accordance with NASA-STD-7009 'Standards for Models and Simulations'.
Logic Model Checking of Time-Periodic Real-Time Systems
NASA Technical Reports Server (NTRS)
Florian, Mihai; Gamble, Ed; Holzmann, Gerard
2012-01-01
In this paper we report on the work we performed to extend the logic model checker SPIN with built-in support for the verification of periodic, real-time embedded software systems, as commonly used in aircraft, automobiles, and spacecraft. We first extended the SPIN verification algorithms to model priority based scheduling policies. Next, we added a library to support the modeling of periodic tasks. This library was used in a recent application of the SPIN model checker to verify the engine control software of an automobile, to study the feasibility of software triggers for unintended acceleration events.
A thermal scale modeling study for Apollo and Apollo applications, volume 1
NASA Technical Reports Server (NTRS)
Shannon, R. L.
1972-01-01
The program for developing and demonstrating the capabilities of thermal scale modeling as a thermal design and verification tool for Apollo and Apollo Applications Projects is reported. The work performed for thermal scale modeling of the STB; the cabin atmosphere/spacecraft cabin wall thermal interface; the closed-loop heat rejection radiator; and the docked module/spacecraft thermal interface is discussed, along with the test facility requirements for thermal scale model testing of AAP spacecraft. It is concluded that thermal scale modeling can be used as an effective thermal design and verification tool to provide data early in a spacecraft development program.
Formal verification of mathematical software
NASA Technical Reports Server (NTRS)
Sutherland, D.
1984-01-01
Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.
NASA Astrophysics Data System (ADS)
Suarez Mullins, Astrid
Terrain-induced gravity waves and rotor circulations have been hypothesized to enhance the generation of submeso motions (i.e., nonstationary shear events with spatial and temporal scales greater than the turbulence scale and smaller than the meso-gamma scale) and to modulate low-level intermittency in the stable boundary layer (SBL). Intermittent turbulence, generated by submeso motions and/or the waves, can affect the atmospheric transport and dispersion of pollutants and hazardous materials. Thus, the study of these motions and the mechanisms through which they impact the weakly to very stable SBL is crucial for improving air quality modeling and hazard predictions. In this thesis, the effects of waves and rotor circulations on submeso and turbulence variability within the SBL are investigated over the moderate terrain of central Pennsylvania using special observations from a network deployed at Rock Springs, PA and high-resolution Weather Research and Forecasting (WRF) model forecasts. The investigation of waves and rotors over central PA is important because 1) the moderate topography of this region is common to most of the eastern US and thus the knowledge acquired from this study can be of significance to a large population, 2) there has been little evidence of complex wave structures and rotors reported for this region, and 3) little is known about the waves and rotors generated by smaller and more moderate topographies. Six case studies exhibiting an array of wave and rotor structures are analyzed. Observational evidence of the presence of complex wave structures, resembling nonstationary trapped gravity waves and downslope windstorms, and complex rotor circulations, resembling trapped and jump-type rotors, is presented. These motions and the mechanisms through which they modulate the SBL are further investigated using high-resolution WRF forecasts. First, the efficacy of the 0.444-km horizontal grid spacing WRF model to reproduce submeso and meso-gamma motions, generated by waves and rotors and hypothesized to impact the SBL, is investigated using a new wavelet-based verification methodology for assessing non-deterministic model skill in the submeso and meso-gamma range to complement standard deterministic measures. This technique allows the verification and/or intercomparison of any two nonstationary stochastic systems without many of the limitations of typical wavelet-based verification approaches (e.g., selection of noise models, testing for significance, etc.). Through this analysis, it is shown that the WRF model largely underestimates the number of small amplitude fluctuations in the small submeso range, as expected, and overestimates the number of small amplitude fluctuations in the meso-gamma range, generally resulting in forecasts that are too smooth. Investigation of the variability for different initialization strategies shows that deterministic wind speed predictions are less sensitive to the choice of initialization strategy than temperature forecasts. Similarly, investigation of the variability for various planetary boundary layer (PBL) parameterizations reveals that turbulent kinetic energy (TKE)-based schemes have an advantage over the non-local schemes for non-deterministic motions. The larger spread in the verification scores for various PBL parameterizations than for initialization strategies indicates that PBL parameterization may play a larger role in modulating the variability of non-deterministic motions in the SBL for these cases.
These results confirm previous findings that have shown WRF to have limited skill forecasting submeso variability for periods greater than ~20 min. The limited skill of the WRF at these scales in these cases is related to the systematic underestimation of the amplitude of observed fluctuations. These results are implemented in the model design and configuration for the investigation of nonstationary waves and rotor structures modulating submeso and mesogamma motions and the SBL. Observations and WRF forecasts of two wave cases characterized by nonstationary waves and rotors are investigated to show the WRF model to have reasonable accuracy forecasting low-level temperature and wind speed in the SBL and to qualitatively produce rotors, similar to those observed, as well as some of the mechanisms modulating their development and evolution. Finally, observations and high-resolution WRF forecasts under different environmental conditions using various initialization strategies are used to investigate the impact of nonlinear gravity waves and rotor structures on the generation of intermittent turbulence and valley transport in the SBL. Evidence of the presence of elevated regions of TKE generated by the complex waves and rotors is presented and investigated using an additional four case studies, exhibiting two synoptic flow regimes and different wave and rotor structures. Throughout this thesis, terrain-induced gravity waves and rotors in the SBL are shown to synergistically interact with the surface cold pool and to enhance low-level turbulence intermittency through the development of submeso and meso-gamma motions. These motions are shown to be an important source of uncertainty for the atmospheric transport and dispersion of pollutants and hazardous materials under very stable conditions. (Abstract shortened by ProQuest.).
Verification of Java Programs using Symbolic Execution and Invariant Generation
NASA Technical Reports Server (NTRS)
Pasareanu, Corina; Visser, Willem
2004-01-01
Software verification is recognized as an important and difficult problem. We present a novel framework, based on symbolic execution, for the automated verification of software. The framework uses annotations in the form of method specifications and loop invariants. We present a novel iterative technique that uses invariant strengthening and approximation for discovering these loop invariants automatically. The technique handles different types of data (e.g. boolean and numeric constraints, dynamically allocated structures and arrays) and it allows for checking universally quantified formulas. Our framework is built on top of the Java PathFinder model checking toolset and it was used for the verification of several non-trivial Java programs.
Formal Verification of Quasi-Synchronous Systems
2015-07-01
The U.S. Environmental Protection Agency, through the Environmental Technology Verification Program, is working to accelerate the acceptance and use of innovative technologies that improve the way the United States manages its environmental problems. As part of this program, the...
The U.S. Environmental Protection Agency, Through the Environmental Technology Verification Program, is working to accelerate the acceptance and use of innovative technologies that improve the way the United States manages its environmental problems. This report documents demons...
The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to facilitate the deployment of innovative or improved environmental technologies and to design efficient processes for conducting performance tests of innovative technologies...
The Evolution of the NASA Commercial Crew Program Mission Assurance Process
NASA Technical Reports Server (NTRS)
Canfield, Amy C.
2016-01-01
In 2010, the National Aeronautics and Space Administration (NASA) established the Commercial Crew Program (CCP) in order to provide human access to the International Space Station and low Earth orbit via the commercial (non-governmental) sector. A particular challenge to NASA has been how to determine that the Commercial Provider's transportation system complies with programmatic safety requirements. The process used in this determination is the Safety Technical Review Board which reviews and approves provider submitted hazard reports. One significant product of the review is a set of hazard control verifications. In past NASA programs, 100% of these safety critical verifications were typically confirmed by NASA. The traditional Safety and Mission Assurance (S&MA) model does not support the nature of the CCP. To that end, NASA S&MA is implementing a Risk Based Assurance process to determine which hazard control verifications require NASA authentication. Additionally, a Shared Assurance Model is also being developed to efficiently use the available resources to execute the verifications.
2002-06-07
Continue to Develop and Refine Emerging Technology • Some of the emerging biometric devices, such as iris scans, facial recognition systems, and speaker verification systems... (976301)
This document provides a general set of guidelines that may be consistently applied for collecting, evaluation, and reporting the costs of technologies tested under the ETV Program. Because of the diverse nature of the technologies and industries covered in this program, each ETV...
A new approach to handling incoming verifications.
Luizzo, Anthony; Roy, Bill; Luizzo, Philip
2016-10-01
Outside requests for data on current or former employees are handled in different ways by healthcare organizations and present considerable liability risks if a corporate policy for handling such risks is not in place. In this article, the authors present a strategy for responsible handling of sensitive information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.
2011-02-01
This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.
NASA's Approach to Software Assurance
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2015-01-01
NASA defines software assurance as: the planned and systematic set of activities that ensure conformance of software life cycle processes and products to requirements, standards, and procedures via quality, safety, reliability, and independent verification and validation. NASA's implementation of this approach to the quality, safety, reliability, security and verification and validation of software is brought together in one discipline, software assurance. Organizationally, NASA has software assurance at each NASA center, a Software Assurance Manager at NASA Headquarters, a Software Assurance Technical Fellow (currently the same person as the SA Manager), and an Independent Verification and Validation Organization with its own facility. An umbrella risk mitigation strategy for safety and mission success assurance of NASA's software, software assurance covers a wide area and is better structured to address the dynamic changes in how software is developed, used, and managed, as well as its increasingly complex functionality. Being flexible, risk based, and prepared for challenges in software at NASA is essential, especially as much of our software is unique for each mission.
Cognitive Bias in Systems Verification
NASA Technical Reports Server (NTRS)
Larson, Steve
2012-01-01
Working definition of cognitive bias: Patterns by which information is sought and interpreted that can lead to systematic errors in decisions. Cognitive bias is used in diverse fields: Economics, Politics, Intelligence, Marketing, to name a few. Attempts to ground cognitive science in physical characteristics of the cognitive apparatus exceed our knowledge. Studies are based on correlations; strict cause and effect is difficult to pinpoint. Effects cited in the paper and discussed here have been replicated many times over, and appear sound. Many biases have been described, but it is still unclear whether they are all distinct. There may only be a handful of fundamental biases, which manifest in various ways. Bias can affect system verification in many ways: Overconfidence -> Questionable decisions to deploy. Availability -> Inability to conceive critical tests. Representativeness -> Overinterpretation of results. Positive Test Strategies -> Confirmation bias. Debiasing at the individual level is very difficult. The potential effect of bias on the verification process can be managed, but not eliminated. Worth considering at key points in the process.
NASA Astrophysics Data System (ADS)
Hager, P.; Czupalla, M.; Walter, U.
2010-11-01
In this paper we report on the development of a dynamic MATLAB SIMULINK® model for the water and electrolyte balance inside the human body. This model is part of an environmentally sensitive dynamic human model for the optimization and verification of environmental control and life support systems (ECLSS) in space flight applications. An ECLSS provides all vital supplies for supporting human life on board a spacecraft. As human space flight today focuses on medium- to long-term missions, the strategy in ECLSS is shifting to closed loop systems. For these systems the dynamic stability and function over long duration are essential. However, the only evaluation and rating methods for ECLSS up to now are either expensive trial and error breadboarding strategies or static and semi-dynamic simulations. In order to overcome this mismatch the Exploration Group at Technische Universität München (TUM) is developing a dynamic environmental simulation, the "Virtual Habitat" (V-HAB). The central element of this simulation is the dynamic and environmentally sensitive human model. The water subsystem simulation of the human model discussed in this paper is of vital importance for the efficiency of possible ECLSS optimizations, as an over- or under-scaled water subsystem would have an adverse effect on the overall mass budget. On the other hand water has a pivotal role in the human organism. Water accounts for about 60% of the total body mass and is educt and product of numerous metabolic reactions. It is a transport medium for solutes and, due to its high evaporation enthalpy, provides the most potent medium for heat load dissipation. In a system engineering approach the human water balance was worked out by simulating the human body's subsystems and their interactions. The body fluids were assumed to reside in three compartments: blood plasma, interstitial fluid and intracellular fluid. In addition, the active and passive transport of water and solutes between those compartments was modeled dynamically. A kidney model regulates the electrolyte concentration in body fluids (osmolality) in narrow confines and a thirst mechanism models the urge to ingest water. A controlled exchange of water and electrolytes with other human subsystems, as well as with the environment, is implemented. Finally, the changes in body composition due to muscle growth are accounted for. The outcome of this is a dynamic water and electrolyte balance, which is capable of representing body reactions like thirst and headaches, as well as heat stroke and collapse, as a response to its work load and environment.
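A heavily simplified sketch of the three-compartment idea (every rate constant, volume and flux below is invented for illustration and bears no relation to the fidelity of the V-HAB model) can be written as a small forward-Euler integration:

```python
def simulate(hours=24.0, dt=0.01, intake=0.10, urine=0.06, insensible=0.04):
    """Toy three-compartment water balance (litres); all constants are invented."""
    P, I, C = 3.0, 11.0, 26.0          # plasma, interstitial, intracellular water
    V_P, V_I, V_C = 3.0, 11.0, 26.0    # nominal compartment volumes
    k_pi, k_ic = 0.8, 0.3              # exchange rate constants (L/h per unit imbalance)
    for _ in range(int(hours / dt)):
        j_pi = k_pi * (P / V_P - I / V_I)   # net plasma -> interstitial flow
        j_ic = k_ic * (I / V_I - C / V_C)   # net interstitial -> intracellular flow
        P += dt * (intake - urine - insensible - j_pi)
        I += dt * (j_pi - j_ic)
        C += dt * j_ic
    return round(P, 2), round(I, 2), round(C, 2)

print(simulate())             # roughly steady when intake matches total losses
print(simulate(urine=0.10))   # net loss: all compartments slowly dehydrate
```

The real model additionally couples these compartments to electrolyte (osmolality) regulation by the kidney, a thirst mechanism, and the thermal and metabolic subsystems described above.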
Li, Yubo; Wang, Lei; Ju, Liang; Deng, Haoyue; Zhang, Zhenzhu; Hou, Zhiguo; Xie, Jiabin; Wang, Yuming; Zhang, Yanjun
2016-04-01
Current studies that evaluate toxicity based on metabolomics have primarily focused on the screening of biomarkers while largely neglecting further verification and biomarker applications. For this reason, we used drug-induced hepatotoxicity as an example to establish a systematic strategy for screening specific biomarkers and applied these biomarkers to evaluate whether drugs have potential hepatotoxicity. Carbon tetrachloride (5 ml/kg), acetaminophen (1500 mg/kg), and atorvastatin (5 mg/kg) were used to establish rat hepatotoxicity models. Fifteen common biomarkers were screened by multivariate statistical analysis and integration analysis based on the metabolomics data. The receiver operating characteristic curve was used to evaluate the sensitivity and specificity of the biomarkers. We obtained 10 specific biomarker candidates with an area under the curve greater than 0.7. Then, a support vector machine model was established by extracting the specific biomarker candidate data from hepatotoxic and non-hepatotoxic drugs; the accuracy of the model was 94.90% (92.86% sensitivity and 92.59% specificity), and the results demonstrated that those ten biomarkers are specific. Six drugs were used to predict hepatotoxicity with the support vector machine model; the prediction results were consistent with the biochemical and histopathological results, demonstrating that the model was reliable. Thus, this support vector machine model can be applied to discriminate between hepatotoxic and non-hepatotoxic drugs. This approach not only presents a new strategy for screening specific biomarkers with greater diagnostic significance but also provides a new evaluation pattern for hepatotoxicity, and it will be a highly useful tool in toxicity estimation and disease diagnoses. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
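The screening-then-classification workflow described above (an AUC filter followed by an SVM) can be illustrated with a minimal sketch. The data below are synthetic and the scikit-learn settings are defaults, so this is only a generic illustration of the approach, not the authors' pipeline:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic metabolomics-style matrix: 60 samples x 15 candidate biomarkers,
# labels 1 = hepatotoxic treatment, 0 = non-hepatotoxic.
y = np.array([1] * 30 + [0] * 30)
X = rng.normal(size=(60, 15))
X[y == 1, :6] += 1.0               # make the first six markers informative

# Step 1: keep candidates whose individual ROC AUC exceeds 0.7.
keep = [j for j in range(X.shape[1]) if roc_auc_score(y, X[:, j]) > 0.7]

# Step 2: support vector machine on the retained markers, judged by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X[:, keep], y, cv=5).mean()
print(f"retained markers: {keep}, CV accuracy ~ {acc:.2f}")
```

New compounds would then be scored by the fitted classifier to flag potential hepatotoxicity, which is the application step the abstract emphasizes.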
NASA Astrophysics Data System (ADS)
Wei, T. B.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.
2016-12-01
In groundwater studies, many researchers have used hydraulic tomography (HT) with field-site pumping tests to estimate the heterogeneous spatial distribution of hydraulic properties, demonstrating that most field-site aquifers exhibit heterogeneous spatial distributions of hydrogeological parameters. Huang et al. [2011] used non-redundant verification analysis, with pumping-well locations changed and observation-well locations fixed in the inverse and forward analyses, to demonstrate the feasibility of estimating the heterogeneous spatial distribution of hydraulic properties of a field-site aquifer with a steady-state model. Previous studies have considered only steady-state, non-redundant verification analysis with changed pumping-well locations and fixed observation-well locations for the inverse and forward analyses. The various combinations of fixed or changed pumping-well locations with fixed observation-well locations (redundant verification) or changed observation-well locations (non-redundant verification), and their influence on the hydraulic tomography method, have not yet been explored. In this study, redundant and non-redundant verification methods are carried out for the forward analysis to examine their influence on the hydraulic tomography method in the transient case. The above is applied to an actual case at the NYUST campus site to demonstrate the effectiveness of hydraulic tomography methods and to confirm the feasibility of the inverse and forward analyses from the results. Keywords: Hydraulic Tomography, Redundant Verification, Heterogeneous, Inverse, Forward
Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...
2014-01-01
This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameters sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution’s sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
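Solution verification of the kind described here often includes an observed-order-of-accuracy check from a systematic mesh refinement study. The numbers below are illustrative only (not results from the RIP model); the formulas are the standard grid-convergence relations:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three systematically refined meshes
    with constant refinement ratio r (coarse -> medium -> fine)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Estimate the mesh-converged value from the two finest solutions."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

# Illustrative peak-temperature results (K) on three meshes, refinement ratio 2:
f1, f2, f3 = 512.0, 506.0, 504.5     # coarse, medium, fine
p = observed_order(f1, f2, f3, r=2.0)
f_exact = richardson_extrapolate(f2, f3, r=2.0, p=p)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_exact:.1f} K")
```

The gap between the fine-mesh result and the extrapolated value then serves as a numerical-error estimate that can be carried into the uncertainty quantification.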
Signal verification can promote reliable signalling.
Broom, Mark; Ruxton, Graeme D; Schaefer, H Martin
2013-11-22
The central question in communication theory is whether communication is reliable, and if so, which mechanisms select for reliability. The primary approach in the past has been to attribute reliability to strategic costs associated with signalling as predicted by the handicap principle. Yet, reliability can arise through other mechanisms, such as signal verification; but the theoretical understanding of such mechanisms has received relatively little attention. Here, we model whether verification can lead to reliability in repeated interactions that typically characterize mutualisms. Specifically, we model whether fruit consumers that discriminate among poor- and good-quality fruits within a population can select for reliable fruit signals. In our model, plants either signal or they do not; costs associated with signalling are fixed and independent of plant quality. We find parameter combinations where discriminating fruit consumers can select for signal reliability by abandoning unprofitable plants more quickly. This self-serving behaviour imposes costs upon plants as a by-product, rendering it unprofitable for unrewarding plants to signal. Thus, strategic costs to signalling are not a prerequisite for reliable communication. We expect verification to more generally explain signal reliability in repeated consumer-resource interactions that typify mutualisms but also in antagonistic interactions such as mimicry and aposematism.
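A toy numerical version of the verification argument (our own simplification with invented payoffs, not the authors' model) compares the net benefit of signalling for rewarding and unrewarding plants when discriminating consumers abandon unrewarding plants after a single visit:

```python
def plant_payoff(signals, rewarding, discriminating, benefit=1.0, cost=3.0,
                 visits_if_kept=8, visits_if_abandoned=1, baseline_visits=1):
    """Expected payoff of one plant in a toy signal-verification game.

    Signalling attracts consumers; each visit is worth `benefit` (e.g. seed dispersal).
    Discriminating consumers verify fruit quality and abandon unrewarding plants
    after a single visit; non-discriminating consumers keep visiting regardless.
    The signalling cost is fixed and independent of plant quality.
    """
    if not signals:
        return baseline_visits * benefit
    visits = visits_if_kept if (rewarding or not discriminating) else visits_if_abandoned
    return visits * benefit - cost

for disc in (False, True):
    for rew in (True, False):
        gain = plant_payoff(True, rew, disc) - plant_payoff(False, rew, disc)
        print(f"discriminating={disc}, rewarding={rew}: net gain from signalling = {gain:+.1f}")
```

With non-discriminating consumers, signalling pays for rewarding and unrewarding plants alike (the signal is unreliable); with discriminating consumers, the by-product cost of early abandonment makes signalling unprofitable for unrewarding plants only, which is the verification-driven reliability the abstract describes.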
Control structural interaction testbed: A model for multiple flexible body verification
NASA Technical Reports Server (NTRS)
Chory, M. A.; Cohen, A. L.; Manning, R. A.; Narigon, M. L.; Spector, V. A.
1993-01-01
Conventional end-to-end ground tests for verification of control system performance become increasingly complicated with the development of large, multiple flexible body spacecraft structures. The expense of accurately reproducing the on-orbit dynamic environment and the attendant difficulties in reducing and accounting for ground test effects limit the value of these tests. TRW has developed a building block approach whereby a combination of analysis, simulation, and test has replaced end-to-end performance verification by ground test. Tests are performed at the component, subsystem, and system level on engineering testbeds. These tests are aimed at authenticating models to be used in end-to-end performance verification simulations: component and subassembly engineering tests and analyses establish models and critical parameters, unit level engineering and acceptance tests refine models, and subsystem level tests confirm the models' overall behavior. The Precision Control of Agile Spacecraft (PCAS) project has developed a control structural interaction testbed with a multibody flexible structure to investigate new methods of precision control. This testbed is a model for TRW's approach to verifying control system performance. This approach has several advantages: (1) no allocation for test measurement errors is required, increasing flight hardware design allocations; (2) the approach permits greater latitude in investigating off-nominal conditions and parametric sensitivities; and (3) the simulation approach is cost effective, because the investment is in understanding the root behavior of the flight hardware and not in the ground test equipment and environment.
NASA Technical Reports Server (NTRS)
Feng, C.; Sun, X.; Shen, Y. N.; Lombardi, Fabrizio
1992-01-01
This paper covers the verification and protocol validation for distributed computer and communication systems using a computer-aided testing approach. Validation and verification make up the so-called process of conformance testing. Protocol applications which pass conformance testing are then checked to see whether they can operate together. This is referred to as interoperability testing. A new comprehensive approach to protocol testing is presented which addresses: (1) modeling for inter-layer representation for compatibility between conformance and interoperability testing; (2) computational improvement to current testing methods by using the proposed model, inclusive of the formulation of new qualitative and quantitative measures and time-dependent behavior; and (3) analysis and evaluation of protocol behavior for interactive testing without extensive simulation.
From Verified Models to Verifiable Code
NASA Technical Reports Server (NTRS)
Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.
NASA Astrophysics Data System (ADS)
Kennedy, J. H.; Bennett, A. R.; Evans, K. J.; Fyke, J. G.; Vargo, L.; Price, S. F.; Hoffman, M. J.
2016-12-01
Accurate representation of ice sheets and glaciers is essential for robust predictions of arctic climate within Earth System models. Verification and Validation (V&V) is a set of techniques used to quantify the correctness and accuracy of a model, which builds developer/modeler confidence, and can be used to enhance the credibility of the model. Fundamentally, V&V is a continuous process because each model change requires a new round of V&V testing. The Community Ice Sheet Model (CISM) development community is actively developing LIVVkit, the Land Ice Verification and Validation toolkit, which is designed to easily integrate into an ice-sheet model's development workflow (on both personal and high-performance computers) to provide continuous V&V testing. LIVVkit is a robust and extensible python package for V&V, which has components for both software V&V (construction and use) and model V&V (mathematics and physics). The model verification component is used, for example, to verify model results against community intercomparisons such as ISMIP-HOM. The model validation component is used, for example, to generate a series of diagnostic plots showing the differences between model results and observations for variables such as thickness, surface elevation, basal topography, surface velocity, surface mass balance, etc. Because many different ice-sheet models are under active development, new validation datasets are becoming available, and new methods of analysing these models are actively being researched, LIVVkit includes a framework to easily extend the model V&V analyses by ice-sheet modelers. This allows modelers and developers to develop evaluations of parameters, implement changes, and quickly see how those changes affect the ice-sheet model and earth system model (when coupled). Furthermore, LIVVkit outputs a portable hierarchical website allowing evaluations to be easily shared, published, and analysed throughout the arctic and Earth system communities.
Glove-based approach to online signature verification.
Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A
2008-06-01
Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel on-line signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors that capture the maximal energy of the glove data matrix A (its principal subspace), so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. The data glove is demonstrated to be an effective high-bandwidth data-entry device for signature verification. This SVD-based signature verification technique is tested and is shown to recognize forged signatures with a false acceptance rate of less than 1.2%.
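A compact numerical sketch of this idea is given below. The glove data are synthetic and the choice of r = 3 is arbitrary (the authors' dataset, channel count and decision threshold are not reproduced); the sketch only shows the two ingredients named in the abstract: the r-principal subspace from the SVD and the principal angles between subspaces.

```python
import numpy as np

def principal_subspace(A, r):
    """Orthonormal basis for the r-dimensional principal subspace spanned by
    the leading left singular vectors of the glove data matrix A."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

def subspace_angles_deg(A, B, r=3):
    """Principal angles (degrees) between the r-principal subspaces of A and B."""
    Ua, Ub = principal_subspace(A, r), principal_subspace(B, r)
    s = np.clip(np.linalg.svd(Ua.T @ Ub, compute_uv=False), -1.0, 1.0)
    return np.degrees(np.arccos(s))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
motion = np.vstack([np.sin(2 * np.pi * 3 * t), np.cos(2 * np.pi * 5 * t), t])  # latent hand motions
mixing = rng.normal(size=(22, 3))                        # 22 glove sensors mixing the motions
genuine = mixing @ motion + 0.05 * rng.normal(size=(22, 200))
repeat  = mixing @ motion + 0.05 * rng.normal(size=(22, 200))                  # same signer
forgery = rng.normal(size=(22, 3)) @ motion + 0.05 * rng.normal(size=(22, 200))  # different dynamics

print("genuine vs repeat :", subspace_angles_deg(genuine, repeat))
print("genuine vs forgery:", subspace_angles_deg(genuine, forgery))
```

Small principal angles indicate the same signer; a threshold on these angles would play the role of the accept/reject decision whose false acceptance rate is quoted above.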
Research of improved banker algorithm
NASA Astrophysics Data System (ADS)
Yuan, Xingde; Xu, Hong; Qiao, Shijiao
2013-03-01
In a multi-process operating system, the system's resource management strategy is a critical global issue, especially when many processes compete for limited resources, since unreasonable scheduling can cause deadlock. The classical solution to the deadlock problem is the banker's algorithm; however, it has its own deficiencies and can only avoid deadlock to a certain extent. This article aims at reducing unnecessary safety checking and then uses a new allocation strategy to improve the banker's algorithm. Through full analysis and example verification of the new allocation strategy, the results show that the improved banker's algorithm obtains a substantial increase in performance.
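For reference, the baseline safety check that the improved algorithm seeks to invoke less often is the classical banker's algorithm. The sketch below uses the standard textbook formulation and illustrative data; it does not reproduce the paper's improved allocation strategy:

```python
def is_safe(available, allocation, maximum):
    """Classical banker's-algorithm safety check.

    available : free units per resource type
    allocation: allocation[i][j] = units of resource j held by process i
    maximum   : maximum[i][j]   = maximum demand of process i for resource j
    Returns True if some ordering lets every process finish (a safe state).
    """
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # process i finishes and releases resources
                finished[i] = True
                progressed = True
    return all(finished)

# Classic textbook example: 5 processes, 3 resource types.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], allocation, maximum))   # True: a safe sequence exists
```

A request is granted only if tentatively allocating it leaves the system in a state for which this check still returns True; reducing how often the check must run is precisely where the proposed improvement claims its performance gain.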
The USEPA has created the Environmental Technology Verification (ETV) Program to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The ETV P2 Metal Finishing Technologies (ETV-MF) Prog...
Verification of Space Weather Forecasts using Terrestrial Weather Approaches
NASA Astrophysics Data System (ADS)
Henley, E.; Murray, S.; Pope, E.; Stephenson, D.; Sharpe, M.; Bingham, S.; Jackson, D.
2015-12-01
The Met Office Space Weather Operations Centre (MOSWOC) provides a range of 24/7 operational space weather forecasts, alerts, and warnings, which provide valuable information on space weather that can degrade electricity grids, radio communications, and satellite electronics. Forecasts issued include arrival times of coronal mass ejections (CMEs), and probabilistic forecasts for flares, geomagnetic storm indices, and energetic particle fluxes and fluences. These forecasts are produced twice daily using a combination of output from models such as Enlil, near-real-time observations, and forecaster experience. Verification of forecasts is crucial for users, researchers, and forecasters to understand the strengths and limitations of forecasters, and to assess forecaster added value. To this end, the Met Office (in collaboration with Exeter University) has been adapting verification techniques from terrestrial weather, and has been working closely with the International Space Environment Service (ISES) to standardise verification procedures. We will present the results of part of this work, analysing forecast and observed CME arrival times, assessing skill using 2x2 contingency tables. These MOSWOC forecasts can be objectively compared to those produced by the NASA Community Coordinated Modelling Center - a useful benchmark. This approach cannot be taken for the other forecasts, as they are probabilistic and categorical (e.g., geomagnetic storm forecasts give probabilities of exceeding levels from minor to extreme). We will present appropriate verification techniques being developed to address these forecasts, such as rank probability skill score, and comparing forecasts against climatology and persistence benchmarks. As part of this, we will outline the use of discrete time Markov chains to assess and improve the performance of our geomagnetic storm forecasts. We will also discuss work to adapt a terrestrial verification visualisation system to space weather, to help MOSWOC forecasters view verification results in near real-time; plans to objectively assess flare forecasts under the EU Horizon 2020 FLARECAST project; and summarise ISES efforts to achieve consensus on verification.
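For the probabilistic, multi-category forecasts mentioned here (e.g. geomagnetic storm levels), one standard measure is the ranked probability score and its skill score against a reference such as climatology. The sketch below uses the common (unnormalized) definitions with invented numbers; it is a generic illustration, not MOSWOC's verification code:

```python
import numpy as np

def rps(forecast_probs, observed_category):
    """Ranked probability score for one forecast over ordered categories."""
    f_cum = np.cumsum(forecast_probs)
    o_cum = np.cumsum(np.eye(len(forecast_probs))[observed_category])
    return float(np.sum((f_cum - o_cum) ** 2))

def rpss(forecasts, observations, reference):
    """Ranked probability skill score relative to a reference forecast
    (e.g. climatology); 1 is perfect, 0 matches the reference, <0 is worse."""
    rps_f = np.mean([rps(f, o) for f, o in zip(forecasts, observations)])
    rps_r = np.mean([rps(reference, o) for o in observations])
    return 1.0 - rps_f / rps_r

# Invented example: storm categories none/minor/moderate/severe.
climatology = [0.70, 0.20, 0.08, 0.02]
forecasts = [[0.50, 0.30, 0.15, 0.05], [0.85, 0.10, 0.04, 0.01], [0.20, 0.40, 0.30, 0.10]]
observations = [1, 0, 2]   # categories that actually occurred
print(f"RPSS vs climatology: {rpss(forecasts, observations, climatology):.2f}")
```

Persistence can be scored the same way by substituting the previous period's observed category for the reference probabilities, giving the second benchmark mentioned in the abstract.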
Design and verification of distributed logic controllers with application of Petri nets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał
2015-12-31
The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified and if the validation is successful, the system can be finally implemented.
Equipment testing and verification of the PCI Membrane Systems Inc. Fyne Process nanofiltration system Model ROP 1434, equipped with a C10 module containing AFC-30 tubular membranes, was conducted from 3/16-5/11/2000 in Barrow, AK. The source water was a moderate alkalinity, moderately...
Design and verification of distributed logic controllers with application of Petri nets
NASA Astrophysics Data System (ADS)
Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika
2015-12-01
The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties using temporal logic and the model checking technique. After that, it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified, and if the validation is successful, the system can finally be implemented.
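As a rough illustration of the kind of state-space exploration that underlies the model checking step, the sketch below encodes a marked place/transition Petri net and enumerates its reachability graph in Python. It is a generic textbook construction with invented place and transition names, not the decomposition or verification procedure used by the authors.

```python
from collections import deque

class PetriNet:
    """Minimal place/transition net: each transition maps input-place token
    demands to output-place token productions."""

    def __init__(self, transitions):
        # transitions: {name: (inputs, outputs)} with token counts per place
        self.transitions = transitions

    def enabled(self, marking, name):
        inputs, _ = self.transitions[name]
        return all(marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, marking, name):
        inputs, outputs = self.transitions[name]
        new = dict(marking)
        for p, n in inputs.items():
            new[p] -= n
        for p, n in outputs.items():
            new[p] = new.get(p, 0) + n
        return new

    def reachable(self, initial):
        """Breadth-first exploration of the reachability graph; this is the
        state space a model checker inspects (it may not terminate for
        unbounded nets)."""
        seen = {frozenset(initial.items())}
        queue = deque([initial])
        while queue:
            marking = queue.popleft()
            yield marking
            for t in self.transitions:
                if self.enabled(marking, t):
                    nxt = self.fire(marking, t)
                    key = frozenset(nxt.items())
                    if key not in seen:
                        seen.add(key)
                        queue.append(nxt)

# Tiny bounded example: idle -> busy -> done.
net = PetriNet({
    "t_start":  ({"p_idle": 1}, {"p_busy": 1}),
    "t_finish": ({"p_busy": 1}, {"p_done": 1}),
})
print(len(list(net.reachable({"p_idle": 1}))))   # 3 reachable markings
```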
Quality Assurance in the Presence of Variability
NASA Astrophysics Data System (ADS)
Lauenroth, Kim; Metzger, Andreas; Pohl, Klaus
Software Product Line Engineering (SPLE) is a reuse-driven development paradigm that has been applied successfully in information system engineering and other domains. Quality assurance of the reusable artifacts of the product line (e.g. requirements, design, and code artifacts) is essential for successful product line engineering. As those artifacts are reused in several products, a defect in a reusable artifact can affect several products of the product line. A central challenge for quality assurance in product line engineering is how to consider product line variability. Since the reusable artifacts contain variability, quality assurance techniques from single-system engineering cannot directly be applied to those artifacts. Therefore, different strategies and techniques have been developed for quality assurance in the presence of variability. In this chapter, we describe those strategies and discuss one of them, the so-called comprehensive strategy, in more detail. The comprehensive strategy aims at checking the quality of all possible products of the product line and thus offers the highest benefits, since it is able to uncover defects in all possible products of the product line. However, the central challenge for applying the comprehensive strategy is the complexity that results from the product line variability and the large number of potential products of a product line. In this chapter, we present one concrete technique that we have developed to implement the comprehensive strategy and address this challenge. The technique is based on model checking technology and allows for a comprehensive verification of domain artifacts against temporal logic properties.
Simulation of Laboratory Tests of Steel Arch Support
NASA Astrophysics Data System (ADS)
Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel; Pacześniowski, Krzysztof
2017-03-01
The total load-bearing capacity of steel arch yielding roadway supports is among their most important characteristics. These values can be obtained in two ways: experimental measurements in a specialized laboratory or computer modelling by FEM. Experimental measurements are significantly more expensive and more time-consuming. A computer model, however, is only valuable if it is properly tuned, which requires verification against experiment. In cooperation between GIG Katowice, VSB-Technical University of Ostrava and the Institute of Geonics ASCR, this verification was carried out successfully. The present article discusses the conditions and results of this verification for static problems. The output is a tuned computer model, which may be used for further calculations to obtain the load-bearing capacity of other types of steel arch supports. The effect of changes in other parameters, such as the material properties of the steel, torque values and friction coefficients, on the behaviour of the investigated steel arch supports can then be determined relatively quickly.
Space Weather Models and Their Validation and Verification at the CCMC
NASA Technical Reports Server (NTRS)
Hesse, Michael
2010-01-01
The Community Coordinated Modeling Center (CCMC) is a US multi-agency activity with a dual mission. With equal emphasis, CCMC strives to provide science support to the international space research community through the execution of advanced space plasma simulations, and it endeavors to support the space weather needs of the US and partners. Space weather support involves a broad spectrum, from designing robust forecasting systems and transitioning them to forecasters, to providing space weather updates and forecasts to NASA's robotic mission operators. All of these activities have to rely on validation and verification of models and their products, so users and forecasters have the means to assign confidence levels to the space weather information. In this presentation, we provide an overview of space weather models resident at CCMC, as well as of validation and verification activities undertaken at CCMC or through the use of CCMC services.
Study of the penetration of a plate made of titanium alloy VT6 with a steel ball
NASA Astrophysics Data System (ADS)
Buzyurkin, A. E.
2018-03-01
The purpose of this work is the development and verification of mathematical relationships, adapted to the LS-DYNA finite element analysis package, describing the deformation and fracture of a titanium plate in a high-speed collision. Using data from experiments on the interaction of a steel ball with a plate made of VT6 titanium alloy, the available constants needed to describe the behavior of the material with the Johnson-Cook relationships were verified, as were the parameters of the fracture model used in the numerical modeling of the collision process. An analysis of experimental data on the interaction of a spherical impactor with the plate showed that the deformation-hardening parameters accepted for the VT6 alloy in the Johnson-Cook model, taken as a first approximation, overestimate the residual velocities of the impactor when piercing the plate.
Earth Science Activities: A Guide to Effective Elementary School Science Teaching.
ERIC Educational Resources Information Center
Kanis, Ira B.; Yasso, Warren E.
The primary emphasis of this book is on new or revised earth science activities that promote concept development rather than mere verification of concepts learned by passive means. Chapter 2 describes philosophies, strategies, methods, and techniques to guide preservice and inservice teachers, school building administrators, and curriculum…
New generation of universal modeling for centrifugal compressors calculation
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
The Universal Modeling method has been in constant use since the mid-1990s. The newest, sixth version of the method is presented below. The flow path configuration of 3D impellers is presented in detail. It is possible to optimize the meridian configuration, including hub/shroud curvatures, axial length, leading edge position, etc. The new model of the vaned diffuser includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees the efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vaned and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths, yet the modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. This sixth version of the developed computer program is already being applied successfully in design practice.
NASA Astrophysics Data System (ADS)
Cianciara, Aleksander
2016-09-01
The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismic emission characteristics, namely the energy of phenomena and the inter-event time. The emission under consideration is understood to be induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying a correct model of the statistical distribution of the data, because such methods do not use the measurement data directly but rather their statistical distributions, e.g., in the method based on hazard analysis or in the one that uses maximum value statistics.
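A minimal SciPy sketch of the described procedure is given below: a two-parameter Weibull model is fitted to (synthetic, placeholder) inter-event times and checked with the Kolmogorov-Smirnov goodness-of-fit test. The data, shape, and scale values are illustrative only; note also that estimating parameters from the same sample makes the nominal KS p-value optimistic.

```python
import numpy as np
from scipy import stats

# Placeholder inter-event times (seconds) standing in for filtered recordings.
rng = np.random.default_rng(42)
inter_event_times = rng.weibull(1.3, size=500) * 40.0

# Fit a two-parameter Weibull model (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(inter_event_times, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted CDF.
ks_stat, p_value = stats.kstest(inter_event_times,
                                stats.weibull_min(shape, loc, scale).cdf)
print(f"shape={shape:.2f}  scale={scale:.1f}  KS={ks_stat:.3f}  p={p_value:.3f}")
```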
Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs.
Vitolo, Claudia; Di Giuseppe, Francesca; D'Andrea, Mirko
2018-01-01
The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package.
Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs
Di Giuseppe, Francesca; D’Andrea, Mirko
2018-01-01
The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package. PMID:29293536
Magnus, Anne; Moodie, Marj L; Ferguson, Megan; Cobiac, Linda J; Liberato, Selma C; Brimblecombe, Julie
2016-04-01
To estimate the cost-effectiveness of fiscal measures applied in remote community food stores for Aboriginal Australians. Six price discount strategies on fruit, vegetables, diet drinks and water were modelled. Baseline diet was measured as 12 months' actual food sales data in three remote Aboriginal communities. Discount-induced changes in food purchases were based on published price elasticity data while the weight of the daily diet was assumed constant. Dietary change was converted to change in sodium and energy intake, and body mass index (BMI) over a 12-month period. Improved lifetime health outcomes, modelled for the remote population of Aboriginal and Torres Strait Islanders, were converted to disability adjusted life years (DALYs) saved using a proportional multistate lifetable model populated with diet-related disease risks and Aboriginal and Torres Strait Islander rates of disease. While dietary change was small, five of the six price discount strategies were estimated as cost-effective, below a $50,000/DALY threshold. Stakeholders are committed to finding ways to reduce important inequalities in health status between Aboriginal and Torres Strait Islanders and non-Indigenous Australians. Price discounts offer potential to improve Aboriginal and Torres Strait Islander health. Verification of these results by trial-based research coupled with consideration of factors important to all stakeholders is needed. © 2015 The Authors.
NASA Astrophysics Data System (ADS)
Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.
2011-12-01
The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (i.e. SNOW17) and rainfall-runoff model (i.e. SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system from input data, model structure, model parameters, and initial states. The goal of the current study is to undertake verification of potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e. RFC, DREAM (Differential Evolution Adaptive Metropolis)) as well as a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB) located on the western side of the Sierra Nevada Mountains in northern California. Hindcasts are verified using both deterministic (i.e. Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (i.e. reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes comparison of the performance of the different optimized parameters and the DA framework as well as assessment of the impact associated with the initial conditions used for streamflow forecasts for the NFARB.
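Two of the deterministic verification statistics named above have simple closed forms. The short sketch below shows them in Python; the streamflow values are hypothetical placeholders, not data from the NFARB study.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect hindcast, 0 is no better
    than forecasting the observed mean."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(observed, simulated):
    """Root mean square error in the units of the observations."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Hypothetical daily streamflow (m^3/s) for one hindcast window.
q_obs = [12.0, 15.5, 30.2, 22.1, 18.7]
q_sim = [11.2, 16.0, 27.8, 23.5, 19.1]
print(nash_sutcliffe(q_obs, q_sim), rmse(q_obs, q_sim))
```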
Model-Based Building Verification in Aerial Photographs.
1987-09-01
In this paper, we have proposed an experimental knowledge-based verification system; the organization for change detection is outlined. Knowledge rules and...
Speed and Accuracy in the Processing of False Statements About Semantic Information.
ERIC Educational Resources Information Center
Ratcliff, Roger
1982-01-01
A standard reaction time procedure and a response signal procedure were used on data from eight experiments on semantic verification. Results suggest that simple models of the semantic verification task that assume a single yes/no dimension on which discrimination is made are not correct. (Author/PN)
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
Model-based engineering for medical-device software.
Ray, Arnab; Jetley, Raoul; Jones, Paul L; Zhang, Yi
2010-01-01
This paper demonstrates the benefits of adopting model-based design techniques for engineering medical device software. Using a patient-controlled analgesic (PCA) infusion pump as a candidate medical device, the authors show how using models to capture design information allows for (i) fast and efficient construction of executable device prototypes, (ii) creation of a standard, reusable baseline software architecture for a particular device family, (iii) formal verification of the design against safety requirements, and (iv) creation of a safety framework that reduces verification costs for future versions of the device software.
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
A High-Level Language for Modeling Algorithms and Their Properties
NASA Astrophysics Data System (ADS)
Akhtar, Sabina; Merz, Stephan; Quinson, Martin
Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically oriented formalisms such as state transition systems. This conceptual gap contributes to hindering the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code, but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the TLC model checker.
Configuration Management Process Assessment Strategy
NASA Technical Reports Server (NTRS)
Henry, Thad
2014-01-01
Purpose: To propose a strategy for assessing the development and effectiveness of configuration management systems within Programs, Projects, and Design Activities performed by technical organizations and their supporting development contractors. Scope: The CM systems of various entities will be assessed depending on project scope (DDT&E), support services, and acquisition agreements. Approach: A model-based assessment structured against the organization's CM requirements, including best-practice maturity criteria. The model is tailored to the entity being assessed, depending on its CM system. The assessment approach provides objective feedback to Engineering and Project Management on the observed maturity state of the CM system versus the ideal state of the configuration management processes and outcomes. It identifies strengths and risks rather than audit "gotchas" (findings/observations); it is used recursively and iteratively throughout the program lifecycle at selected points of need (typical assessment timing is post-PDR/post-CDR); ideal-state criteria and maturity targets are reviewed with the assessed entity prior to an assessment (tailoring), depending on the assessed phase of the CM system; it supports exit success criteria for Preliminary and Critical Design Reviews; and it gives a comprehensive CM system assessment, which ultimately supports configuration verification activities.
General Dynamics (GD) Launch Waveform On-Orbit Performance Report
NASA Technical Reports Server (NTRS)
Briones, Janette C.; Shalkhauser, Mary Jo
2014-01-01
The purpose of this report is to present the results of GD SDR on-orbit performance testing using the launch waveform over TDRSS. The tests include the evaluation of well-tested waveform modes, the operation of RF links that are expected to have high margins, the verification of forward and return link operation (including full duplex), the verification of non-coherent operational modes, and the verification of the radio's at-launch operational frequencies. This report also outlines the launch waveform tests conducted and comparisons to the results obtained from ground testing.
Discrete Spring Model for Predicting Delamination Growth in Z-Fiber Reinforced DCB Specimens
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.; OBrien, T. Kevin
2004-01-01
Beam theory analysis was applied to predict delamination growth in Double Cantilever Beam (DCB) specimens reinforced in the thickness direction with pultruded pins, known as Z-fibers. The specimen arms were modeled as cantilever beams supported by discrete springs, which were included to represent the pins. A bi-linear, irreversible damage law was used to represent Z-fiber damage, the parameters of which were obtained from previous experiments. Closed-form solutions were developed for specimen compliance and displacements corresponding to Z-fiber row locations. A solution strategy was formulated to predict delamination growth, in which the parent laminate mode I critical strain energy release rate was used as the criterion for delamination growth. The solution procedure was coded into FORTRAN 90, giving a dedicated software tool for performing the delamination prediction. Comparison of analysis results with previous analysis and experiment showed good agreement, yielding an initial verification for the analytical procedure.
Discrete Spring Model for Predicting Delamination Growth in Z-Fiber Reinforced DCB Specimens
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.; O'Brien, T. Kevin
2004-01-01
Beam theory analysis was applied to predict delamination growth in DCB specimens reinforced in the thickness direction with pultruded pins, known as Z-fibers. The specimen arms were modeled as cantilever beams supported by discrete springs, which were included to represent the pins. A bi-linear, irreversible damage law was used to represent Z-fiber damage, the parameters of which were obtained from previous experiments. Closed-form solutions were developed for specimen compliance and displacements corresponding to Z-fiber row locations. A solution strategy was formulated to predict delamination growth, in which the parent laminate mode I fracture toughness was used as the criterion for delamination growth. The solution procedure was coded into FORTRAN 90, giving a dedicated software tool for performing the delamination prediction. Comparison of analysis results with previous analysis and experiment showed good agreement, yielding an initial verification for the analytical procedure.
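The bi-linear, irreversible damage law used for the Z-pin springs in the two records above can be illustrated with a small sketch. The formulation below is the generic bilinear cohesive law with secant unloading; the function name and parameter values are placeholders, not the calibrated values obtained from the experiments.

```python
def bilinear_traction(delta, delta_hist, t_peak, delta_0, delta_f):
    """Bi-linear, irreversible cohesive response: linear ramp to the peak
    traction t_peak at opening delta_0, linear softening to zero at delta_f,
    and unloading/reloading along the secant through the worst opening seen
    so far (delta_hist), which makes the damage irreversible.

    Returns (traction, updated delta_hist)."""
    delta_max = max(delta, delta_hist)
    if delta_max <= delta_0:                       # undamaged branch
        k_secant = t_peak / delta_0
    elif delta_max < delta_f:                      # partially damaged branch
        k_secant = t_peak * (delta_f - delta_max) / ((delta_f - delta_0) * delta_max)
    else:                                          # fully failed spring
        k_secant = 0.0
    return k_secant * max(delta, 0.0), delta_max

# Illustrative loading, unloading and reloading of one spring.
hist, params = 0.0, dict(t_peak=100.0, delta_0=0.01, delta_f=0.2)
for d in (0.005, 0.05, 0.02, 0.05, 0.25):
    t, hist = bilinear_traction(d, hist, **params)
    print(f"delta={d:.3f}  traction={t:.1f}  worst opening={hist:.3f}")
```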
Climate: Policy, Modeling, and Federal Priorities (Invited)
NASA Astrophysics Data System (ADS)
Koonin, S.; Department Of Energy Office Of The Under SecretaryScience
2010-12-01
The Administration has set ambitious national goals to reduce our dependence on fossil fuels and reduce anthropogenic greenhouse gas (GHG) emissions. The US and other countries involved in the U.N. Framework Convention on Climate Change continue to work toward a goal of establishing a viable treaty that would encompass limits on emissions and codify actions that nations would take to reduce emissions. These negotiations are informed by the science of climate change and by our understanding of how changes in technology and the economy might affect the overall climate in the future. I will describe the present efforts within the U.S. Department of Energy, and the federal government more generally, to address issues related to climate change. These include state-of-the-art climate modeling and uncertainty assessment, economic and climate scenario planning based on best estimates of different technology trajectories, adaptation strategies for climate change, and monitoring and reporting for treaty verification.
Optimal coordination and control of posture and movements.
Johansson, Rolf; Fransson, Per-Anders; Magnusson, Måns
2009-01-01
This paper presents a theoretical model of stability and coordination of posture and locomotion, together with algorithms for continuous-time quadratic optimization of motion control. Explicit solutions to the Hamilton-Jacobi equation for optimal control of rigid-body motion are obtained by solving an algebraic matrix equation. The stability is investigated with Lyapunov function theory and it is shown that global asymptotic stability holds. It is also shown how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. The solution describes motion strategies of minimum effort and variance. The proposed optimal control is formulated to be suitable as a posture and movement model for experimental validation and verification. The combination of adaptive and optimal control makes this algorithm a candidate for coordination and control of functional neuromuscular stimulation as well as of prostheses. Validation examples with experimental data are provided.
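For the linear-quadratic special case, the algebraic matrix equation referred to above is the continuous-time algebraic Riccati equation, which SciPy can solve directly. The sketch below uses placeholder double-integrator dynamics and weights, not the paper's posture or locomotion model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linearized dynamics: position/velocity of a single axis.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # weighting on posture error and velocity
R = np.array([[0.1]])      # weighting on control effort

# Algebraic Riccati equation  A'P + PA - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # optimal state feedback, u = -K x
print("Feedback gain K =", K)
```

The resulting closed loop is globally asymptotically stable for this linear case, which mirrors the Lyapunov argument made in the abstract at a much simplified level.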
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on the direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features are accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute in the improvement of current standard CMM measuring capabilities. PMID:27754441
NASA Astrophysics Data System (ADS)
Chen, Peng; Liu, Yuwei; Gao, Bingkun; Jiang, Chunlei
2018-03-01
A semiconductor laser with a two-external-cavity feedback structure exhibiting the laser self-mixing interference (SMI) phenomenon is investigated and analyzed. An SMI model for the two feedback directions, based on the Fabry-Perot (F-P) cavity, is derived, and numerical simulations and experimental verification were conducted. Experimental results show that, under weak optical feedback, the SMI signal of the two-external-cavity feedback structure is similar to the sum of two single-cavity SMI signals.
Modeling interfacial fracture in Sierra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang
2013-09-01
This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
Atmospheric transport modelling in support of CTBT verification—overview and basic concepts
NASA Astrophysics Data System (ADS)
Wotawa, Gerhard; De Geer, Lars-Erik; Denier, Philippe; Kalinowski, Martin; Toivonen, Harri; D'Amours, Real; Desiato, Franco; Issartel, Jean-Pierre; Langer, Matthias; Seibert, Petra; Frank, Andreas; Sloan, Craig; Yamazawa, Hiromi
Under the provisions of the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a global monitoring system comprising different verification technologies is currently being set up. The network will include 80 radionuclide (RN) stations distributed all over the globe that measure treaty-relevant radioactive species. While the seismic subsystem cannot distinguish between chemical and nuclear explosions, RN monitoring would provide the "smoking gun" of a possible treaty violation. Atmospheric transport modelling (ATM) will be an integral part of CTBT verification, since it provides a geo-temporal location capability for the RN technology. In this paper, the basic concept for the future ATM software system to be installed at the International Data Centre is laid out. The system is based on the operational computation of multi-dimensional source-receptor sensitivity fields for all RN samples by means of adjoint tracer transport modelling. While the source-receptor matrix methodology has already been applied in the past, the system that we suggest will be unique and unprecedented, since it is global, real-time and aims at uncovering source scenarios that are compatible with measurements. Furthermore, it has to deal with source dilution ratios that are by orders of magnitude larger than in typical transport model applications. This new verification software will need continuous scientific attention, and may well provide a prototype system for future applications in areas of environmental monitoring, emergency response and verification of other international agreements and treaties.
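For a fixed set of samples, the source-receptor matrix idea reduces to a linear relation between candidate source terms and predicted sample activities. The toy sketch below, with random placeholder sensitivities and an invented release, shows the forward product and a bare least-squares fit; it only illustrates the concept and is nothing like the operational IDC system, where the inversion is underdetermined and needs regularization, non-negativity, and prior information.

```python
import numpy as np

rng = np.random.default_rng(0)

# srs[i, j]: sensitivity of radionuclide sample i to a unit release in
# candidate source cell j, as produced by adjoint (backward) transport runs.
# Random numbers stand in for real model output here.
srs = rng.random((5, 100))
true_source = np.zeros(100)
true_source[42] = 2.0e12          # hypothetical release strength

# Forward relation: predicted sample activities are a matrix-vector product.
predicted = srs @ true_source
measured = predicted * (1.0 + 0.01 * rng.standard_normal(5))

# Unregularized least squares: one of many source scenarios compatible with
# the measurements (illustration only).
estimate, *_ = np.linalg.lstsq(srs, measured, rcond=None)
print("residual norm:", np.linalg.norm(srs @ estimate - measured))
```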
Reactive system verification case study: Fault-tolerant transputer communication
NASA Technical Reports Server (NTRS)
Crane, D. Francis; Hamory, Philip J.
1993-01-01
A reactive program is one which engages in an ongoing interaction with its environment. A system which is controlled by an embedded reactive program is called a reactive system. Examples of reactive systems are aircraft flight management systems, bank automatic teller machine (ATM) networks, airline reservation systems, and computer operating systems. Reactive systems are often naturally modeled (for logical design purposes) as a composition of autonomous processes which progress concurrently and which communicate to share information and/or to coordinate activities. Formal (i.e., mathematical) frameworks for system verification are tools used to increase the users' confidence that a system design satisfies its specification. A framework for reactive system verification includes formal languages for system modeling and for behavior specification and decision procedures and/or proof-systems for verifying that the system model satisfies the system specifications. Using the Ostroff framework for reactive system verification, an approach to achieving fault-tolerant communication between transputers was shown to be effective. The key components of the design, the decoupler processes, may be viewed as discrete-event-controllers introduced to constrain system behavior such that system specifications are satisfied. The Ostroff framework was also effective. The expressiveness of the modeling language permitted construction of a faithful model of the transputer network. The relevant specifications were readily expressed in the specification language. The set of decision procedures provided was adequate to verify the specifications of interest. The need for improved support for system behavior visualization is emphasized.
Verification of the NWP models operated at ICM, Poland
NASA Astrophysics Data System (ADS)
Melonek, Malgorzata
2010-05-01
The Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw (ICM) started its activity in the field of NWP in May 1997. Since then, numerical weather forecasts covering Central Europe have been routinely published on our publicly available website. The first NWP model used at ICM was the hydrostatic Unified Model developed by the UK Meteorological Office, run in a mesoscale version with a horizontal resolution of 17 km and 31 vertical levels. At present two non-hydrostatic NWP models are running in a quasi-operational regime. The main new UM model, with 4 km horizontal resolution, 38 vertical levels and a forecast range of 48 hours, is run four times a day. The second, the COAMPS model (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by the US Naval Research Laboratory, configured with three nested grids (with corresponding resolutions of 39 km, 13 km and 4.3 km, and 30 vertical levels), is run twice a day (for 00 and 12 UTC). The second grid covers Central Europe and has a forecast range of 84 hours. Results of both NWP models, i.e. COAMPS computed on the 13 km grid and UM, are verified against observations from Polish synoptic stations. Verification uses surface observations and nearest-grid-point forecasts. The following meteorological elements are verified: air temperature at 2 m, mean sea level pressure, wind speed and wind direction at 10 m, and 12-hour accumulated precipitation. Different statistical indices are presented. For continuous variables, the Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are computed in 6-hour intervals. In the case of precipitation, contingency tables for different thresholds are computed and some of the verification scores, such as FBI, ETS, POD and FAR, are graphically presented. The verification sample covers nearly one year.
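The categorical precipitation scores listed above are simple functions of the 2x2 contingency table for a given threshold. A minimal Python sketch with the usual definitions (counts invented for illustration) is:

```python
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """Standard scores from a 2x2 contingency table for one precipitation
    threshold."""
    total = hits + false_alarms + misses + correct_negatives
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    fbi = (hits + false_alarms) / (hits + misses)  # frequency bias index
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"POD": pod, "FAR": far, "FBI": fbi, "ETS": ets}

print(categorical_scores(hits=42, false_alarms=18, misses=11,
                         correct_negatives=290))
```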
Collaborative Localization and Location Verification in WSNs
Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang
2015-01-01
Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948
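As an illustration of the virtual force idea, the sketch below nudges a node's position estimate along the lines to its neighbours until the geometric distances match the measured ones. It is a generic refinement loop under assumed 2-D coordinates and invented ranges, not the specific force model, verification, or anchor promotion algorithms proposed in the paper.

```python
import numpy as np

def refine_position(estimate, neighbor_positions, measured_distances,
                    step=0.05, iterations=300):
    """Move the estimate along a resultant 'virtual force': each neighbor
    pushes or pulls the node so that the geometric distance approaches the
    measured (e.g. ranging-derived) distance."""
    pos = np.asarray(estimate, dtype=float)
    for _ in range(iterations):
        force = np.zeros(2)
        for p, d in zip(neighbor_positions, measured_distances):
            vec = pos - np.asarray(p, dtype=float)
            dist = np.linalg.norm(vec) + 1e-9
            # Positive when the node is too close (push out), negative when
            # too far (pull in); magnitude grows with the mismatch.
            force += (d - dist) * vec / dist
        pos = pos + step * force
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [7.07, 7.07, 7.07]          # consistent with a node near (5, 5)
print(refine_position((1.0, 1.0), anchors, ranges))
```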
The Effect of Mystery Shopper Reports on Age Verification for Tobacco Purchases
KREVOR, BRAD S.; PONICKI, WILLIAM R.; GRUBE, JOEL W.; DeJONG, WILLIAM
2011-01-01
Mystery shops (MS) involving attempted tobacco purchases by young buyers have been employed to monitor retail stores’ performance in refusing underage sales. Anecdotal evidence suggests that MS visits with immediate feedback to store personnel can improve age verification. This study investigated the impact of monthly and twice-monthly MS reports on age verification. Forty-five Walgreens stores were each visited 20 times by mystery shoppers. The stores were randomly assigned to one of three conditions. Control group stores received no feedback, whereas two treatment groups received feedback communications every visit (twice monthly) or every second visit (monthly) after baseline. Logit regression models tested whether each treatment group improved verification rates relative to the control group. Post-baseline verification rates were higher in both treatment groups than in the control group, but only the stores receiving monthly communications had a significantly greater improvement than control group stores. Verification rates increased significantly during the study period for all three groups, with delayed improvement among control group stores. Communication between managers regarding the MS program may account for the delayed age-verification improvements observed in the control group stores. Encouraging inter-store communication might extend the benefits of MS programs beyond those stores that receive this intervention. PMID:21541874
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marleau, Peter; Brubaker, Erik; Deland, Sharon M.
This report summarizes the discussion and conclusions reached during a table top exercise held at Sandia National Laboratories, Albuquerque on September 3, 2014 regarding a recently described approach for nuclear warhead verification based on the cryptographic concept of a zero-knowledge protocol (ZKP) presented in a recent paper authored by Glaser, Barak, and Goldston. A panel of Sandia National Laboratories researchers, whose expertise includes radiation instrumentation design and development, cryptography, and arms control verification implementation, jointly reviewed the paper and identified specific challenges to implementing the approach as well as some opportunities. It was noted that ZKP as used in cryptography is a useful model for the arms control verification problem, but the direct analogy to arms control breaks down quickly. The ZKP methodology for warhead verification fits within the general class of template-based verification techniques, where a reference measurement is used to confirm that a given object is like another object that has already been accepted as a warhead by some other means. This can be a powerful verification approach, but requires independent means to trust the authenticity of the reference warhead - a standard that may be difficult to achieve, which the ZKP authors do not directly address. Despite some technical challenges, the concept of last-minute selection of the pre-loads and equipment could be a valuable component of a verification regime.
The effect of mystery shopper reports on age verification for tobacco purchases.
Krevor, Brad S; Ponicki, William R; Grube, Joel W; DeJong, William
2011-09-01
Mystery shops involving attempted tobacco purchases by young buyers have been implemented in order to monitor retail stores' performance in refusing underage sales. Anecdotal evidence suggests that mystery shop visits with immediate feedback to store personnel can improve age verification. This study investigated the effect of monthly and twice-monthly mystery shop reports on age verification. Mystery shoppers visited 45 Walgreens stores 20 times. The stores were randomly assigned to 1 of 3 conditions. Control group stores received no feedback, whereas 2 treatment groups received feedback communications on every visit (twice monthly) or on every second visit (monthly) after baseline. Logit regression models tested whether each treatment group improved verification rates relative to the control group. Postbaseline verification rates were higher in both treatment groups than in the control group, but only the stores receiving monthly communications had a significantly greater improvement compared with the control group stores. Verification rates increased significantly during the study period for all 3 groups, with delayed improvement among control group stores. Communication between managers regarding the mystery shop program may account for the delayed age-verification improvements observed in the control group stores. Encouraging interstore communication might extend the benefits of mystery shop programs beyond those stores that receive this intervention. Copyright © Taylor & Francis Group, LLC
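A hedged sketch of the kind of logit regression described above, fitted with statsmodels on simulated visit-level data; all counts and coefficients are invented for illustration and the actual study data are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 900                                     # e.g. 45 stores x 20 visits
post = rng.integers(0, 2, n)                # post-baseline visit indicator
group = rng.integers(0, 3, n)               # 0 control, 1 monthly, 2 twice-monthly
monthly = (group == 1).astype(int)
twice = (group == 2).astype(int)

# Simulated outcome: baseline verification odds plus a post-baseline lift
# that is larger for the feedback groups (all coefficients invented).
lin = -0.2 + 0.4 * post + 0.8 * post * monthly + 0.5 * post * twice
verified = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Interaction terms capture the treatment-specific post-baseline improvement.
X = sm.add_constant(np.column_stack(
    [post, monthly, twice, post * monthly, post * twice]))
fit = sm.Logit(verified, X).fit(disp=False)
print(fit.summary())
```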
Experimental verification of layout physical verification of silicon photonics
NASA Astrophysics Data System (ADS)
El Shamy, Raghi S.; Swillam, Mohamed A.
2018-02-01
Silicon photonics has been recognized as one of the best platforms for dense integration of photonic integrated circuits (PICs) due to the high refractive index contrast among its materials. Silicon on insulator (SOI) is a widespread photonics technology which supports a variety of devices for many applications. As the photonics market grows, the number of components in PICs increases, which increases the need for an automated physical verification (PV) process. This PV process will assure reliable fabrication of the PICs, as it checks both the manufacturability and the reliability of the circuit. However, PV is challenging in the case of PICs, as it requires running exhaustive electromagnetic (EM) simulations. Our group has recently proposed empirical closed-form models for the directional coupler and waveguide bends in the SOI technology. The models have shown very good agreement with both finite element method (FEM) and finite difference time domain (FDTD) solvers. These models save the huge time cost of 3D EM simulations and can easily be included in any electronic design automation (EDA) flow, as the equation parameters can be extracted directly from the layout. In this paper we present experimental verification of our previously proposed models. SOI directional couplers with different dimensions have been fabricated using electron beam lithography and measured. The measurement results for the fabricated devices have been compared to the derived models and show very good agreement. The matching can reach 100% by calibrating certain parameters in the model.
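For orientation, the textbook coupled-mode relation for a lossless directional coupler is sketched below. This is not the authors' fitted closed-form SOI model; the coupling coefficient and length are placeholder values chosen only to show the 3 dB condition.

```python
import numpy as np

def coupler_split(kappa, length):
    """Lossless two-waveguide directional coupler in basic coupled-mode
    theory: returns (through, cross) power fractions for coupling
    coefficient kappa (1/um) over coupling length (um)."""
    cross = np.sin(kappa * length) ** 2
    return 1.0 - cross, cross

# Illustrative values only: a 3 dB split needs kappa * length = pi / 4.
through, cross = coupler_split(kappa=np.pi / 4 / 15.0, length=15.0)
print(through, cross)   # ~0.5, ~0.5
```

An empirical layout-driven model of the kind the paper verifies would replace kappa with a fitted function of gap, waveguide width, and wavelength extracted from the layout.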
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments, and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Because of time-critical targets and multiple-scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near-real-time autonomous trajectory optimization technique is presented to minimize flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization followed by dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of the proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. The contributions of this research are thus the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics, control limitations, and heating, waypoint, and no-fly zone constraints.
NASA Technical Reports Server (NTRS)
Lagomasino, David; Fatoyinbo, Temilola; Lee, SeungKuk; Feliciano, Emanuelle; Trettin, Carl; Simard, Marc
2016-01-01
Canopy height is one of the strongest predictors of biomass and carbon in forested ecosystems. Additionally, mangrove ecosystems represent one of the most concentrated carbon reservoirs that are rapidly degrading as a result of deforestation, development, and hydrologic manipulation. Therefore, the accuracy of Canopy Height Models (CHM) over mangrove forest can provide crucial information for monitoring and verification protocols. We compared four CHMs derived from independent remotely sensed imagery and identified potential errors and bias between measurement types. CHMs were derived from three spaceborne datasets: Very-High Resolution (VHR) stereophotogrammetry, the TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X), and the Shuttle Radar Topography Mission (SRTM), as well as lidar data acquired from an airborne platform. Each dataset exhibited different error characteristics that were related to spatial resolution, sensitivities of the sensors, and reference frames. Canopies over 10 meters were accurately predicted by all CHMs, while the distributions of canopy height were best predicted by the VHR CHM. Depending on the guidelines and strategies needed for monitoring and verification activities, coarse resolution CHMs could be used to track canopy height at regional and global scales, with finer resolution imagery used to validate and monitor critical areas undergoing rapid changes.
Lagomasino, David; Fatoyinbo, Temilola; Lee, SeungKuk; Feliciano, Emanuelle; Trettin, Carl; Simard, Marc
2017-01-01
Canopy height is one of the strongest predictors of biomass and carbon in forested ecosystems. Additionally, mangrove ecosystems represent one of the most concentrated carbon reservoirs that are rapidly degrading as a result of deforestation, development, and hydrologic manipulation. Therefore, the accuracy of Canopy Height Models (CHM) over mangrove forest can provide crucial information for monitoring and verification protocols. We compared four CHMs derived from independent remotely sensed imagery and identified potential errors and bias between measurement types. CHMs were derived from three spaceborne datasets: Very-High Resolution (VHR) stereophotogrammetry, the TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X), and the Shuttle Radar Topography Mission (SRTM), as well as lidar data acquired from an airborne platform. Each dataset exhibited different error characteristics that were related to spatial resolution, sensitivities of the sensors, and reference frames. Canopies over 10 m were accurately predicted by all CHMs, while the distributions of canopy height were best predicted by the VHR CHM. Depending on the guidelines and strategies needed for monitoring and verification activities, coarse resolution CHMs could be used to track canopy height at regional and global scales, with finer resolution imagery used to validate and monitor critical areas undergoing rapid changes. PMID:29629207
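Comparing co-registered canopy height models typically reduces to bias and RMSE statistics, overall and for the tall-canopy subset highlighted above. A minimal NumPy sketch of such a comparison is given below; the array names, the 10 m threshold default, and the assumption that the rasters are already co-registered are illustrative simplifications.

```python
import numpy as np

def chm_agreement(reference, candidate, min_height=10.0):
    """Bias and RMSE between two co-registered canopy height models (m),
    overall and restricted to reference canopies above min_height."""
    ref = np.ravel(np.asarray(reference, dtype=float))
    cand = np.ravel(np.asarray(candidate, dtype=float))
    ok = np.isfinite(ref) & np.isfinite(cand)
    diff = cand[ok] - ref[ok]
    out = {"bias": float(diff.mean()),
           "rmse": float(np.sqrt((diff ** 2).mean()))}
    tall = ok & (ref >= min_height)
    d_tall = cand[tall] - ref[tall]
    out["tall_bias"] = float(d_tall.mean())
    out["tall_rmse"] = float(np.sqrt((d_tall ** 2).mean()))
    return out

# Usage sketch: lidar-derived heights as the reference, a spaceborne CHM as
# the candidate (both hypothetical arrays of equal shape).
print(chm_agreement([8.0, 12.5, 21.0, 15.0], [7.1, 13.2, 19.4, 16.0]))
```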
The Environmental Technology Verification (ETV) Program, beginning as an initiative of the U.S. Environmental Protection Agency (EPA) in 1995, verifies the performance of commercially available, innovative technologies that can be used to measure environmental quality. The ETV p...
NASA Technical Reports Server (NTRS)
Probst, D.; Jensen, L.
1991-01-01
Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We answer the question, is it possible to control state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems, and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully-implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software, which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
Verification of Gyrokinetic codes: theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia
2016-10-01
In fusion plasmas the strong magnetic field allows the fast gyro motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, gyrokinetic (GK) codes play a major role in the understanding of the development and saturation of turbulence and in the prediction of the consequent transport. We present a new and generic theoretical framework and specific numerical applications to test the validity and the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools, such as the variational formulation of dynamics, to systematize the basic equations of GK codes and assess the limits of their applicability. Indirect verification of the numerical scheme is proposed via a benchmark process. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC) and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations using the generic variational formulation. Then, we derive the models implemented in ORB5 and GENE and place them within this hierarchy. At the computational level, detailed verification of global electromagnetic test cases based on the CYCLONE case is considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.
An Emotional ANN (EANN) approach to modeling rainfall-runoff process
NASA Astrophysics Data System (ADS)
Nourani, Vahid
2017-01-01
This paper presents the first hydrological implementation of the Emotional Artificial Neural Network (EANN), a new generation of artificial-intelligence-based models, for daily rainfall-runoff (r-r) modeling of watersheds. Inspired by the neurophysiological structure of the brain, an EANN includes, in addition to conventional weights and biases, simulated emotional parameters aimed at improving the network learning process. An EANN trained by a modified version of the back-propagation (BP) algorithm was applied to single- and multi-step-ahead runoff forecasting for two watersheds with distinct climatic conditions. To evaluate the ability of the EANN when trained with smaller training data sets, three data division strategies with different numbers of training samples were also considered. The overall comparison of the obtained r-r modeling results indicates that the EANN can outperform the conventional feed forward neural network (FFNN) model by up to 13% and 34% in terms of training and verification efficiency criteria, respectively. The superiority of the EANN over the classic ANN is due to its ability to recognize and distinguish dry (rainless days) and wet (rainy days) situations using the hormonal parameters of the artificial emotional system.
Safety Verification of the Small Aircraft Transportation System Concept of Operations
NASA Technical Reports Server (NTRS)
Carreno, Victor; Munoz, Cesar
2005-01-01
A critical factor in the adoption of any new aeronautical technology or concept of operation is safety. Traditionally, safety is accomplished through a rigorous process that involves human factors, low- and high-fidelity simulations, and flight experiments. As this process is usually performed on final products or functional prototypes, concept modifications resulting from this process are very expensive to implement. This paper describes an approach to system safety that can take place at early stages of a concept design. It is based on a set of mathematical techniques and tools known as formal methods. In contrast to testing and simulation, formal methods provide the capability of exhaustive state exploration analysis. We present the safety analysis and verification performed for the Small Aircraft Transportation System (SATS) Concept of Operations (ConOps). The concept of operations is modeled using discrete and hybrid mathematical models. These models are then analyzed using formal methods. The objective of the analysis is to show, in a mathematical framework, that the concept of operations complies with a set of safety requirements. It is also shown that the ConOps has some desirable characteristics such as liveness and absence of deadlock. The analysis and verification are performed in the Prototype Verification System (PVS), which is a computer-based specification language and a theorem proving assistant.
The NASA Commercial Crew Program (CCP) Mission Assurance Process
NASA Technical Reports Server (NTRS)
Canfield, Amy
2016-01-01
In 2010, NASA established the Commercial Crew Program in order to provide human access to the International Space Station and low Earth orbit via the commercial (non-governmental) sector. A particular challenge for NASA has been how to determine that a commercial provider's transportation system complies with programmatic safety requirements. The process used in this determination is the Safety Technical Review Board, which reviews and approves provider-submitted Hazard Reports. One significant product of the review is a set of hazard control verifications. In past NASA programs, 100 percent of these safety-critical verifications were typically confirmed by NASA. The traditional Safety and Mission Assurance (SMA) model does not suit the nature of the Commercial Crew Program. To that end, NASA SMA is implementing a Risk Based Assurance (RBA) process to determine which hazard control verifications require NASA authentication. Additionally, a Shared Assurance Model is also being developed to efficiently use the available resources to execute the verifications. This paper will describe the evolution of the CCP Mission Assurance process from the beginning of the Program to its current incarnation. Topics to be covered include a short history of the CCP; the development of the programmatic mission assurance requirements; the current safety review process; a description of the RBA process and its products; and a description of the Shared Assurance Model.
Towards Trustable Digital Evidence with PKIDEV: PKI Based Digital Evidence Verification Model
NASA Astrophysics Data System (ADS)
Uzunay, Yusuf; Incebacak, Davut; Bicakci, Kemal
How can digital evidence be captured and preserved securely? For the investigation and prosecution of criminal activities that involve computers, digital evidence collected at the crime scene is of vital importance. On one hand, it is very challenging for forensics professionals to collect such evidence without loss or damage; on the other, its integrity and authenticity must be ensured to achieve legal acceptance in a court of law. By conceiving digital evidence simply as one instance of digital data, it is evident that modern cryptography offers elegant solutions for this second problem. However, to our knowledge, no previous work has proposed a systematic model with a holistic view of all the related security problems in this particular case of digital evidence verification. In this paper, we present PKIDEV (Public Key Infrastructure based Digital Evidence Verification model) as an integrated solution to provide security for the process of capturing and preserving digital evidence. PKIDEV employs, among other things, cryptographic techniques such as digital signatures and secure time-stamping, as well as technologies such as GPS and EDGE. In our study, we also identify the problems that public-key cryptography brings when it is applied to the verification of digital evidence.
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth A.; Kim, Hak; Phan, Anthony; Seidleck, Christina
2014-01-01
Finite state-machines (FSMs) are used to control operational flow in application specific integrated circuits (ASICs) and field programmable gate array (FPGA) devices. Because of their ease of interpretation, FSMs simplify the design and verification process and consequently are significant components in a synchronous design.
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can generate only low-resolution visual percepts, constituted of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. With this level of visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks such as face identification and object recognition are extremely difficult. It is therefore necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects and significantly improve object recognition performance for recipients implanted with a high-density array. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.
1982-01-01
The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to the mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter.
Alloy, L B; Lipman, A J
1992-05-01
In this commentary we examine Swann, Wenzlaff, Krull, and Pelham's (1992) findings with respect to each of 5 central propositions in self-verification theory. We conclude that although the data are consistent with self-verification theory, none of the 5 components of the theory have been demonstrated convincingly as yet. Specifically, we argue that depressed subjects' selection of social feedback appears to be balanced or evenhanded rather than biased toward negative feedback and that there is little evidence to indicate that depressives actively seek negative appraisals. Furthermore, we suggest that the studies are silent with respect to the motivational postulates of self-verification theory and that a variety of competing cognitive and motivational models can explain Swann et al.'s findings as well as self-verification theory.
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Penix, John; Markosian, Lawrence Z.; OMalley, Owen; Brew, William A.
2003-01-01
Attempts to achieve widespread use of software verification tools have been notably unsuccessful. Even 'straightforward', classic, and potentially effective verification tools such as lint-like tools face limits on their acceptance. These limits are imposed by the expertise required to apply the tools and interpret their results, the high false-positive rate of many verification tools, and the need to integrate the tools into development environments. The barriers are even greater for more complex advanced technologies such as model checking. Web-hosted services for advanced verification technologies may mitigate these problems by centralizing tool expertise. The possible benefits of this approach include eliminating the need for software developers to be expert in tool application and results filtering, and improving integration with other development tools.
A system for automatic evaluation of simulation software
NASA Technical Reports Server (NTRS)
Ryan, J. P.; Hodges, B. C.
1976-01-01
Within the field of computer software, simulation and verification are complementary processes. Simulation methods can be used to verify software by performing variable range analysis. More general verification procedures, such as those described in this paper, can be implicitly viewed as attempts at modeling the end-product software. From a software requirements methodology standpoint, each component of the verification system has some element of simulation to it. Conversely, general verification procedures can be used to analyze simulation software. A dynamic analyzer is described that can be used to obtain properly scaled variables for an analog simulation, which is first digitally simulated. In a similar way, it is thought that the other system components, and indeed the whole system itself, have the potential of being used effectively in a simulation environment.
An elementary tutorial on formal specification and verification using PVS
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1993-01-01
A tutorial on the development of a formal specification and its verification using the Prototype Verification System (PVS) is presented. The tutorial presents the formal specification and verification techniques by way of a specific example: an airline reservation system. The airline reservation system is modeled as a simple state machine with two basic operations. These operations are shown to preserve a state invariant using the theorem-proving capabilities of PVS. The technique of validating a specification via 'putative theorem proving' is also discussed and illustrated in detail. This paper is intended for the novice and assumes only some of the basic concepts of logic. A complete description of the user inputs and the PVS output is provided, so the tutorial can be followed effectively while sitting at a computer terminal.
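The tutorial's specification is written in PVS; as a loose analogue, this Python sketch models a reservation system as a state machine with two operations (hypothetical 'book' and 'cancel') and checks that each operation preserves a plausible state invariant.

```python
# Toy analogue of the tutorial's airline reservation state machine.
# Operation names and the invariant are illustrative, not the PVS spec itself.

def empty_state(num_seats):
    return {seat: None for seat in range(num_seats)}   # seat -> passenger

def invariant(state):
    # No passenger holds two seats (a plausible state invariant).
    held = [p for p in state.values() if p is not None]
    return len(held) == len(set(held))

def book(state, seat, passenger):
    if state[seat] is None and passenger not in state.values():
        state = dict(state)
        state[seat] = passenger
    return state            # no-op if the precondition fails

def cancel(state, seat):
    state = dict(state)
    state[seat] = None
    return state

s = empty_state(3)
for op, args in [(book, (0, "alice")), (book, (1, "alice")), (cancel, (0,))]:
    s = op(s, *args)
    assert invariant(s), "operation broke the invariant"
print(s)
```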
NASA Astrophysics Data System (ADS)
Latypov, Marat I.; Kalidindi, Surya R.
2017-10-01
There is a critical need for the development and verification of practically useful multiscale modeling strategies for simulating the mechanical response of multiphase metallic materials with heterogeneous microstructures. In this contribution, we present data-driven reduced order models for effective yield strength and strain partitioning in such microstructures. These models are built employing the recently developed framework of Materials Knowledge Systems that employ 2-point spatial correlations (or 2-point statistics) for the quantification of the heterostructures and principal component analyses for their low-dimensional representation. The models are calibrated to a large collection of finite element (FE) results obtained for a diverse range of microstructures with various sizes, shapes, and volume fractions of the phases. The performance of the models is evaluated by comparing the predictions of yield strength and strain partitioning in two-phase materials with the corresponding predictions from a classical self-consistent model as well as results of full-field FE simulations. The reduced-order models developed in this work show an excellent combination of accuracy and computational efficiency, and therefore present an important advance towards computationally efficient microstructure-sensitive multiscale modeling frameworks.
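A minimal sketch of the two ingredients named in the abstract, 2-point autocorrelations computed via FFT and a PCA-based low-dimensional representation; the synthetic microstructures, their sizes, and the number of retained components are placeholders, and the regression step to yield strength or strain partitioning is omitted.

```python
import numpy as np

def two_point_autocorrelation(micro):
    """Periodic 2-point autocorrelation of a binary phase indicator field."""
    F = np.fft.fftn(micro)
    return np.real(np.fft.ifftn(F * np.conj(F))) / micro.size

# A small synthetic ensemble of 32x32 two-phase microstructures (placeholder).
rng = np.random.default_rng(0)
ensemble = (rng.random((50, 32, 32)) < rng.uniform(0.2, 0.5, (50, 1, 1)))

stats = np.array([two_point_autocorrelation(m.astype(float)).ravel()
                  for m in ensemble])

# PCA (via SVD of the centered statistics) -> low-dimensional descriptors.
mean = stats.mean(axis=0)
U, S, Vt = np.linalg.svd(stats - mean, full_matrices=False)
pc_scores = U[:, :3] * S[:3]          # first 3 principal component scores
print(pc_scores.shape)                # (50, 3): inputs to a reduced-order model
```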
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.
Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. We therefore present a comparative analysis of two popular approaches used in the modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and results accuracy are evaluated to assess which modeling strategy is the most suitable given its eventual application.
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes, such as standby redundancy and repair, or renewal processes, such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore, model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the need to automate model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model, formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
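ARM itself formulates models automatically from a PMS description; the sketch below only illustrates the kind of Markov model involved, a toy two-unit standby system with repair whose generator matrix is integrated to obtain a reliability-style measure. The states and rates are invented for illustration.

```python
import numpy as np

# Toy 3-state Markov model: state index 0 = both units good,
# 1 = one unit failed (repairable), 2 = system failed (absorbing).
lam, mu = 1e-3, 1e-1          # failure and repair rates (per hour), illustrative
Q = np.array([[-lam,        lam,  0.0],
              [  mu, -(mu + lam), lam],
              [ 0.0,        0.0,  0.0]])   # absorbing failure state

p = np.array([1.0, 0.0, 0.0])  # start with both units good
dt, T = 1.0, 10_000.0
for _ in range(int(T / dt)):   # simple forward-Euler integration of dp/dt = p Q
    p = p + dt * (p @ Q)

print("P(system still up after %.0f h) = %.4f" % (T, p[0] + p[1]))
```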
Modeling the magnetospheric X-ray emission from solar wind charge exchange with verification from XMM-Newton observations
Whittaker, Ian C.; Sembay, Steve; Carter, Jennifer A.; Read, Andrew M.; Milan, Steve E.; Palmroth, Minna
2016-08-26
Journal of Geophysical Research: Space Physics, 121, 4158-4179, doi:10.1002/2015JA022292. Received 21 DEC 2015; accepted 26 FEB 2016.
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define the Saltstone Special Analysis base cases.
TORMES-BEXUS 17 and 19: Precursor of the 6U CubeSat 3CAT-2
NASA Astrophysics Data System (ADS)
Carreno-Luengo, H.; Amezaga, A.; Bolet, A.; Vidal, D.; Jane, J.; Munoz, J. F.; Olive, R.; Camps, A.; Carola, J.; Catarino, N.; Hagenfeldt, M.; Palomo, P.; Cornara, S.
2015-09-01
3Cat-2 Assembly, Integration and Verification (AIV) activities on the Engineering Model (EM) and the Flight Model (FM) are currently being carried out. The Attitude Determination and Control System (ADCS) and Flight Software (FSW) validation campaigns will be performed at Universitat Politècnica de Catalunya (UPC) in the coming months. An analysis and verification of the 3Cat-2 key mission requirements has been performed, and the main results are summarized in this work.
A Mechanism of Modeling and Verification for SaaS Customization Based on TLA
NASA Astrophysics Data System (ADS)
Luan, Shuai; Shi, Yuliang; Wang, Haiyang
With the gradual maturation of SOA and the rapid development of the Internet, SaaS has become a popular software service mode. Customization actions in SaaS are usually subject to internal and external dependency relationships. This paper first introduces a method for modeling the customization process based on the Temporal Logic of Actions (TLA), and then proposes a verification algorithm to assure that each step in customization will not cause unpredictable influence on the system and will follow the related rules defined by the SaaS provider.
Turbulence Modeling Verification and Validation
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2014-01-01
Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important steps in the process. Verification insures that the CFD code is solving the equations as intended (no errors in the implementation). This is typically done either through the method of manufactured solutions (MMS) or through careful step-by-step comparisons with other verified codes. After the verification step is concluded, validation is performed to document the ability of the turbulence model to represent different types of flow physics. Validation can involve a large number of test case comparisons with experiments, theory, or DNS. Organized workshops have proved to be valuable resources for the turbulence modeling community in its pursuit of turbulence modeling verification and validation. Workshop contributors using different CFD codes run the same cases, often according to strict guidelines, and compare results. Through these comparisons, it is often possible to (1) identify codes that have likely implementation errors, and (2) gain insight into the capabilities and shortcomings of different turbulence models to predict the flow physics associated with particular types of flows. These are valuable lessons because they help bring consistency to CFD codes by encouraging the correction of faulty programming and facilitating the adoption of better models. They also sometimes point to specific areas needed for improvement in the models. In this paper, several recent workshops are summarized primarily from the point of view of turbulence modeling verification and validation. 
Furthermore, the NASA Langley Turbulence Modeling Resource website is described. The purpose of this site is to provide a central location where RANS turbulence models are documented, and test cases, grids, and data are provided. The goal of this paper is to provide an abbreviated survey of turbulence modeling verification and validation efforts, summarize some of the outcomes, and give some ideas for future endeavors in this area.
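The abstract names the method of manufactured solutions (MMS) as a standard code verification technique; the following sketch applies MMS to a simple 1D Poisson solver and checks that the observed order of accuracy matches the expected second order. The equation and solver are illustrative stand-ins, not any particular CFD code.

```python
import numpy as np

# Method of manufactured solutions for u_xx = f on [0, 1], u(0) = u(1) = 0.
# Manufacture u(x) = sin(pi x)  =>  f(x) = -pi^2 sin(pi x).
def solve(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = -np.pi ** 2 * np.sin(np.pi * x)
    # Second-order central-difference Laplacian (dense tridiagonal system).
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h ** 2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))   # discretization error

e1, e2 = solve(39), solve(79)     # grid spacing halves between the two runs
print("observed order of accuracy:", np.log2(e1 / e2))   # ~2.0 if the code is correct
```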
Formal verification of medical monitoring software using Z language: a representative sample.
Babamir, Seyed Morteza; Borhani, Mehdi
2012-08-01
Medical monitoring systems are useful aids that assist physicians in keeping patients under constant surveillance; however, whether the systems make sound decisions is a concern for physicians. As a result, verification of the systems' behavior in monitoring patients is a matter of significance. In modern medical systems, patient monitoring is undertaken by software, so software verification of modern medical systems has received attention. Such verification can be achieved with formal languages, which have mathematical foundations. Among others, the Z language is a suitable formal language that has been used for formal verification of systems. This study aims to present a constructive method to verify a representative sample of a medical system, in which the system is visually specified and formally verified against patient constraints stated in the Z language. Exploiting our past experience in formally modeling the Continuous Infusion Insulin Pump (CIIP), we take the CIIP system as a representative sample of medical systems in the present study. The system is responsible for monitoring a diabetic patient's blood sugar.
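The paper's verification is carried out against constraints written in Z; as a loose executable analogue, the sketch below states one plausible patient-safety invariant for an insulin pump controller and checks it exhaustively over a discretized range of readings. The thresholds, dosing rule, and pump limit are hypothetical.

```python
# Loose executable analogue of a patient-safety constraint for a continuous
# infusion insulin pump (CIIP). All numbers are hypothetical, not from the paper.

LOW, HIGH = 70, 180          # mg/dL, illustrative safe blood-sugar band
MAX_RATE = 5.0               # units/hour, illustrative pump limit

def controller(blood_sugar):
    """Toy dosing rule: infuse only above the safe band, never above MAX_RATE."""
    if blood_sugar <= HIGH:
        return 0.0
    return min(MAX_RATE, 0.02 * (blood_sugar - HIGH))

def safety_invariant(blood_sugar, rate):
    # Never infuse when the patient is low or normal; never exceed the pump limit.
    return (rate == 0.0 if blood_sugar <= HIGH else 0.0 < rate <= MAX_RATE)

for bs in range(40, 401, 5):     # exhaustive check over a discretized range
    assert safety_invariant(bs, controller(bs)), bs
print("invariant holds for all sampled readings")
```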
Verification and Validation of a Three-Dimensional Generalized Composite Material Model
NASA Technical Reports Server (NTRS)
Hoffarth, Canio; Harrington, Joseph; Subramaniam, D. Rajan; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther
2014-01-01
A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800- F3900 fiber/resin composite material.
Verification and Validation of a Three-Dimensional Generalized Composite Material Model
NASA Technical Reports Server (NTRS)
Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam D.; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther
2015-01-01
A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material
Formal Analysis of BPMN Models Using Event-B
NASA Astrophysics Data System (ADS)
Bryans, Jeremy W.; Wei, Wei
The use of business process models has gone far beyond documentation purposes. In the development of business applications, they can play the role of an artifact on which high level properties can be verified and design errors can be revealed in an effort to reduce overhead at later software development and diagnosis stages. This paper demonstrates how formal verification may add value to the specification, design and development of business process models in an industrial setting. The analysis of these models is achieved via an algorithmic translation from the de-facto standard business process modeling language BPMN to Event-B, a widely used formal language supported by the Rodin platform which offers a range of simulation and verification technologies.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Nwadike, E. V.; Sinha, S. E.
1982-01-01
The theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model is described. Model verification at two sites and a separate user's manual for each model are included. The 3-D model has two forms: free surface and rigid lid. The former allows a free air/water interface and is suited for significant surface wave heights compared to the mean water depth, e.g., estuaries and coastal regions. The latter is suited for small surface wave heights compared to depth because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free surface model also provides surface height variations with time.
Using formal methods for content validation of medical procedure documents.
Cota, Érika; Ribeiro, Leila; Bezerra, Jonas Santos; Costa, Andrei; da Silva, Rosiana Estefane; Cota, Gláucia
2017-08-01
We propose the use of a formal approach to support content validation of a standard operating procedure (SOP) for a therapeutic intervention. Such an approach provides a useful tool to identify ambiguities, omissions and inconsistencies, and improves the applicability and efficacy of documents in health settings. We apply and evaluate a methodology originally proposed for the verification of software specification documents on a specific SOP. The verification methodology uses the graph formalism to model the document. Semi-automatic analysis identifies possible problems in the model and in the original document. The verification is an iterative process that identifies possible faults in the original text that should be revised by its authors and/or specialists. The proposed method was able to identify 23 possible issues in the original document (ambiguities, omissions, redundant information, and inaccuracies, among others). The formal verification process helped the specialists to consider a wider range of usage scenarios and to identify which instructions form the kernel of the proposed SOP and which represent additional or required knowledge that is mandatory for the correct application of the medical document. By using the proposed verification process, a simpler and yet more complete SOP could be produced. As a consequence, during the validation process the experts received a more mature document and could focus on the technical aspects of the procedure itself. Copyright © 2017 Elsevier B.V. All rights reserved.
SU-F-J-32: Do We Need KV Imaging During CBCT Based Patient Set-Up for Lung Radiation Therapy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopal, A; Zhou, J; Prado, K
Purpose: To evaluate the role of 2D kilovoltage (kV) imaging to complement cone beam CT (CBCT) imaging in a shift-threshold-based image guided radiation therapy (IGRT) strategy for conventional lung radiotherapy. Methods: A retrospective study was conducted by analyzing IGRT couch shift trends for 15 patients who received lung radiation therapy to evaluate the benefit of performing orthogonal kV imaging prior to CBCT imaging. Herein, a shift-threshold-based IGRT protocol was applied, which would mandate additional CBCT verification if the applied patient shifts exceeded 3 mm, to avoid intraobserver variability in CBCT registration and to confirm table shifts. For each patient, two IGRT strategies, kV + CBCT and CBCT alone, were compared, and the recorded patient shifts were categorized according to whether additional CBCT acquisition would have been mandated or not. The effectiveness of either strategy was gauged by the likelihood of needing additional CBCT imaging for accurate patient set-up. Results: The use of CBCT alone was 6 times more likely to require an additional CBCT than kV + CBCT for a 3 mm shift threshold (88% vs 14%). The likelihood of additional CBCT verification generally increased with lower shift thresholds, and was significantly lower when kV + CBCT was used (7% with a 5 mm shift threshold, 36% with a 2 mm threshold) than with CBCT alone (61% with a 5 mm shift threshold, 97% with a 2 mm threshold). With CBCT alone, treatment time increased by 2.2 min and dose increased by 1.9 cGy per fraction on average due to additional CBCT with a 3 mm shift threshold. Conclusion: The benefit of kV imaging to screen for gross misalignments led to more accurate CBCT-based patient localization compared with using CBCT alone. The subsequently reduced need for additional CBCT verification will minimize treatment time and result in less overall patient imaging dose.
Verification and Validation for Flight-Critical Systems (VVFCS)
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Jacobsen, Robert A.
2010-01-01
On March 31, 2009, a Request for Information (RFI) was issued by NASA's Aviation Safety Program to gather input on the subject of Verification and Validation (V&V) of Flight-Critical Systems. The responses were provided to NASA on or before April 24, 2009. The RFI asked for comments in three topic areas: Modeling and Validation of New Concepts for Vehicles and Operations; Verification of Complex Integrated and Distributed Systems; and Software Safety Assurance. There were a total of 34 responses to the RFI, representing a cross-section of academia (26%), small and large industry (47%), and government agencies (27%).
NASA Technical Reports Server (NTRS)
Kashangaki, Thomas A. L.
1992-01-01
This paper describes a series of modal tests that were performed on a cantilevered truss structure. The goal of the tests was to assemble a large database of high-quality modal test data for use in verification of proposed methods for on-orbit model verification and damage detection in flexible truss structures. A description of the hardware is provided along with details of the experimental setup and procedures for 16 damage cases. Results from selected cases are presented and discussed. Differences between ground vibration testing and on-orbit modal testing are also described.
Report on the formal specification and partial verification of the VIPER microprocessor
NASA Technical Reports Server (NTRS)
Brock, Bishop; Hunt, Warren A., Jr.
1991-01-01
The formal specification and partial verification of the VIPER microprocessor is reviewed. The VIPER microprocessor was designed by RSRE, Malvern, England, for safety critical computing applications (e.g., aircraft, reactor control, medical instruments, armaments). The VIPER was carefully specified and partially verified in an attempt to provide a microprocessor with completely predictable operating characteristics. The specification of VIPER is divided into several levels of abstraction, from a gate-level description up to an instruction execution model. Although the consistency between certain levels was demonstrated with mechanically-assisted mathematical proof, the formal verification of VIPER was never completed.
Electric power system test and verification program
NASA Technical Reports Server (NTRS)
Rylicki, Daniel S.; Robinson, Frank, Jr.
1994-01-01
Space Station Freedom's (SSF's) electric power system (EPS) hardware and software verification is performed at all levels of integration, from components to assembly and system level tests. Careful planning is essential to ensure the EPS is tested properly on the ground prior to launch. The results of the test performed on breadboard model hardware and analyses completed to date have been evaluated and used to plan for design qualification and flight acceptance test phases. These results and plans indicate the verification program for SSF's 75-kW EPS would have been successful and completed in time to support the scheduled first element launch.
Kobayashi, Hiroki; Harada, Hiroko; Nakamura, Masaomi; Futamura, Yushi; Ito, Akihiro; Yoshida, Minoru; Iemura, Shun-Ichiro; Shin-Ya, Kazuo; Doi, Takayuki; Takahashi, Takashi; Natsume, Tohru; Imoto, Masaya; Sakakibara, Yasubumi
2012-04-05
Identification of the target proteins of bioactive compounds is critical for elucidating their mode of action; however, target identification has generally been difficult, mostly due to the low sensitivity of detection using affinity chromatography followed by CBB staining and MS/MS analysis. We applied our protocol for predicting target proteins, combining in silico screening and experimental verification, to incednine, which inhibits the anti-apoptotic function of Bcl-xL by an unknown mechanism. One hundred eighty-two target protein candidates were computationally predicted to bind to incednine by the statistical prediction method, and the predictions were verified by in vitro binding of incednine to seven proteins whose expression could be confirmed in our cell system. As a result, 40% accuracy of the computational predictions was achieved, and we newly identified 3 incednine-binding proteins. This study revealed that our proposed protocol of predicting target proteins by combining in silico screening and experimental verification is useful, and it provides new insight into a strategy for identifying the target proteins of small molecules.
NASA Astrophysics Data System (ADS)
Kim, Youngmi; Choi, Jae-Young; Choi, Kwangseon; Choi, Jung-Hoe; Lee, Sooryong
2011-04-01
As IC design complexity keeps increasing, it is more and more difficult to ensure pattern transfer after optical proximity correction (OPC) due to the continuous reduction of layout dimensions and the lithographic limitation imposed by the k1 factor. To guarantee imaging fidelity, resolution enhancement technologies (RET) such as off-axis illumination (OAI), different types of phase shift masks, and OPC techniques have been developed. In the case of model-based OPC, to cross-check the contour image against the target layout, post-OPC verification solutions continue to be developed: methods for generating contours and matching them to the target structure, and methods for filtering and sorting patterns to eliminate false errors and duplicates. Detecting only real errors by excluding false errors is the most important requirement for an accurate and fast verification process, saving not only review time and engineering resources but also overall wafer process time. In the general case of post-OPC verification for metal-contact/via coverage (CC) checks, the verification solution outputs a huge number of errors due to borderless design, so it is too difficult to review and correct all of them. This can cause the OPC engineer to miss real defects and, at the least, delay time to market. In this paper, we studied a method for increasing the efficiency of post-OPC verification, especially for the CC check. For metal layers, the final CD after the etch process shows various CD biases depending on the distance to neighboring patterns, so it is more reasonable to consider the final metal shape when confirming contact/via coverage. Through optimization of the biasing rule for different pitches and shapes of metal lines, we could obtain more accurate and efficient verification results and decrease the review time needed to find real errors. The suggestion presented here increases the efficiency of the OPC verification process by applying a simple biasing rule to the metal layout instead of applying an etch model.
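A minimal sketch of the kind of rule-based metal biasing the paper suggests: widen each metal segment according to a pitch-dependent bias table before checking contact/via coverage, instead of applying a full etch model. The bias values and the simplified 1-D geometry are placeholders.

```python
# Placeholder pitch-dependent bias table (nm of edge bias per side); real rule
# values would come from measured etch CD bias versus pitch.
BIAS_TABLE = [(100, 4.0), (200, 2.0), (400, 1.0)]   # (max pitch, bias)

def edge_bias(pitch):
    for max_pitch, bias in BIAS_TABLE:
        if pitch <= max_pitch:
            return bias
    return 0.0                                      # isolated lines: no bias

def biased_metal(metal, pitch):
    """metal = (x_left, x_right) of a 1-D cut through the line."""
    b = edge_bias(pitch)
    return (metal[0] - b, metal[1] + b)

def via_covered(via, metal):
    """Simple 1-D coverage check: the via span must lie inside the metal span."""
    return metal[0] <= via[0] and via[1] <= metal[1]

via, drawn_metal = (48.0, 72.0), (50.0, 70.0)
print(via_covered(via, drawn_metal))                           # flagged pre-bias
print(via_covered(via, biased_metal(drawn_metal, pitch=150)))  # passes after biasing
```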
Hiraishi, Kunihiko
2014-01-01
One of the significant topics in systems biology is to develop control theory of gene regulatory networks (GRNs). In typical control of GRNs, expression of some genes is inhibited (activated) by manipulating external stimuli and expression of other genes. It is expected to apply control theory of GRNs to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, and gene expression is expressed by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), which is one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide us useful tools in control theory of GRNs. PMID:24587766
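A CS-PBN adds probabilistic context switching that is not reproduced here; the sketch below only shows a deterministic Boolean network with invented three-gene update rules and an exhaustive reachability query of the sort a model checker such as PRISM automates.

```python
from itertools import product

# Toy 3-gene Boolean network; the update rules are invented for illustration only.
def step(state):
    a, b, c = state
    return (b and not c,        # gene A activated by B, inhibited by C
            a,                  # gene B follows A
            a or b)             # gene C activated by A or B

def reachable(initial):
    """Exhaustively enumerate states reachable under synchronous updates."""
    seen, frontier = {initial}, [initial]
    while frontier:
        nxt = step(frontier.pop())
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
    return seen

# Verification-style query: can the 'undesirable' state (A on, B on, C off)
# be reached from any initial condition?
target = (True, True, False)
hits = [s for s in product([False, True], repeat=3) if target in reachable(s)]
print("initial states from which the target is reachable:", hits)
```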
Authentication Based on Pole-zero Models of Signature Velocity
Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad
2013-01-01
With the increase of communication and financial transactions over the internet, online signature verification is an accepted biometric technology for access control and plays a significant role in authentication and authorization in modern society. Therefore, fast and precise algorithms for signature verification are very attractive. The goal of this paper is the modeling of the velocity signal, whose pattern and properties are stable for each person. Using pole-zero models based on the discrete cosine transform, a precise modeling method is proposed and features are then derived from strokes. Using linear, Parzen-window, and support vector machine classifiers, the signature verification technique was tested with a large number of authentic and forged signatures and demonstrated good potential. The signatures were collected from three different databases: a proprietary database, and the SVC2004 and Sabanci University (SUSIG) benchmark signature databases. Experimental results based on the Persian, SVC2004 and SUSIG databases show that our method achieves equal error rates of 5.91%, 5.62% and 3.91% for skilled forgeries, respectively. PMID:24696797
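The paper's pole-zero modeling is not reproduced here; the sketch below only illustrates the general idea of compressing a signature velocity signal into a small feature vector using the discrete cosine transform (SciPy assumed available), and shows that the low-order coefficients stay fairly stable across noisy repetitions of the same stroke.

```python
import numpy as np
from scipy.fft import dct

def velocity(x, y, dt=0.01):
    """Pen-tip speed from sampled x/y coordinates of a signature."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    return np.hypot(vx, vy)

def dct_features(v, n_coeffs=16):
    """Keep the first DCT coefficients of the length-normalized velocity."""
    v = np.interp(np.linspace(0, 1, 256), np.linspace(0, 1, len(v)), v)
    return dct(v, norm='ortho')[:n_coeffs]

# Two noisy repetitions of the same (synthetic) signature stroke.
t = np.linspace(0, 1, 300)
x1, y1 = np.cos(4 * np.pi * t), np.sin(6 * np.pi * t)
x2 = x1 + 0.005 * np.random.randn(300)
y2 = y1 + 0.005 * np.random.randn(300)

f1, f2 = dct_features(velocity(x1, y1)), dct_features(velocity(x2, y2))
print(np.linalg.norm(f1 - f2))   # small relative to the feature norm
```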
Using software security analysis to verify the secure socket layer (SSL) protocol
NASA Technical Reports Server (NTRS)
Powell, John D.
2004-01-01
The National Aeronautics and Space Administration (NASA) has tens of thousands of networked computer systems and applications. Software security vulnerabilities present risks such as lost or corrupted data, information theft, and unavailability of critical systems. These risks represent potentially enormous costs to NASA. The NASA Code Q research initiative 'Reducing Software Security Risk (RSSR) Through an Integrated Approach' offers, among its capabilities, formal verification of software security properties through the use of model-based verification (MBV) to address software security risks. [1,2,3,4,5,6] MBV is a formal approach to software assurance that combines analysis of software, via abstract models, with technology, such as model checkers, that provides automation of the mechanical portions of the analysis process. This paper will discuss: the need for formal analysis to assure software systems with respect to security and why testing alone cannot provide it; the means by which MBV with a Flexible Modeling Framework (FMF) accomplishes the necessary analysis task; and an example of FMF-style MBV in the verification of properties of the Secure Socket Layer (SSL) communication protocol as a demonstration.
Using ICT techniques for improving mechatronic systems' dependability
NASA Astrophysics Data System (ADS)
Miron, Emanuel; Silva, João P. M. A.; Machado, José; Olaru, Dumitru; Prisacaru, Gheorghe
2013-10-01
The use of analysis techniques such as simulation and formal verification for industrial controllers is complex in an industrial context. This complexity is due to the fact that such techniques sometimes require a high investment in specifically skilled human resources with sufficient theoretical knowledge in those domains. This paper aims, mainly, to show that it is possible to obtain a timed automata model for formal verification purposes from the CAD model of a mechanical component. This systematic approach can be used by companies for the analysis of industrial controller programs. For this purpose, the paper discusses the best way to systematize these procedures; it describes only the first step of a complex process and promotes a discussion of the main difficulties that can be found and a possible way to handle them. A library for formal verification purposes is obtained from original 3D CAD models using a Software as a Service (SaaS) platform, which has nowadays become a common delivery model for many applications because it is typically accessed by users over the internet.
The FoReVer Methodology: A MBSE Framework for Formal Verification
NASA Astrophysics Data System (ADS)
Baracchi, Laura; Mazzini, Silvia; Cimatti, Alessandro; Tonetta, Stefano; Garcia, Gerald
2013-08-01
The need for a high level of confidence and operational integrity in critical space (software) systems is well recognized in the space industry and has been addressed so far through rigorous system and software development processes and stringent verification and validation regimes. The Model Based Space System Engineering (MBSSE) process derived in the System and Software Functional Requirement Techniques study (SSFRT) focused on the application of model-based engineering technologies to support the space system and software development processes, from mission-level requirements to software implementation, through model refinements and translations. In this paper we report on our work in the ESA-funded FoReVer project, where we aim at developing methodological, theoretical and technological support for a systematic approach to space avionics system development in phases 0/A/B/C. FoReVer enriches the MBSSE process with contract-based formal verification of properties, at different stages from system to software, through a step-wise refinement approach, with support for a Software Reference Architecture.
Constrained structural dynamic model verification using free vehicle suspension testing methods
NASA Technical Reports Server (NTRS)
Blair, Mark A.; Vadlamudi, Nagarjuna
1988-01-01
Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.
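A small numerical illustration of the relationship the paper exploits, using an invented three-mass free-free chain: the minima (antiresonances) of the drive-point transfer function at the interface DOF of the freely suspended model fall at the natural frequencies of the same model constrained at that interface. Masses, stiffnesses, and the choice of interface DOF are placeholders.

```python
import numpy as np

# Free-free 3-mass chain; DOF 0 plays the role of the payload/booster interface.
m = np.diag([1.0, 2.0, 1.5])
k1, k2 = 5.0e4, 3.0e4
K = np.array([[ k1,     -k1,  0.0],
              [-k1, k1 + k2, -k2],
              [0.0,     -k2,  k2]])

# Drive-point FRF at the interface DOF of the free-free (suspended) structure.
w = np.linspace(1.0, 400.0, 20000)
H00 = np.array([np.linalg.inv(K - (wi ** 2) * m)[0, 0] for wi in w])
mag = np.abs(H00)
is_min = (mag[1:-1] < mag[:-2]) & (mag[1:-1] < mag[2:])
antiresonances = np.sort(w[1:-1][is_min])

# Natural frequencies of the same model constrained at the interface DOF.
Kc, Mc = K[1:, 1:], m[1:, 1:]
constrained = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.inv(Mc) @ Kc).real))

print("FRF minima (rad/s):           ", np.round(antiresonances, 1))
print("constrained mode freqs (rad/s):", np.round(constrained, 1))   # should agree
```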
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee
2015-09-01
This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process, including a set of specially designed software models used to test RELAP-7.
Fabrication and verification testing of ETM 30 cm diameter ion thrusters
NASA Technical Reports Server (NTRS)
Collett, C.
1977-01-01
Engineering model designs and acceptance tests are described for the 800- and 900-series 30 cm electron bombardment thrusters. Modifications to the test console for a 1000 hr verification test were made. The 10,000 hr endurance test of the S/N 701 thruster is described, and post-test analysis results are included.
Verification testing of the Practical Best Management, Inc., CrystalStream™ stormwater treatment system was conducted over a 15-month period starting in March, 2003. The system was installed in a test site in Griffin, Georgia, and served a drainage basin of approximately 4 ...
Evaluation of Mesoscale Model Phenomenological Verification Techniques
NASA Technical Reports Server (NTRS)
Lambert, Winifred
2006-01-01
Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL, use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations and must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations by small increments in time and/or space. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready for operational use, and outline the steps needed to implement any operationally ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of the literature on phenomenologically based mesoscale model verification techniques and found 10 different techniques in various stages of development. Six of the techniques were developed to verify precipitation forecasts, one to verify sea breeze forecasts, and three were capable of verifying several phenomena. The AMU also determined the feasibility of transitioning each technique into operations and rated the operational capability of each technique on a subjective 1-10 scale: (1) 1 indicates that the technique is only in the initial stages of development, (2) 2-5 indicates that the technique is still undergoing modifications and is not ready for operations, (3) 6-8 indicates a higher probability of integrating the technique into AWIPS with code modifications, and (4) 9-10 indicates that the technique was created for AWIPS and is ready for implementation. Eight of the techniques were assigned a rating of 5 or below. The other two received ratings of 6 and 7, and none of the techniques received a rating of 9-10. At the current time, there are no phenomenological model verification techniques ready for operational use. However, several of the techniques described in this report may become viable in the future and should be monitored for updates in the literature. The desire to use a phenomenological verification technique is widespread in the modeling community, and it is likely that other techniques besides those described herein are being developed but not yet published. Therefore, the AMU recommends that the literature continue to be monitored for updates to the techniques described in this report and for new techniques whose results have not yet been published.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.
1982-01-01
The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to the mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.
Loads and Structural Dynamics Requirements for Spaceflight Hardware
NASA Technical Reports Server (NTRS)
Schultz, Kenneth P.
2011-01-01
The purpose of this document is to establish requirements relating to the loads and structural dynamics technical discipline for NASA and commercial spaceflight launch vehicle and spacecraft hardware. Requirements are defined for the development of structural design loads and recommendations regarding methodologies and practices for the conduct of load analyses are provided. As such, this document represents an implementation of NASA STD-5002. Requirements are also defined for structural mathematical model development and verification to ensure sufficient accuracy of predicted responses. Finally, requirements for model/data delivery and exchange are specified to facilitate interactions between Launch Vehicle Providers (LVPs), Spacecraft Providers (SCPs), and the NASA Technical Authority (TA) providing insight/oversight and serving in the Independent Verification and Validation role. In addition to the analysis-related requirements described above, a set of requirements are established concerning coupling phenomena or other interaction between structural dynamics and aerodynamic environments or control or propulsion system elements. Such requirements may reasonably be considered structure or control system design criteria, since good engineering practice dictates consideration of and/or elimination of the identified conditions in the development of those subsystems. The requirements are included here, however, to ensure that such considerations are captured in the design space for launch vehicles (LV), spacecraft (SC) and the Launch Abort Vehicle (LAV). The requirements in this document are focused on analyses to be performed to develop data needed to support structural verification. As described in JSC 65828, Structural Design Requirements and Factors of Safety for Spaceflight Hardware, implementation of the structural verification requirements is expected to be described in a Structural Verification Plan (SVP), which should describe the verification of each structural item for the applicable requirements. The requirement for and expected contents of the SVP are defined in JSC 65828. The SVP may also document unique verifications that meet or exceed these requirements with Technical Authority approval.
NASA Astrophysics Data System (ADS)
Wood, Andy; Clark, Elizabeth; Mendoza, Pablo; Nijssen, Bart; Newman, Andy; Clark, Martyn; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
Many if not most national operational streamflow prediction systems rely on a forecaster-in-the-loop approach that requires the hands-on effort of an experienced human forecaster. This approach evolved from the need to correct for long-standing deficiencies in the models and datasets used in forecasting, and the practice often leads to skillful flow predictions despite the use of relatively simple, conceptual models. Yet the 'in-the-loop' forecast process is not reproducible, which limits opportunities to assess and incorporate new techniques systematically, and the effort required to make forecasts in this way is an obstacle to expanding forecast services, e.g., through adding new forecast locations or more frequent forecast updates, running more complex models, or producing forecasts and hindcasts that can support verification. In the last decade, the hydrologic forecasting community has begun to develop more centralized, 'over-the-loop' systems. The quality of these new forecast products will depend on their ability to leverage research in areas including earth system modeling, parameter estimation, data assimilation, statistical post-processing, weather and climate prediction, verification, and uncertainty estimation through the use of ensembles. Currently, many national operational streamflow forecasting and water management communities have little experience with the strengths and weaknesses of over-the-loop approaches, even as such systems are beginning to be deployed operationally in centers such as ECMWF. There is thus a need both to evaluate these forecasting advances and to demonstrate their potential in a public arena, raising awareness in forecast user communities and development programs alike. To address this need, the US National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the US Army Corps of Engineers, using the NCAR 'System for Hydromet Analysis Research and Prediction Applications' (SHARP) to implement, assess and demonstrate real-time over-the-loop ensemble flow forecasts in a range of US watersheds. The system relies on fully ensemble-based techniques, including: a 100-member ensemble of meteorological model forcings and an ensemble particle filter data assimilation for initializing watershed states; analog/regression-based downscaling of ensemble weather forecasts from GEFS; and statistical post-processing of ensemble forecast outputs, all of which run in real time within a workflow managed by ECMWF's ecFlow libraries over large US regional domains. We describe SHARP and present early hindcast and verification results for short- to seasonal-range streamflow forecasts in a number of US case study watersheds.
DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks.
Li, Chao; Wang, Xinggang; Liu, Wenyu; Latecki, Longin Jan
2018-04-01
Mitotic count is a critical predictor of tumor aggressiveness in the breast cancer diagnosis. Nowadays mitosis counting is mainly performed by pathologists manually, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting the mitotic cells from histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating mitosis region when only a weak label is given (i.e., only the centroid pixel of mitosis is annotated), an elaborately designed deep detection network for localizing mitosis by using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from ICPR 2012 grand challenge merely using the deep detection network. For the ICPR 2014 MITOSIS dataset that only provides the centroid location of mitosis, we employ the segmentation model to estimate the bounding box annotation for training the deep detection network. We also apply the verification model to eliminate some false positives produced from the detection model. By fusing scores of the detection and verification models, we achieve the state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.
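The abstract does not give the fusion rule; the sketch below shows one common late-fusion choice, a weighted average of the detection and verification scores with a hypothetical weight and threshold.

```python
# Hypothetical late fusion of detection and verification scores for each
# candidate mitosis; the weighting and threshold are illustrative only.
ALPHA, THRESHOLD = 0.6, 0.5

def fuse(det_score, ver_score, alpha=ALPHA):
    """Weighted average of the detection and verification network scores."""
    return alpha * det_score + (1.0 - alpha) * ver_score

candidates = [  # (detection score, verification score) per candidate region
    (0.92, 0.85),   # strong candidate: kept
    (0.70, 0.05),   # likely false positive: the verification score suppresses it
    (0.40, 0.95),   # weak detection rescued by verification
]
kept = [c for c in candidates if fuse(*c) >= THRESHOLD]
print(kept)
```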
Creep fatigue life prediction for engine hot section materials (ISOTROPIC)
NASA Technical Reports Server (NTRS)
Nelson, R. S.; Schoendorf, J. F.; Lin, L. S.
1986-01-01
The specific activities summarized include: verification experiments (base program); thermomechanical cycling model; multiaxial stress state model; cumulative loading model; screening of potential environmental and protective coating models; and environmental attack model.
Quantitative reactive modeling and verification.
Henzinger, Thomas A
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
ERIC Educational Resources Information Center
Andrzejewska, Magdalena; Stolinska, Anna; Blasiak, Wladyslaw; Peczkowski, Pawel; Rosiek, Roman; Rozek, Bozena; Sajka, Miroslawa; Wcislo, Dariusz
2016-01-01
The results of qualitative and quantitative investigations conducted with individuals who learned algorithms in school are presented in this article. In these investigations, eye-tracking technology was used to follow the process of solving algorithmic problems. The algorithmic problems were presented in two comparable variants: in a pseudocode…
Stern, Robin L; Heaton, Robert; Fraser, Martin W; Goddu, S Murty; Kirby, Thomas H; Lam, Kwok Leung; Molineu, Andrea; Zhu, Timothy C
2011-01-01
The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
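As a toy illustration of the kind of secondary check the report discusses (not the task group's recommended procedure), the snippet below compares a primary and an independent verification MU calculation against a clinic-chosen action level; the 5% value is a placeholder, not a recommendation from the report.

```python
def mu_verification_check(primary_mu, secondary_mu, action_level_pct=5.0):
    """Compare primary and independent secondary MU calculations.

    Returns the percent difference (relative to the primary) and whether it
    exceeds the action level, at which point the discrepancy would need to
    be investigated before treatment.
    """
    pct_diff = 100.0 * (secondary_mu - primary_mu) / primary_mu
    return pct_diff, abs(pct_diff) > action_level_pct

diff, flagged = mu_verification_check(primary_mu=212.0, secondary_mu=221.5)
print(f"difference = {diff:+.1f}%, exceeds action level: {flagged}")
```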
NASA Astrophysics Data System (ADS)
Obuchowski, Nancy A.; Bullen, Jennifer A.
2018-04-01
Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
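A minimal, hedged sketch of the basic single-reader ROC quantities discussed here, using scikit-learn on synthetic scores; the article's multi-reader, clustered-data, and bias-correction methods require dedicated tools beyond this example.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Hypothetical diagnostic test scores: diseased cases score higher on average
y_true = np.r_[np.zeros(200, int), np.ones(100, int)]
scores = np.r_[rng.normal(0.0, 1.0, 200), rng.normal(1.2, 1.0, 100)]

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)

# Sensitivity and specificity at one candidate threshold
idx = np.argmin(np.abs(thresholds - 0.5))
print(f"AUC = {auc:.3f}, sens = {tpr[idx]:.2f}, spec = {1 - fpr[idx]:.2f} near threshold 0.5")
```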
Verification technology of remote sensing camera satellite imaging simulation based on ray tracing
NASA Astrophysics Data System (ADS)
Gu, Qiongqiong; Chen, Xiaomei; Yang, Deyun
2017-08-01
Remote sensing satellite camera imaging simulation technology is broadly used to evaluate satellite imaging quality and to test the data application system, but the simulation precision is hard to examine. In this paper, we propose an experimental simulation verification method based on comparing the effects of test parameter variations. Starting from the ray-tracing-based simulation model, the experiment verifies the model precision by changing the types of devices, which correspond to the parameters of the model. The experimental results show that the similarity between the ray-tracing-based imaging model and the experimental image is 91.4%, indicating that the model simulates the remote sensing satellite imaging system very well.
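The abstract does not define the 91.4% similarity metric; the sketch below uses zero-mean normalized cross-correlation as one common image-similarity measure, applied to synthetic stand-ins for the simulated and experimental images.

```python
import numpy as np

def normalized_cross_correlation(img_a, img_b):
    """Zero-mean normalized cross-correlation between two same-size images.

    Returns a value in [-1, 1]; values near 1 indicate high similarity.
    """
    a = img_a.astype(float) - img_a.mean()
    b = img_b.astype(float) - img_b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

rng = np.random.default_rng(2)
simulated = rng.random((256, 256))                                   # stand-in simulated image
measured = simulated + 0.1 * rng.standard_normal((256, 256))         # noisy "experimental" image
print(f"similarity = {normalized_cross_correlation(simulated, measured):.3f}")
```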
Nuclear Engine System Simulation (NESS) version 2.0
NASA Technical Reports Server (NTRS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.
1993-01-01
The topics are presented in viewgraph form and include the following: nuclear thermal propulsion (NTP) engine system analysis program development; nuclear thermal propulsion engine analysis capability requirements; team resources used to support NESS development; expanded liquid engine simulations (ELES) computer model; ELES verification examples; NESS program development evolution; past NTP ELES analysis code modifications and verifications; general NTP engine system features modeled by NESS; representative NTP expander, gas generator, and bleed engine system cycles modeled by NESS; NESS program overview; NESS program flow logic; enabler (NERVA type) nuclear thermal rocket engine; prismatic fuel elements and supports; reactor fuel and support element parameters; reactor parameters as a function of thrust level; internal shield sizing; and reactor thermal model.
NASA Technical Reports Server (NTRS)
Muheim, Danniella; Menzel, Michael; Mosier, Gary; Irish, Sandra; Maghami, Peiman; Mehalick, Kimberly; Parrish, Keith
2010-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2014. System-level verification of critical performance requirements will rely on integrated observatory models that predict the wavefront error accurately enough to verify that the allocated top-level wavefront error of 150 nm root-mean-squared (rms) through to the wavefront sensor focal plane is met. The assembled models themselves are complex and require the insight of technical experts to assess their ability to meet their objectives. This paper describes the systems engineering and modeling approach used on the JWST through the detailed design phase.
Using Model Replication to Improve the Reliability of Agent-Based Models
NASA Astrophysics Data System (ADS)
Zhong, Wei; Kim, Yushim
The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication in NetLogo, by a different author, of an ABM representing fraudulent behavior in a public service delivery system that was originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.
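As a hedged sketch of one way a replication can be "docked" against the original, the snippet below compares distributions of a summary output from the two implementations with a two-sample Kolmogorov-Smirnov test; the run data are synthetic stand-ins, not the paper's results.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
# Hypothetical summary outputs (e.g., fraud losses per run) from the original
# MASON implementation and the NetLogo replication, 50 runs each
mason_runs = rng.gamma(2.0, 10.0, size=50)
netlogo_runs = rng.gamma(2.0, 10.5, size=50)

stat, p_value = ks_2samp(mason_runs, netlogo_runs)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A large p-value fails to reject distributional equivalence of the two
# implementations for this output; docking typically checks several outputs.
```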
NASA Astrophysics Data System (ADS)
Butov, R. A.; Drobyshevsky, N. I.; Moiseenko, E. V.; Tokarev, U. N.
2017-11-01
The verification of the FENIA finite element code on some problems and an example of its application are presented in the paper. The code is being developed for 3D modelling of thermal, mechanical and hydrodynamical (THM) problems related to the functioning of deep geological repositories. Verification of the code for two analytical problems has been performed. The first is a point heat source with exponentially decreasing heat output; the second is a linear heat source with similar behavior. Analytical solutions have been obtained by the authors. The problems have been chosen because they reflect the processes influencing the thermal state of a deep geological repository of radioactive waste. Verification was performed for several meshes with different resolutions. Good convergence between analytical and numerical solutions was achieved. The application of the FENIA code is illustrated by 3D modelling of the thermal state of a prototypic deep geological repository of radioactive waste. The repository is designed for disposal of radioactive waste in rock at a depth of several hundred meters with no intention of later retrieval. Vitrified radioactive waste is placed in containers, which are placed in vertical boreholes. The residual decay heat of the radioactive waste leads to heating of the containers, engineered safety barriers and host rock. Maximum temperatures and the corresponding times of their establishment have been determined.
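A minimal sketch of the mesh-convergence part of such a verification exercise: computing the observed order of accuracy from errors against an analytical solution on successively refined meshes. The error values below are hypothetical, not FENIA results.

```python
import numpy as np

def observed_order(err_coarse, err_fine, refinement_ratio):
    """Observed order of accuracy from errors on two systematically refined meshes."""
    return np.log(err_coarse / err_fine) / np.log(refinement_ratio)

# Hypothetical L2 temperature errors against the analytical point-source solution,
# on meshes refined by a factor of 2
errors = [4.0e-2, 1.1e-2, 2.9e-3]
for e_coarse, e_fine in zip(errors[:-1], errors[1:]):
    print(f"observed order ~ {observed_order(e_coarse, e_fine, 2.0):.2f}")
```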
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme
NASA Astrophysics Data System (ADS)
Veljović, K.; Rajković, B.; Mesinger, F.
2009-04-01
Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, which is a standard RCM verification method. The results we have at this point are somewhat limited in view of the integrations having been done only for 10-day forecasts. Even so, one should note that they are among very few done using forecast as opposed to reanalysis or analysis global driving data. Our results suggest that (1) when running the Eta as an RCM, no significant loss of large-scale kinetic energy with time seems to be taking place; (2) no disadvantage from using the Eta LBC scheme compared to the relaxation scheme is seen, while enjoying the advantage of the scheme being significantly less demanding than the relaxation given that it needs driver model fields at the outermost domain boundary only; and (3) the Eta RCM skill in forecasting large scales, with no large scale nudging, seems to be just about the same as that of the driver model, or, in the terminology of Castro et al., the Eta RCM does not lose "value of the large scale" which exists in the larger global analyses used for the initial condition and for verification.
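For reference, the snippet below computes the standard bias score and equitable threat score from a 2x2 contingency table of threshold exceedances; the study applies a bias-adjusted variant of the ETS, and the counts here are purely illustrative.

```python
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """Bias score and equitable threat score from a 2x2 contingency table of
    events such as 'wind speed exceeds the chosen threshold'."""
    total = hits + false_alarms + misses + correct_negatives
    bias = (hits + false_alarms) / (hits + misses)
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return bias, ets

bias, ets = categorical_scores(hits=412, false_alarms=95, misses=130, correct_negatives=5363)
print(f"bias = {bias:.2f}, ETS = {ets:.2f}")
```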
2007 Beyond SBIR Phase II: Bringing Technology Edge to the Warfighter
2007-08-23
Systems Trade-Off Analysis and Optimization; Verification and Validation; On-Board Diagnostics and Self-Healing; Security and Anti-Tampering; Rapid...verification; Safety and reliability analysis of flight and mission critical systems; On-Board Diagnostics and Self-Healing; Model-based monitoring and...self-healing; On-board diagnostics and self-healing; Autonomic computing; Network intrusion detection and prevention; Anti-Tampering and Trust
Formal verification of AI software
NASA Technical Reports Server (NTRS)
Rushby, John; Whitehurst, R. Alan
1989-01-01
The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.
Reachability analysis of real-time systems using time Petri nets.
Wang, J; Deng, Y; Xu, G
2000-01-01
Time Petri nets (TPNs) are a popular Petri net model for specification and verification of real-time systems. A fundamental and most widely applied method for analyzing Petri nets is reachability analysis. The existing technique for reachability analysis of TPNs, however, is not suitable for timing property verification because one cannot derive end-to-end delay in task execution, an important issue for time-critical systems, from the reachability tree constructed using the technique. In this paper, we present a new reachability based analysis technique for TPNs for timing property analysis and verification that effectively addresses the problem. Our technique is based on a concept called clock-stamped state class (CS-class). With the reachability tree generated based on CS-classes, we can directly compute the end-to-end time delay in task execution. Moreover, a CS-class can be uniquely mapped to a traditional state class based on which the conventional reachability tree is constructed. Therefore, our CS-class-based analysis technique is more general than the existing technique. We show how to apply this technique to timing property verification of the TPN model of a command and control (C2) system.
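As a highly simplified, hedged illustration of reachability-tree construction (untimed, so without the clock information carried by state classes or CS-classes), the sketch below enumerates the reachable markings of a toy Petri net by breadth-first search.

```python
from collections import deque

def enabled(marking, pre):
    """Transitions whose input places all carry enough tokens."""
    return [t for t, inputs in pre.items()
            if all(marking[p] >= n for p, n in inputs.items())]

def fire(marking, t, pre, post):
    """Return the marking obtained by firing transition t."""
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

def reachable_markings(m0, pre, post):
    """Breadth-first construction of the reachability set (assumed finite)."""
    seen, queue = {tuple(sorted(m0.items()))}, deque([m0])
    while queue:
        m = queue.popleft()
        for t in enabled(m, pre):
            m2 = fire(m, t, pre, post)
            key = tuple(sorted(m2.items()))
            if key not in seen:
                seen.add(key)
                queue.append(m2)
    return seen

# Toy net: p1 --t1--> p2 --t2--> p3
pre = {"t1": {"p1": 1}, "t2": {"p2": 1}}
post = {"t1": {"p2": 1}, "t2": {"p3": 1}}
print(len(reachable_markings({"p1": 1, "p2": 0, "p3": 0}, pre, post)))  # 3 markings
```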
Verification of the Sentinel-4 focal plane subsystem
NASA Astrophysics Data System (ADS)
Williges, Christian; Uhlig, Mathias; Hilbert, Stefan; Rossmann, Hannes; Buchwinkler, Kevin; Babben, Steffen; Sebastian, Ilse; Hohn, Rüdiger; Reulke, Ralf
2017-09-01
The Sentinel-4 payload is a multi-spectral camera system, designed to monitor atmospheric conditions over Europe from a geostationary orbit. The German Aerospace Center, DLR Berlin, conducted the verification campaign of the Focal Plane Subsystem (FPS) during the second half of 2016. The FPS consists of two Focal Plane Assemblies (FPAs), two Front End Electronics (FEEs), one Front End Support Electronic (FSE) and one Instrument Control Unit (ICU). The FPAs are designed for two spectral ranges: UV-VIS (305 nm - 500 nm) and NIR (750 nm - 775 nm). In this publication, we will present in detail the set-up of the verification campaign of the Sentinel-4 Qualification Model (QM). This set-up will also be used for the upcoming Flight Model (FM) verification, planned for early 2018. The FPAs have to be operated at 215 K +/- 5 K, making it necessary to use a thermal vacuum chamber (TVC) to accomplish the tests. The test campaign consists mainly of radiometric tests. This publication focuses on the challenge to remotely illuminate both Sentinel-4 detectors as well as a reference detector homogeneously over a distance of approximately 1 m from outside the TVC. Selected test analyses and results will be presented.
Verification of immune response optimality through cybernetic modeling.
Batt, B C; Kompala, D S
1990-02-09
An immune response cascade that is T cell independent begins with the stimulation of virgin lymphocytes by antigen to differentiate into large lymphocytes. These immune cells can either replicate themselves or differentiate into plasma cells or memory cells. Plasma cells produce antibody at a specific rate up to two orders of magnitude greater than large lymphocytes. However, plasma cells have short life-spans and cannot replicate. Memory cells produce only surface antibody, but in the event of a subsequent infection by the same antigen, memory cells revert rapidly to large lymphocytes. Immunologic memory is maintained throughout the organism's lifetime. Many immunologists believe that the optimal response strategy calls for large lymphocytes to replicate first, then differentiate into plasma cells and when the antigen has been nearly eliminated, they form memory cells. A mathematical model incorporating the concept of cybernetics has been developed to study the optimality of the immune response. Derived from the matching law of microeconomics, cybernetic variables control the allocation of large lymphocytes to maximize the instantaneous antibody production rate at any time during the response in order to most efficiently inactivate the antigen. A mouse is selected as the model organism and bacteria as the replicating antigen. In addition to verifying the optimal switching strategy, results showing how the immune response is affected by antigen growth rate, initial antigen concentration, and the number of antibodies required to eliminate an antigen are included.
Analysis of SSME HPOTP rotordynamics subsynchronous whirl
NASA Technical Reports Server (NTRS)
1984-01-01
The causes and remedies of vibration and subsynchronous whirl problems encountered in the Shuttle Main Engine (SSME) turbomachinery are analyzed. Because the nonlinear and linearized models of the turbopumps play such an important role in the analysis process, the main emphasis is on the verification and improvement of these tools. The goal of this work has been to validate the equations of motion used in the models, including the assumptions upon which they are based. Verification of the SSME rotordynamics simulation and the developed enhancements is emphasized.
Bounded Parametric Model Checking for Elementary Net Systems
NASA Astrophysics Data System (ADS)
Knapik, Michał; Szreter, Maciej; Penczek, Wojciech
Bounded Model Checking (BMC) is an efficient verification method for reactive systems. BMC has been applied so far to verification of properties expressed in (timed) modal logics, but never to their parametric extensions. In this paper we show, for the first time, that BMC can be extended to PRTECTL - a parametric extension of the existential version of CTL. To this aim we define a bounded semantics and a translation from PRTECTL to SAT. The implementation of the algorithm for Elementary Net Systems is presented, together with some experimental results.
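A minimal sketch of plain (non-parametric) BMC in the spirit described here: the transition relation of a toy 2-bit counter is unrolled k steps and a reachability query is handed to a solver, using z3 rather than the paper's dedicated translation for Elementary Net Systems.

```python
from z3 import Bools, Solver, And, Or, Not, sat

def bmc_reaches_bad(k):
    """Bounded model checking of a 2-bit counter: can state (1,1) be reached
    within k steps from (0,0)?  The transition increments the counter mod 4."""
    s = Solver()
    b1 = Bools(" ".join(f"b1_{i}" for i in range(k + 1)))  # high bit at each step
    b0 = Bools(" ".join(f"b0_{i}" for i in range(k + 1)))  # low bit at each step
    s.add(Not(b1[0]), Not(b0[0]))                          # initial state 00
    for i in range(k):                                     # unrolled transition relation
        s.add(b0[i + 1] == Not(b0[i]))                     # low bit flips every step
        s.add(b1[i + 1] == Or(And(b1[i], Not(b0[i])),      # high bit = b1 xor b0
                              And(Not(b1[i]), b0[i])))
    s.add(Or(*[And(b1[i], b0[i]) for i in range(k + 1)]))  # "bad" state 11 at some step
    return s.check() == sat

print(bmc_reaches_bad(2))  # False: 11 is not reachable within 2 steps
print(bmc_reaches_bad(3))  # True: 00 -> 01 -> 10 -> 11
```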
An ontology based trust verification of software license agreement
NASA Astrophysics Data System (ADS)
Lu, Wenhuan; Li, Xiaoqing; Gan, Zengqin; Wei, Jianguo
2017-08-01
When installing or downloading software, users are presented with a lengthy document stating rights and obligations, which many lack the patience to read or understand. This can make users distrust the software. In this paper, we propose an ontology-based verification approach for software license agreements. First, an ontology model for the domain of software license agreements is proposed. The domain ontology is constructed using the proposed methodology according to copyright laws and 30 software license agreements. The license ontology can act as part of a generalized copyright-law knowledge model and also supports visualization of software licenses. Based on this ontology, a software-license-oriented text summarization approach is proposed, whose performance shows improved accuracy in summarizing software licenses. Based on the summarization, the underlying purpose of the software license can be explicitly explored for trust verification.
Enhanced Verification Test Suite for Physics Simulation Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, J R; Brock, J S; Brandon, S T
2008-10-10
This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Madden, Michael M.; Shelton, Robert; Jackson, A. A.; Castro, Manuel P.; Noble, Deleena M.; Zimmerman, Curtis J.; Shidner, Jeremy D.; White, Joseph P.; Dutta, Doumyo;
2015-01-01
This follow-on paper describes the principal methods of implementing, and documents the results of exercising, a set of six-degree-of-freedom rigid-body equations of motion and planetary geodetic, gravitation and atmospheric models for simple vehicles in a variety of endo- and exo-atmospheric conditions with various NASA, and one popular open-source, engineering simulation tools. This effort is intended to provide an additional means of verification of flight simulations. The models used in this comparison, as well as the resulting time-history trajectory data, are available electronically for persons and organizations wishing to compare their flight simulation implementations of the same models.
Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.
Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva
2011-06-01
The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model with the highest average correlation value achieved correlations of 0.991, 0.843 and 0.978 for the training, verification and test series. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation, physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although these included other input variables, their performance was not better than that of the selected best model.
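A GRNN is essentially Nadaraya-Watson kernel regression; the hedged sketch below shows that form on synthetic predictors standing in for the water-quality variables, with the smoothing parameter left as something one would tune on the verification series.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """General regression neural network (Nadaraya-Watson form): the prediction
    is a Gaussian-kernel weighted average of the training targets."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sum(w * y_train) / np.sum(w)

rng = np.random.default_rng(4)
# Hypothetical standardized predictors (stand-ins for ammonia, phosphates, DO,
# temperature, pH and evaporation) and a synthetic target
X = rng.standard_normal((60, 6))
y = 2.0 * X[:, 3] - 1.0 * X[:, 0] + 0.3 * rng.standard_normal(60)
print(f"forecast = {grnn_predict(X, y, X[0], sigma=0.8):.2f}")
```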
Bauer, Julia; Chen, Wenjing; Nischwitz, Sebastian; Liebl, Jakob; Rieken, Stefan; Welzel, Thomas; Debus, Juergen; Parodi, Katia
2018-04-24
A reliable Monte Carlo prediction of proton-induced brain tissue activation used for comparison to particle therapy positron-emission-tomography (PT-PET) measurements is crucial for in vivo treatment verification. Major limitations of current approaches to overcome include the CT-based patient model and the description of activity washout due to tissue perfusion. Two approaches were studied to improve the activity prediction for brain irradiation: (i) a refined patient model using tissue classification based on MR information and (ii) a PT-PET data-driven refinement of washout model parameters. Improvements of the activity predictions compared to post-treatment PT-PET measurements were assessed in terms of activity profile similarity for six patients treated with a single or two almost parallel fields delivered by active proton beam scanning. The refined patient model yields a generally higher similarity for most of the patients, except in highly pathological areas leading to tissue misclassification. Using washout model parameters deduced from clinical patient data could considerably improve the activity profile similarity for all patients. Current methods used to predict proton-induced brain tissue activation can be improved with MR-based tissue classification and data-driven washout parameters, thus providing a more reliable basis for PT-PET verification. Copyright © 2018 Elsevier B.V. All rights reserved.
Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.
Li, Haoxiang; Hua, Gang
2018-04-01
Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP-model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Face in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
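A hedged sketch of the core PEP construction as described: fit a spherical-covariance GMM to location-augmented descriptors and, for one image, concatenate the maximum-responsibility patch per component. Descriptor values, dimensions, and component count are arbitrary stand-ins, and the appearance/location weighting is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
n_patches, descr_dim = 5000, 128
descriptors = rng.standard_normal((n_patches, descr_dim))   # stand-ins for SIFT/LBP descriptors
locations = rng.uniform(0.0, 1.0, (n_patches, 2))           # normalized (x, y) patch centers
features = np.hstack([descriptors, locations])               # location-augmented descriptors

# Spherical components balance appearance and location terms, each defining a "part"
gmm = GaussianMixture(n_components=64, covariance_type="spherical",
                      max_iter=50, random_state=0).fit(features)

# For one face image, pick the most likely patch per component (max responsibility)
resp = gmm.predict_proba(features[:200])      # patches of a single hypothetical image
part_patch_idx = resp.argmax(axis=0)          # one patch index per part
pep_representation = features[part_patch_idx].ravel()
print(pep_representation.shape)               # concatenated part descriptors
```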
Behavioral biometrics for verification and recognition of malicious software agents
NASA Astrophysics Data System (ADS)
Yampolskiy, Roman V.; Govindaraju, Venu
2008-04-01
Homeland security requires technologies capable of positive and reliable identification of humans for law enforcement, government, and commercial applications. As artificially intelligent agents improve in their abilities and become a part of our everyday life, the possibility of using such programs for undermining homeland security increases. Virtual assistants, shopping bots, and game playing programs are used daily by millions of people. We propose applying statistical behavior modeling techniques developed by us for recognition of humans to the identification and verification of intelligent and potentially malicious software agents. Our experimental results demonstrate feasibility of such methods for both artificial agent verification and even for recognition purposes.
Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets
NASA Technical Reports Server (NTRS)
Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.
1978-01-01
A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.
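As a toy analogue of the parameter-estimation idea (not the paper's first-order state-space formulation with automated matrix generation), the sketch below adjusts the parameters of a damped-sinusoid response model to minimize measured-minus-predicted differences with a nonlinear least-squares solver.

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_response(params, t):
    """Hypothetical single-mode model: damped sinusoid with unknown
    damping ratio, natural frequency and amplitude."""
    zeta, wn, a = params
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    return a * np.exp(-zeta * wn * t) * np.sin(wd * t)

rng = np.random.default_rng(6)
t = np.linspace(0.0, 2.0, 400)
measured = predicted_response([0.05, 12.0, 1.0], t) + 0.02 * rng.standard_normal(t.size)

# Adjust model parameters to minimize measured-minus-predicted response differences
fit = least_squares(lambda p: predicted_response(p, t) - measured,
                    x0=[0.1, 10.0, 0.8], bounds=([0.0, 1.0, 0.0], [1.0, 50.0, 10.0]))
print(fit.x)   # recovered [zeta, wn, a]
```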
NASA Astrophysics Data System (ADS)
Fischer, Bennet; Hopf, Barbara; Lindner, Markus; Koch, Alexander W.; Roths, Johannes
2017-04-01
A 3D FEM model of an FBG in a PANDA fiber with an extended fiber length of 25.4 mm is presented. Simulating long fiber lengths with limited computer power is achieved by using an iterative solver and by optimizing the FEM mesh. For verification purposes, the model is adapted to a configuration with transversal loads on the fiber. The 3D FEM model results correspond with experimental data and with the results of an additional 2D FEM plain strain model. In further studies, this 3D model shall be applied to more sophisticated situations, for example to study the temperature dependence of surface-glued or embedded FBGs in PANDA fibers that are used for strain-temperature decoupling.
Program Model Checking: A Practitioner's Guide
NASA Technical Reports Server (NTRS)
Pressburger, Thomas T.; Mansouri-Samani, Masoud; Mehlitz, Peter C.; Pasareanu, Corina S.; Markosian, Lawrence Z.; Penix, John J.; Brat, Guillaume P.; Visser, Willem C.
2008-01-01
Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook will not discuss any specific tool in great detail, but we provide references for specific tools.
Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement
NASA Astrophysics Data System (ADS)
Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.
2015-10-01
A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws—referred to as ‘known components’) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as ‘parametrically-known’ component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as ‘exactly-known’ component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the ‘acceptance window’ of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical devices offers a novel method for intraoperative QA. The method provides a near-real-time independent check against pedicle breach, facilitating revision within the same procedure if necessary and providing more rigorous verification of the surgical product.
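A schematic, hedged sketch of the registration loop: optimize pose parameters of a known-component model to maximize a similarity measure against the measured radiograph. The projector here is a toy placeholder, and a generic Nelder-Mead optimizer stands in for the CMA-ES strategy used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_similarity(pose, fixed_edges, project_component):
    """Negative normalized correlation between measured radiograph edges and the
    projected edges of a known-component model at the candidate pose (6 DOF)."""
    simulated_edges = project_component(pose)                  # forward projection (stand-in)
    a = fixed_edges - fixed_edges.mean()
    b = simulated_edges - simulated_edges.mean()
    return -np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Toy stand-ins: a smooth "projector" that only depends on two pose parameters,
# and a noisy measured edge image generated at a hypothetical true pose
rng = np.random.default_rng(7)
true_pose = np.array([5.0, -3.0, 0.0, 0.02, -0.01, 0.0])
def project_component(pose):
    x = np.linspace(-1, 1, 64)
    return np.outer(np.sin(3 * x + 0.1 * pose[0]), np.cos(2 * x + pose[3]))
fixed_edges = project_component(true_pose) + 0.01 * rng.standard_normal((64, 64))

res = minimize(neg_similarity, x0=np.zeros(6),
               args=(fixed_edges, project_component), method="Nelder-Mead")
print(round(res.x[0], 2), round(res.x[3], 3))  # roughly recovers the two active parameters (toy example)
```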
NASA Astrophysics Data System (ADS)
Kerschbaum, M.; Hopmann, C.
2016-06-01
The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer aided engineering methods. This paper presents an original approach for an energy based continuum damage model which accounts for stress-/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution and consideration of load reversal effects. The physically based modelling approach enables experimental determination of all parameters on ply level to avoid expensive inverse analysis procedures. The modelling strategy, implementation and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with variation of the impact speed. The simulation results show that the presented approach enables a good representation of the force-/displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.
A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2015-01-01
This paper presents a strategy for solving the Byzantine general problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults with various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault tolerant algorithm to solve the general case of the problem. A protocol (algorithm) is also presented that tolerates symmetric faults, provided that there are more good nodes than faulty ones. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that the interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock nor centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of a proposed protocol is also presented. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.
NASA Technical Reports Server (NTRS)
Ghatas, Rania W.; Jack, Devin P.; Tsakpinis, Dimitrios; Vincent, Michael J.; Sturdy, James L.; Munoz, Cesar A.; Hoffler, Keith D.; Dutle, Aaron M.; Myer, Robert R.; Dehaven, Anna M.;
2017-01-01
As Unmanned Aircraft Systems (UAS) make their way to mainstream aviation operations within the National Airspace System (NAS), research efforts are underway to develop a safe and effective environment for their integration into the NAS. Detect and Avoid (DAA) systems are required to account for the lack of "eyes in the sky" due to having no human on-board the aircraft. The current NAS relies on pilot's vigilance and judgement to remain Well Clear (CFR 14 91.113) of other aircraft. RTCA SC-228 has defined DAA Well Clear (DAAWC) to provide a quantified Well Clear volume to allow systems to be designed and measured against. Extended research efforts have been conducted to understand and quantify system requirements needed to support a UAS pilot's ability to remain well clear of other aircraft. The efforts have included developing and testing sensor, algorithm, alerting, and display requirements. More recently, sensor uncertainty and uncertainty mitigation strategies have been evaluated. This paper discusses results and lessons learned from an End-to-End Verification and Validation (E2-V2) simulation study of a DAA system representative of RTCA SC-228's proposed Phase I DAA Minimum Operational Performance Standards (MOPS). NASA Langley Research Center (LaRC) was called upon to develop a system that evaluates a specific set of encounters, in a variety of geometries, with end-to-end DAA functionality including the use of sensor and tracker models, a sensor uncertainty mitigation model, DAA algorithmic guidance in both vertical and horizontal maneuvering, and a pilot model which maneuvers the ownship aircraft to remain well clear from intruder aircraft, having received collective input from the previous modules of the system. LaRC developed a functioning batch simulation and added a sensor/tracker model from the Federal Aviation Administration (FAA) William J. Hughes Technical Center, an in-house developed sensor uncertainty mitigation strategy, and implemented a pilot model similar to one from the Massachusetts Institute of Technology's Lincoln Laboratory (MIT/LL). The resulting simulation provides the following key parameters, among others, to evaluate the effectiveness of the MOPS DAA system: severity of loss of well clear (SLoWC), alert scoring, and number of increasing alerts (alert jitter). The technique, results, and lessons learned from a detailed examination of DAA system performance over specific test vectors and encounter cases during the simulation experiment will be presented in this paper.
Feedback Seeking in Early Adolescence: Self-Enhancement or Self-Verification?
Rosen, Lisa H; Principe, Connor P; Langlois, Judith H
2013-02-13
The authors examined whether early adolescents (N = 90) solicit self-enhancing feedback (i.e., positive feedback) or self-verifying feedback (i.e., feedback congruent with self-views, even when these views are negative). Sixth, seventh, and eighth graders first completed a self-perception measure and then selected whether to receive positive or negative feedback from an unknown peer in different domains of self. Results were consistent with self-verification theory; adolescents who perceived themselves as having both strengths and weaknesses were more likely to seek negative feedback regarding a self-perceived weakness compared to a self-perceived strength. The authors found similar support for self-verification processes when they considered the entire sample regardless of perceived strengths and weaknesses; hierarchical linear modeling (HLM) examined the predictive power of ratings of self-perceived ability, certainty, and importance on feedback seeking for all participants and provided additional evidence of self-verification strivings in adolescence.
Exploring system interconnection architectures with VIPACES: from direct connections to NOCs
NASA Astrophysics Data System (ADS)
Sánchez-Peña, Armando; Carballo, Pedro P.; Núñez, Antonio
2007-05-01
This paper presents a simple environment for the verification of AMBA 3 AXI systems in Verification IP (VIP) production, called VIPACES (Verification Interface Primitives for the development of AXI Compliant Elements and Systems). These primitives are presented as an uncompiled library written in SystemC in which interfaces are the core of the library. The definition of interfaces instead of generic modules lets the user construct custom modules, reducing the resources spent during the verification phase as well as easily adapting modules to the AMBA 3 AXI protocol. This is the main design point of the VIPACES library. The paper focuses on comparing and contrasting the main interconnection schemes for AMBA 3 AXI as modeled by VIPACES. For assessing these results we propose a validation scenario with a particular architecture belonging to the domain of MPEG4 video decoding, which is composed of an AXI bus connecting an IDCT and other processing resources.
The Impact of Advanced Greenhouse Gas Measurement Science on Policy Goals and Research Strategies
NASA Astrophysics Data System (ADS)
Abrahams, L.; Clavin, C.; McKittrick, A.
2016-12-01
In support of the Paris agreement, accurate characterizations of U.S. greenhouse gas (GHG) emissions estimates have been an area of increased scientific focus. Over the last several years, the scientific community has placed significant emphasis on understanding, quantifying, and reconciling measurement and modeling methods that characterize methane emissions from petroleum and natural gas sources. This work has prompted national policy discussions and led to the improvement of regional and national methane emissions estimates. Research campaigns focusing on reconciling atmospheric measurements ("top-down") and process-based emissions estimates ("bottom-up") have sought to identify where measurement technology advances could inform policy objectives. A clear next step is development and deployment of advanced detection capabilities that could aid U.S. emissions mitigation and verification goals. The breadth of policy-relevant outcomes associated with advances in GHG measurement science is demonstrated by recent improvements in the petroleum and natural gas sector emission estimates in the EPA Greenhouse Gas Inventory, ambitious efforts to apply inverse modeling results to inform or validate the national GHG inventory, and outcomes from federal GHG measurement science technology development programs. In this work, we explore the variety of policy-relevant outcomes impacted by advances in GHG measurement science, with an emphasis on improving GHG inventory estimates, identifying emissions mitigation strategies, and informing technology development requirements.
Numerical Simulation of the Fluid-Structure Interaction of a Surface Effect Ship Bow Seal
NASA Astrophysics Data System (ADS)
Bloxom, Andrew L.
Numerical simulations of fluid-structure interaction (FSI) problems were performed in an effort to verify and validate a commercially available FSI tool. This tool uses an iterative partitioned coupling scheme between CD-adapco's STAR-CCM+ finite volume fluid solver and Simulia's Abaqus finite element structural solver to simulate the FSI response of a system. Preliminary verification and validation (V&V) work was carried out to understand the numerical behavior of the codes individually and together as an FSI tool. Verification and validation work that was completed included code order verification of the respective fluid and structural solvers with Couette-Poiseuille flow and Euler-Bernoulli beam theory. These results confirmed the 2nd-order accuracy of the spatial discretizations used. Following that, a mixture of solution verifications and model calibrations was performed with the inclusion of the physics models implemented in the solution of the FSI problems. Solution verifications were completed for fluid and structural stand-alone models as well as for the coupled FSI solutions. These results re-confirmed the spatial order of accuracy, but for more complex flows and physics models, as well as the order of accuracy of the temporal discretizations. In lieu of a good material definition, model calibration is performed to reproduce the experimental results. This work used model calibration for both instances of hyperelastic materials that were presented in the literature as validation cases, because these materials were defined only as linear elastic. Calibrated, three-dimensional models of the bow seal on the University of Michigan bow seal test platform showed the ability to reproduce the experimental results qualitatively through averaging of the forces and seal displacements. These simulations represent the only current 3D results for this case. One significant result of this study is the ability to visualize the flow around the seal and to directly measure the seal resistances at varying cushion pressures, seal immersions, forward speeds, and different seal materials. SES design analysis could greatly benefit from the inclusion of flexible seals in simulations, and this work is a positive step in that direction. In future work, the inclusion of more complex seal geometries and contact will further enhance the capability of this tool.
Zhen, Shuai; Li, Xu
2017-01-01
Oncogenic human papillomaviruses (HPVs) cause different types of cancer, especially cervical cancer. HPV-associated carcinogenesis provides a classical model system for clustered regularly interspaced short palindromic repeats (CRISPR/Cas9) based cancer therapies, since the viral oncogenes E6 and E7 are exclusively expressed in cancerous cells. Sequence-specific gene knockdown/knockout using CRISPR/Cas9 shows promise as a novel therapeutic approach for the treatment of a variety of diseases that currently lack effective treatments. However, CRISPR/Cas9-based targeting therapy requires further validation of its efficacy in vitro and in vivo to eliminate potential off-target effects, and necessitates verification of the delivery vehicles and the combinatory use of conventional therapies with CRISPR/Cas9 to ensure feasibility and safety. In this review we discuss the potential of combining CRISPR/Cas9 with other treatment options as therapies for oncogenic HPV-associated carcinogenesis, and present our assessment of the promising path to the development of CRISPR/Cas9 therapeutic strategies for clinical settings. © 2017 The Author(s). Published by S. Karger AG, Basel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B
Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse's Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher performing algorithm was selected as the primary contouring method, the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2cm. Using a logit model the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5cm (brain), 0.75cm (mandible, cord), 1cm (brainstem, cochlea), or 1.25cm (parotid), with sensitivity and specificity greater than 0.95. If sensitivity and specificity constraints are reduced to 0.9, detectable shifts of mandible and brainstem were reduced by 0.25cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
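A hedged sketch of the verification idea: fit a logistic model on mean distance to agreement and true positive rate to flag contours that have been shifted. All numbers below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 400
# Hypothetical covariates per contour comparison: mean distance to agreement (mm)
# and true positive (overlap) rate between primary and verification contours
mda = np.r_[rng.normal(1.5, 0.5, n // 2), rng.normal(6.0, 1.5, n // 2)]
tpr = np.r_[rng.normal(0.90, 0.04, n // 2), rng.normal(0.55, 0.10, n // 2)]
shifted = np.r_[np.zeros(n // 2, int), np.ones(n // 2, int)]   # 1 = contour deliberately shifted

X = np.c_[mda, tpr]
model = LogisticRegression().fit(X, shifted)

# Flag a new auto-contour whose agreement metrics look suspicious
print(model.predict_proba([[5.0, 0.60]])[0, 1])   # estimated probability that the contour is off
```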
Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs
Bass, Ellen J.
2011-01-01
Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930
Dynamic Modeling of the SMAP Rotating Flexible Antenna
NASA Technical Reports Server (NTRS)
Nayeri, Reza D.
2012-01-01
Dynamic model development in ADAMS for the SMAP project is explained. The main objectives of the dynamic models are pointing-error assessment and verification of the control/stability margin requirements.
Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results
NASA Astrophysics Data System (ADS)
Lin, W.; Liu, Y.; Song, H.; Endo, S.
2011-12-01
Parametric representations of cloud/precipitation processes must still be adopted in climate simulations with increasingly high spatial resolution or with emerging adaptive-mesh frameworks, and it is becoming ever more critical that such parameterizations be scale aware. Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy to carry out long-term 3-D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. The factors considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics and the growth of model errors. Hydrometeor advection, which may play a significant role in hydrological processes within the observational domain but is often neglected, and the limitations imposed by the constraint of domain-wide uniform forcing in conventional cloud-system-resolving model simulations are at least partly accounted for in our approach. Conventional and probabilistic verification approaches are employed first for selected cases to optimize the model's capability of faithfully reproducing the observed mean and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.
NASA Astrophysics Data System (ADS)
Dang, Haizheng; Zhao, Yibo
2016-09-01
This paper presents the CFD modeling and experimental verification of a single-stage inertance tube coaxial Stirling-type pulse tube cryocooler operating at 30-35 K using mixed stainless steel mesh regenerator matrices without either double-inlet or multi-bypass. A two-dimensional axis-symmetric CFD model with the thermal non-equilibrium mode is developed to simulate the internal process, and the underlying mechanism of significantly reducing the regenerator losses with mixed matrices is discussed in detail based on the given six cases. The modeling also indicates that the combination of the given different mesh segments can be optimized to achieve the highest cooling efficiency or the largest exergy ratio, and verification experiments are then conducted in which satisfactory agreement between simulated and tested results is observed. The experiments achieve a no-load temperature of 27.2 K and a cooling power of 0.78 W at 35 K, or 0.29 W at 30 K, with an input electric power of 220 W and a reject temperature of 300 K.
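For orientation, assuming the standard definition of a cryocooler's exergy (second-law) efficiency with heat rejected at T_0, the figures quoted above correspond roughly to:

```latex
\eta_{\mathrm{ex}} \;=\; \frac{\dot{Q}_c\left(\frac{T_0}{T_c}-1\right)}{\dot{W}_{\mathrm{in}}}
\;=\; \frac{0.78\,\mathrm{W}\left(\frac{300\,\mathrm{K}}{35\,\mathrm{K}}-1\right)}{220\,\mathrm{W}}
\;\approx\; 2.7\%.
```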
Exploring Formalized Elite Coach Mentoring Programmes in the UK: 'We've Had to Play the Game'
ERIC Educational Resources Information Center
Sawiuk, Rebecca; Taylor, William G.; Groom, Ryan
2018-01-01
Formalized mentoring programmes have been implemented increasingly by UK sporting institutions as a central coach development tool, yet claims supporting formal mentoring as an effective learning strategy are often speculative, scarce, ill-defined and accepted without verification. The aim of this study, therefore, was to explore some of the…
Frame synchronization methods based on channel symbol measurements
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.
1989-01-01
The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.
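The contrast between the two classes of sync statistics can be illustrated with a toy Python sketch (BPSK symbols, additive Gaussian noise, no convolutional code; parameters are illustrative, not the DSN configuration):

```python
# Illustrative sketch (not the DSN procedure): locate a frame marker by
# correlating received soft channel symbols against the known marker pattern,
# as opposed to searching the hard-decided bit stream.
import numpy as np

rng = np.random.default_rng(1)
marker = rng.choice([-1.0, 1.0], size=32)          # known sync pattern (BPSK symbols)
frame  = rng.choice([-1.0, 1.0], size=1024)        # random frame data
frame[200:232] = marker                            # embed marker at offset 200
rx = frame + rng.normal(0.0, 1.0, frame.size)      # noisy received symbols

# Soft-symbol statistic: sliding correlation of raw symbols with the marker.
soft_stat = np.array([rx[i:i + 32] @ marker for i in range(rx.size - 32)])

# Hard-decision statistic: correlate sign-sliced symbols instead.
hard = np.sign(rx)
hard_stat = np.array([hard[i:i + 32] @ marker for i in range(rx.size - 32)])

print("soft-symbol estimate of marker offset:", int(np.argmax(soft_stat)))
print("hard-decision estimate of marker offset:", int(np.argmax(hard_stat)))
```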
Recruitment Strategies of Methamphetamine-Using Men Who Have Sex with Men into an Online Survey
Wilkerson, J. Michael; Shenk, Jared E.; Grey, Jeremy A.; Simon Rosser, B. R.; Noor, Syed W.
2014-01-01
Recruiting hidden populations into online research remains challenging. In this manuscript, we report lessons learned from our efforts to recruit methamphetamine-using men who have sex with men. Between July and October 2012, we implemented a four-phase recruitment strategy to enroll a total of 343 methamphetamine-using MSM into an online survey about recent substance use, sexual behavior, and various psychosocial measures. The four phases were implemented sequentially. During phase one, we placed advertisements on mobile applications, and during phase two, we placed advertisements on traditional websites formatted for browsers. During phase three, we used e-mail to initiate snowball recruitment, and during phase four, we used social media for snowball recruitment. Advertisements on mobile devices and websites formatted for browsers proved to be expensive options and resulted in few eligible participants. Our attempts to initiate a snowball through e-mail also proved unsuccessful. The majority (n=320) of observations in our final dataset came from our use of social media. However, participant fraud was a concern, requiring us to implement a strong participant verification protocol. For maximum recruitment and cost-effectiveness, researchers should use social media for recruitment provided they employ strong participant verification protocols. PMID:25642143
NASA Astrophysics Data System (ADS)
Oldenburg, C. M.; Lewicki, J. L.; Zhang, Y.
2003-12-01
The injection of CO2 into deep geologic formations for the purpose of carbon sequestration entails risk that CO2 will leak upward from the target formation and ultimately seep out of the ground surface. We have developed a coupled subsurface and atmospheric surface layer modeling capability based on TOUGH2 to simulate CO2 leakage and seepage. Simulation results for representative subsurface and surface layer conditions are used to specify the requirements of potential near-surface monitoring strategies relevant to both health, safety, and environmental risk assessment as well as sequestration verification. The coupled model makes use of the standard multicomponent and multiphase framework of TOUGH2 and extends the model domain to include an atmospheric surface layer. In the atmospheric surface layer, we assume a logarithmic velocity profile for the time-averaged wind and make use of Pasquill-Gifford and Smagorinski dispersion coefficients to model surface layer dispersion. Results for the unsaturated zone and surface layer show that the vadose zone pore space can become filled with pure CO2 even for small leakage fluxes, but that CO2 concentrations above the ground surface are very low due to the strong effects of dispersion caused by surface winds. Ecological processes such as plant photosynthesis and root respiration, as well as biodegradation in soils, strongly affect near-surface CO2 concentrations and fluxes. The challenge for geologic carbon sequestration verification is to discern the leakage and seepage signal from the ecological signal. Our simulations point to the importance of subsurface monitoring and the need for geochemical (e.g., isotopic) analyses to distinguish leaking injected fossil CO2 from natural ecological CO2. This work was supported by the Office of Science, U.S. Department of Energy under contract No. DE-AC03-76SF00098.
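The time-averaged wind profile mentioned above is presumably the standard neutral-stability logarithmic law, with $u_*$ the friction velocity, $\kappa \approx 0.4$ the von Kármán constant, and $z_0$ the surface roughness length:

```latex
\bar{u}(z) \;=\; \frac{u_*}{\kappa}\,\ln\!\left(\frac{z}{z_0}\right).
```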
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran
2014-04-23
The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly-resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.
A formally verified algorithm for interactive consistency under a hybrid fault model
NASA Technical Reports Server (NTRS)
Lincoln, Patrick; Rushby, John
1993-01-01
Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguished three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
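The resilience condition quoted in the abstract can be checked directly; the following small Python sketch encodes only that bound (not the corrected hybrid-fault algorithm itself):

```python
# Sketch of the resilience condition stated in the abstract: the algorithm
# tolerates a asymmetric, s symmetric, and b benign faults in m+1 rounds
# provided n > 2a + 2s + b + m and m >= a.
def tolerates(n: int, m: int, a: int, s: int, b: int) -> bool:
    """True if n channels running m+1 rounds withstand the given fault mix."""
    return n > 2 * a + 2 * s + b + m and m >= a

# Example: 6 channels, two rounds of exchange (m = 1).
print(tolerates(n=6, m=1, a=1, s=1, b=0))   # True: 6 > 2 + 2 + 0 + 1
print(tolerates(n=4, m=1, a=1, s=1, b=0))   # False: too few channels for this fault mix
```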
Constitutive modeling of superalloy single crystals with verification testing
NASA Technical Reports Server (NTRS)
Jordan, Eric; Walker, Kevin P.
1985-01-01
The goal is the development of constitutive equations to describe the elevated-temperature stress-strain behavior of single crystal turbine blade alloys. The program includes both the development of a suitable model and verification of the model through elevated-temperature torsion testing. A constitutive model is derived from postulated constitutive behavior on individual crystallographic slip systems. The behavior of the entire single crystal is then arrived at by summing up the slip on all the operative crystallographic slip systems. This type of formulation has a number of important advantages, including the prediction of orientation dependence and the ability to directly represent the constitutive behavior in terms which metallurgists use in describing the micromechanisms. Here, the model is briefly described, followed by the experimental set-up and some experimental findings to date.
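The summation over slip systems referred to above is typically expressed through the standard crystal-plasticity kinematics (a generic form, not necessarily the authors' exact formulation), where $\dot{\gamma}^{\alpha}$ is the shear rate on slip system $\alpha$ with slip direction $\mathbf{m}^{\alpha}$ and slip-plane normal $\mathbf{n}^{\alpha}$, driven by the resolved shear stress $\tau^{\alpha}$:

```latex
\dot{\boldsymbol{\varepsilon}}^{\,\mathrm{in}}
  = \sum_{\alpha} \dot{\gamma}^{\alpha}\,
    \tfrac{1}{2}\!\left(\mathbf{m}^{\alpha}\otimes\mathbf{n}^{\alpha}
                        + \mathbf{n}^{\alpha}\otimes\mathbf{m}^{\alpha}\right),
\qquad
\tau^{\alpha} = \boldsymbol{\sigma} : \left(\mathbf{m}^{\alpha}\otimes\mathbf{n}^{\alpha}\right).
```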
SLS Navigation Model-Based Design Approach
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e., the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable versus a requirement which is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design is described from the perspective of the SLS Navigation Team. The formats of the models and the requirements are described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned associated with the implementation of the Model-based Design approach and process from infancy to verification and certification are discussed.
Yin, Hong-Rui; Zhang, Lei; Xie, Li-Qi; Huang, Li-Yong; Xu, Ye; Cai, San-Jun; Yang, Peng-Yuan; Lu, Hao-Jie
2013-09-06
Novel biomarker verification assays are urgently required to improve the efficiency of biomarker development. Benefitting from lower development costs, multiple reaction monitoring (MRM) has been used for biomarker verification as an alternative to immunoassay. However, in general MRM analysis, only one sample can be quantified in a single experiment, which restricts its application. Here, a Hyperplex-MRM quantification approach, which combined mTRAQ for absolute quantification and iTRAQ for relative quantification, was developed to increase the throughput of biomarker verification. In this strategy, equal amounts of internal standard peptides were labeled with mTRAQ reagents Δ0 and Δ8, respectively, as double references, while 4-plex iTRAQ reagents were used to label four different samples as an alternative to mTRAQ Δ4. From the MRM trace and MS/MS spectrum, total amounts and relative ratios of target proteins/peptides of four samples could be acquired simultaneously. Accordingly, absolute amounts of target proteins/peptides in four different samples could be achieved in a single run. In addition, double references were used to increase the reliability of the quantification results. Using this approach, three biomarker candidates, adenosylhomocysteinase (AHCY), cathepsin D (CTSD), and lysozyme C (LYZ), were successfully quantified in colorectal cancer (CRC) tissue specimens of different stages with high accuracy, sensitivity, and reproducibility. To summarize, we demonstrated a promising quantification method for high-throughput verification of biomarker candidates.
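One plausible reading of the Hyperplex-MRM arithmetic is sketched below in Python (hypothetical numbers only, not the published data): the MRM trace yields the pooled amount of the target peptide relative to the known mTRAQ references, and the iTRAQ reporter intensities apportion that total among the four samples:

```python
# Illustrative arithmetic only (hypothetical values): pooled amount from the
# MRM trace relative to the mTRAQ references, apportioned by iTRAQ reporters.
std_amount_fmol = 100.0            # known amount of each mTRAQ-labeled reference
pooled_over_std = 2.4              # pooled-sample / reference ratio from the MRM trace
pooled_total = pooled_over_std * std_amount_fmol

itraq_reporters = {"114": 1.0, "115": 1.8, "116": 0.6, "117": 1.2}  # MS/MS reporter intensities
total_intensity = sum(itraq_reporters.values())

absolute = {ch: pooled_total * inten / total_intensity
            for ch, inten in itraq_reporters.items()}
for ch, amt in absolute.items():
    print(f"sample {ch}: {amt:.1f} fmol")
```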
A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables
NASA Astrophysics Data System (ADS)
Huang, Laura X.; Isaac, George A.; Sheng, Grant
2014-01-01
This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short-range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid spacing) were selected as background gridded data for generating the integrated nowcasts. Seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. Verification of forecasts from INTW and the NWP models at 15 sites showed that the integrated weighted model produces more accurate forecasts for the seven selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
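For reference, the multi-category Heidke skill score used as the verification metric has the standard contingency-table form; the short Python sketch below (made-up counts) computes it:

```python
# Sketch of the multi-category Heidke skill score (standard definition;
# the contingency table below is made up, not the SNOW-V10 data).
import numpy as np

def heidke_skill_score(table: np.ndarray) -> float:
    """table[i, j] = count of cases forecast in category i and observed in category j."""
    p = table / table.sum()
    accuracy = np.trace(p)                            # proportion correct
    expected = float(p.sum(axis=1) @ p.sum(axis=0))   # chance agreement from the marginals
    return (accuracy - expected) / (1.0 - expected)

contingency = np.array([[30,  5,  2],
                        [ 6, 40,  7],
                        [ 1,  8, 25]])
print(f"HSS = {heidke_skill_score(contingency):.3f}")
```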
Presnyakova, Darya; Archer, Will; Braun, David R.; Flear, Wesley
2015-01-01
This study investigates morphological differences between flakes produced via “core and flake” technologies and those resulting from bifacial shaping strategies. We investigate systematic variation between two technological groups of flakes using experimentally produced assemblages, and then apply the experimental model to the Cutting 10 Mid -Pleistocene archaeological collection from Elandsfontein, South Africa. We argue that a specific set of independent variables—and their interactions—including external platform angle, platform depth, measures of thickness variance and flake curvature should distinguish between these two technological groups. The role of these variables in technological group separation was further investigated using the Generalized Linear Model as well as Linear Discriminant Analysis. The Discriminant model was used to classify archaeological flakes from the Cutting 10 locality in terms of their probability of association, within either experimentally developed technological group. The results indicate that the selected independent variables play a central role in separating core and flake from bifacial technologies. Thickness evenness and curvature had the greatest effect sizes in both the Generalized Linear and Discriminant models. Interestingly the interaction between thickness evenness and platform depth was significant and played an important role in influencing technological group membership. The identified interaction emphasizes the complexity in attempting to distinguish flake production strategies based on flake morphological attributes. The results of the discriminant function analysis demonstrate that the majority of flakes at the Cutting 10 locality were not associated with the production of the numerous Large Cutting Tools found at the site, which corresponds with previous suggestions regarding technological behaviors reflected in this assemblage. PMID:26111251
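A minimal sketch of the discriminant-analysis step (synthetic measurements, not the experimental assemblage) might look like the following, with the thickness-evenness × platform-depth interaction included explicitly as a feature:

```python
# Minimal sketch (illustrative data): classify flakes into "core-and-flake" vs
# "bifacial" technological groups with linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 120
# Hypothetical measurements for the two experimental groups
epa   = np.r_[rng.normal(75, 8, n),   rng.normal(60, 8, n)]      # external platform angle (deg)
depth = np.r_[rng.normal(8, 2, n),    rng.normal(4, 1.5, n)]     # platform depth (mm)
even  = np.r_[rng.normal(0.30, 0.08, n), rng.normal(0.18, 0.06, n)]  # thickness evenness
curv  = np.r_[rng.normal(0.10, 0.04, n), rng.normal(0.22, 0.05, n)]  # flake curvature
y = np.r_[np.zeros(n), np.ones(n)]                               # 0 = core-and-flake, 1 = bifacial

X = np.column_stack([epa, depth, even, curv, even * depth])      # include the interaction term
lda = LinearDiscriminantAnalysis().fit(X, y)

# Posterior probability of group membership for a new (hypothetical) flake
new_flake = np.array([[65.0, 5.0, 0.22, 0.18, 0.22 * 5.0]])
print(lda.predict_proba(new_flake))
```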
Verification and Validation Strategy for LWRS Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carl M. Stoots; Richard R. Schultz; Hans D. Gougar
2012-09-01
One intention of the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to create advanced computational tools for safety assessment that enable more accurate representation of a nuclear power plant safety margin. These tools are to be used to study the unique issues posed by lifetime extension and relicensing of the existing operating fleet of nuclear power plants well beyond their first license extension period. The extent to which new computational models / codes such as RELAP-7 can be used for reactor licensing / relicensing activities depends mainly upon the thoroughness with which they have been verified and validated (V&V). This document outlines the LWRS program strategy by which RELAP-7 code V&V planning is to be accomplished. From the perspective of developing and applying thermal-hydraulic and reactivity-specific models to reactor systems, the US Nuclear Regulatory Commission (NRC) Regulatory Guide 1.203 gives key guidance to numeric model developers and those tasked with the validation of numeric models. By creating Regulatory Guide 1.203 the NRC defined a framework for development, assessment, and approval of transient and accident analysis methods. As a result, this methodology is very relevant and is recommended as the path forward for RELAP-7 V&V. However, the unique issues posed by lifetime extension will require considerations in addition to those addressed in Regulatory Guide 1.203. Some of these include prioritization of which plants / designs should be studied first, coupling modern supporting experiments to the stringent needs of new high fidelity models / codes, and scaling of aging effects.
Prieto-Torres, David A.; Rojas-Soto, Octavio R.
2016-01-01
We used Ecological Niche Modeling (ENM) of individual species of two taxonomic groups (plants and birds) in order to reconstruct the climatic distribution of Tropical Dry Forests (TDFs) in Mexico and to analyze their boundaries with other terrestrial ecosystems. The reconstruction for TDFs’ distribution was analyzed considering the prediction and omission errors based upon the combination of species, obtained from the overlap of individual models (only plants, only birds, and all species combined). Two verifications were used: a primary vegetation map and 100 independent TDF localities. We performed a Principal Component (PCA) and Discriminant Analysis (DA) to evaluate the variation in the environmental variables and ecological overlap among ecosystems. The modeling strategies showed differences in the ecological patterns and prediction areas, where the “all species combined” model (with a threshold of ≥10 species) was the best strategy to use in the TDFs reconstruction. We observed a concordance of 78% with the primary vegetation map and a prediction of 98% of independent locality records. Although PCA and DA tests explained 75.78% and 97.9% of the variance observed, respectively, we observed an important overlap between the TDFs and other adjacent ecosystems, confirming the existence of transition zones among them. We successfully modeled the distribution of Mexican TDFs using a number of bioclimatic variables and co-distributed species. This autoecological niche approach suggests the necessity of rethinking the delimitations of ecosystems based on the recognition of transition zones among them in order to understand the real nature of communities and association patterns of species. PMID:26968031
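The "all species combined" overlap step can be illustrated with a short Python sketch (synthetic binary rasters standing in for the individual ENM outputs):

```python
# Sketch of the overlap step: stack binary presence predictions from
# individual niche models and keep cells where at least `threshold` species
# are predicted (>= 10 species in the paper). Arrays here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_species, rows, cols = 40, 50, 60
binary_models = rng.random((n_species, rows, cols)) > 0.8   # one binary raster per species

richness = binary_models.sum(axis=0)      # number of species predicted per grid cell
threshold = 10
tdf_prediction = richness >= threshold    # candidate Tropical Dry Forest cells

print("cells predicted as TDF:", int(tdf_prediction.sum()))
```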
Verification of the predictive capabilities of the 4C code cryogenic circuit model
NASA Astrophysics Data System (ADS)
Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi
2014-01-01
The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; an object-oriented a-causal model of the cryogenic circuit. In the last couple of years the code and its different modules have undergone a series of validation exercises against experimental data, including also data coming from the supercritical He loop HELIOS at CEA Grenoble. However, all this analysis work was done each time after the experiments had been performed. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To do that, a set of ad-hoc experimental scenarios have been designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C have then been performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.
Knowledge-based verification of clinical guidelines by detection of anomalies.
Duftschmid, G; Miksch, S
2001-04-01
As shown in numerous studies, a significant part of published clinical guidelines is tainted with different types of semantic errors that interfere with their practical application. The adaptation of generic guidelines, necessitated by circumstances such as resource limitations within the applying organization or unexpected events arising in the course of patient care, further promotes the introduction of defects. Still, most current approaches for the automation of clinical guidelines lack mechanisms that check the overall correctness of their output. In the domain of software engineering in general and in the domain of knowledge-based systems (KBS) in particular, a common strategy to examine a system for potential defects consists in its verification. The focus of this work is to present an approach that helps to ensure the semantic correctness of clinical guidelines in a three-step process. We use a particular guideline specification language called Asbru to demonstrate our verification mechanism. A scenario-based evaluation of our method is provided based on a guideline for the artificial ventilation of newborn infants. The described approach is kept sufficiently general in order to allow its application to several other guideline representation formats.
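As a toy illustration of one anomaly class such a verifier can detect, the Python sketch below flags guideline conditions whose numeric bounds are unsatisfiable; real Asbru verification covers far richer temporal and hierarchical checks, and the parameter names are invented:

```python
# Toy anomaly check: report guideline parameters constrained by contradictory
# numeric bounds (lower bound at or above the upper bound).
def unsatisfiable(conditions):
    """conditions: list of (parameter, op, value) with op in {'<', '>'}."""
    lower, upper = {}, {}
    for param, op, value in conditions:
        if op == '>':
            lower[param] = max(lower.get(param, float('-inf')), value)
        else:
            upper[param] = min(upper.get(param, float('inf')), value)
    return [p for p in set(lower) & set(upper) if lower[p] >= upper[p]]

rules = [("FiO2", ">", 0.6), ("FiO2", "<", 0.4), ("PIP", "<", 25)]
print("contradictory parameters:", unsatisfiable(rules))   # -> ['FiO2']
```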
Video-Based Fingerprint Verification
Qin, Wei; Yin, Yilong; Liu, Lili
2013-01-01
Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283
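The fusion of the two similarities into a match score is not spelled out in the abstract; a generic weighted-sum stand-in (hypothetical weights and threshold, not the authors' actual combination rule) would look like:

```python
# Generic score-fusion stand-in: combine "inside" and "outside" similarity.
def match_score(inside_sim: float, outside_sim: float, w: float = 0.5) -> float:
    return w * inside_sim + (1.0 - w) * outside_sim

score = match_score(inside_sim=0.82, outside_sim=0.67, w=0.6)
accept = score >= 0.7          # threshold chosen to trade FAR against FRR
print(score, accept)
```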
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Nathan; Menikoff, Ralph
2017-02-03
Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models, in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.
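The value of a uniform EOS interface can be illustrated with a hypothetical Python sketch; this is not the actual EOSlib API, only an example of writing a Hugoniot utility once against a common pressure(rho, e) call:

```python
# Hypothetical illustration of a uniform EOS interface (NOT the EOSlib API):
# any model exposing pressure(rho, e) can reuse the same Hugoniot utility.
from dataclasses import dataclass

@dataclass
class IdealGasEOS:
    gamma: float = 1.4
    def pressure(self, rho: float, e: float) -> float:
        return (self.gamma - 1.0) * rho * e          # p = (gamma - 1) * rho * e

def hugoniot_energy(eos, rho, rho0, p0, e0, tol=1e-10):
    """Solve e - e0 = 0.5*(p(rho, e) + p0)*(1/rho0 - 1/rho) by fixed-point iteration."""
    dv = 1.0 / rho0 - 1.0 / rho
    e = e0
    for _ in range(200):
        e_new = e0 + 0.5 * (eos.pressure(rho, e) + p0) * dv
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

eos = IdealGasEOS()
rho0, p0 = 1.0, 1.0e5
e0 = p0 / ((eos.gamma - 1.0) * rho0)
rho = 2.0                                            # compressed state on the Hugoniot
e = hugoniot_energy(eos, rho, rho0, p0, e0)
print("Hugoniot pressure at rho = 2:", eos.pressure(rho, e))   # ~2.75e5 for gamma = 1.4
```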
Randell, Edward W; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David
2018-06-01
Six Sigma involves a structured process improvement strategy that places processes on a pathway to continued improvement. The data presented here summarize a project that took three clinical laboratories from autoverification processes that auto-verified roughly 40% to 60% of tests to processes that auto-verify more than 90% of tests and samples. The project schedule, metrics and targets, a description of the previous system and detailed information on the changes made to achieve greater than 90% auto-verification are presented for this Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) process improvement project.
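A generic sketch of the kind of rule an autoverification engine evaluates is shown below in Python (limits, delta check, and flag handling are illustrative, not the laboratories' actual rules):

```python
# Generic autoverification rule sketch: release a result automatically only if
# it passes limit, delta, and instrument-flag checks; otherwise hold for review.
def auto_verify(result, previous=None, low=0.0, high=1e9,
                delta_limit=None, flags=()):
    if flags:                                   # any analyzer flag blocks release
        return False
    if not (low <= result <= high):             # outside the verification limits
        return False
    if previous is not None and delta_limit is not None:
        if abs(result - previous) > delta_limit:    # delta check against the prior result
            return False
    return True

# Example: a potassium result with a prior value and a 1.0 mmol/L delta limit
print(auto_verify(4.2, previous=4.0, low=2.5, high=6.5, delta_limit=1.0))   # True
print(auto_verify(6.9, previous=4.0, low=2.5, high=6.5, delta_limit=1.0))   # False
```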
NASA Astrophysics Data System (ADS)
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford
2018-04-01
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.
Integrating uniform design and response surface methodology to optimize thiacloprid suspension
Li, Bei-xing; Wang, Wei-chang; Zhang, Xian-peng; Zhang, Da-xia; Mu, Wei; Liu, Feng
2017-01-01
A model 25% suspension concentrate (SC) of thiacloprid was adopted to evaluate an integrative approach of uniform design and response surface methodology. Tersperse2700, PE1601, xanthan gum and veegum were the four experimental factors, and the aqueous separation ratio and viscosity were the two dependent variables. Linear and quadratic polynomial models of stepwise regression and partial least squares were adopted to test the fit of the experimental data. Verification tests revealed satisfactory agreement between the experimental and predicted data. The measured values for the aqueous separation ratio and viscosity were 3.45% and 278.8 mPa·s, respectively, and the relative errors of the predicted values were 9.57% and 2.65%, respectively (prepared under the proposed conditions). Comprehensive benefits could also be obtained by appropriately adjusting the amount of certain adjuvants based on practical requirements. Integrating uniform design and response surface methodology is an effective strategy for optimizing SC formulas. PMID:28383036
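The response-surface step can be sketched in Python as follows (synthetic data and factor levels, not the paper's measurements): quadratic polynomial models are fit for the two responses in the four formulation factors and evaluated at a candidate formula:

```python
# Sketch of the quadratic response-surface fit (synthetic data only).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Factor columns: Tersperse2700, PE1601, xanthan gum, veegum (made-up % w/w levels)
X = rng.uniform([1.0, 1.0, 0.05, 0.2], [5.0, 5.0, 0.30, 1.0], size=(30, 4))
sep_ratio = 8.0 - 0.5 * X[:, 0] - 8.0 * X[:, 2] - 2.0 * X[:, 3] + rng.normal(0, 0.3, 30)
viscosity = 150.0 + 400.0 * X[:, 2] + 80.0 * X[:, 3] + rng.normal(0, 10.0, 30)

def quad_model():
    # degree-2 polynomial (quadratic plus interaction terms) response surface
    return make_pipeline(PolynomialFeatures(degree=2), LinearRegression())

m_sep = quad_model().fit(X, sep_ratio)
m_visc = quad_model().fit(X, viscosity)

candidate = np.array([[3.0, 2.5, 0.15, 0.6]])     # a candidate formula to evaluate
print("predicted separation ratio (%):", m_sep.predict(candidate)[0])
print("predicted viscosity (mPa*s):", m_visc.predict(candidate)[0])
```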