Science.gov

Sample records for distributed location verification

  1. Collaborative Localization and Location Verification in WSNs

    PubMed Central

    Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang

    2015-01-01

    Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948
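
    The abstract does not give the force model's equations. As a rough illustration only, here is a minimal Python sketch of virtual-force incremental refinement, assuming a spring-like force proportional to the ranging error (the function name, step size, and iteration count are all hypothetical):

        import numpy as np

        def virtual_force_refine(pos, anchors, ranges, step=0.1, iters=100):
            """Incrementally refine an unknown node's position estimate.

            pos     : initial (x, y) guess for the unknown node
            anchors : (N, 2) array of anchor coordinates
            ranges  : (N,) measured distances to each anchor
            Each anchor exerts a spring-like virtual force proportional to
            the difference between the estimated and measured distance, so
            the node drifts toward a position consistent with all ranges.
            """
            pos = np.asarray(pos, dtype=float)
            for _ in range(iters):
                diff = anchors - pos                    # vectors toward anchors
                dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)
                err = dist - ranges                     # positive: estimate too far away
                force = (err[:, None] * diff / dist[:, None]).sum(axis=0)
                pos += step * force / len(anchors)      # move along the resultant force
            return pos

        # Example: three anchors, node truly at (3, 4), initial guess (5, 5).
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        ranges = np.linalg.norm(anchors - np.array([3.0, 4.0]), axis=1)
        print(virtual_force_refine([5.0, 5.0], anchors, ranges))  # approx. [3, 4]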

  2. Verification of LHS distributions.

    SciTech Connect

    Swiler, Laura Painton

    2006-04-01

    This document provides verification test results for normal, lognormal, and uniform distributions that are used in Sandia's Latin Hypercube Sampling (LHS) software. The purpose of this testing is to verify that the sample values being generated in LHS are distributed according to the desired distribution types. The testing of distribution correctness is done by examining summary statistics, graphical comparisons using quantile-quantile plots, and formal statistical tests such as the Chi-square test, the Kolmogorov-Smirnov test, and the Anderson-Darling test. The overall results from the testing indicate that the generation of normal, lognormal, and uniform distributions in LHS is acceptable.
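
    As a concrete illustration of this style of testing (not Sandia's actual test harness), a sample's distribution can be checked in Python with summary statistics plus Kolmogorov-Smirnov and Anderson-Darling tests; the parameter values here are made up:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        sample = rng.normal(loc=10.0, scale=2.0, size=1000)  # stand-in for LHS output

        # Summary statistics: mean/std should be near the target parameters.
        print(f"mean={sample.mean():.3f}, std={sample.std(ddof=1):.3f}")

        # Kolmogorov-Smirnov test against the hypothesized normal distribution;
        # a large p-value gives no evidence of departure from it.
        d_stat, p_value = stats.kstest(sample, "norm", args=(10.0, 2.0))
        print(f"KS: D={d_stat:.4f}, p={p_value:.3f}")

        # Anderson-Darling test (scipy fits location/scale internally).
        ad = stats.anderson(sample, dist="norm")
        print(f"AD: statistic={ad.statistic:.3f}, critical values={ad.critical_values}")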

  3. 37 CFR 384.7 - Verification of royalty distributions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distributions. 384.7 Section 384.7 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF... BUSINESS ESTABLISHMENT SERVICES § 384.7 Verification of royalty distributions. (a) General. This section prescribes procedures by which any Copyright Owner may verify the royalty distributions made by...

  4. The Error Distribution of BATSE GRB Location

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Pendleton, Geoffrey N.; Kippen, R. Marc; Brainerd, J. J.; Hurley, Kevin; Connaughton, Valerie; Meegan, Charles A.

    1998-01-01

    We develop empirical probability models for BATSE GRB location errors by a Bayesian analysis of the separations between BATSE GRB locations and locations obtained with the InterPlanetary Network (IPN). Models are compared and their parameters estimated using 394 GRBs with single IPN annuli and 20 GRBs with intersecting IPN annuli. Most of the analysis is for the 4B (rev) BATSE catalog; earlier catalogs are also analyzed. The simplest model that provides a good representation of the error distribution has 78% of the locations in a 'core' term with a systematic error of 1.85 degrees and the remainder in an extended tail with a systematic error of 5.36 degrees, implying a 68% confidence region of 2.3 degrees for bursts with negligible statistical errors. There is some evidence for a more complicated model in which the error distribution depends on the BATSE datatype that was used to obtain the location. Bright bursts are typically located using the CONT datatype, and according to the more complicated model, the 68% confidence region for CONT-located bursts with negligible statistical errors is 2.0 degrees.
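
    The quoted radii can be reproduced under one plausible reading of the model: each term is a small-angle (Rayleigh-like) distribution parameterized by its own 68% containment radius. A Python sketch (this parameterization is an assumption, not taken from the paper):

        from scipy.optimize import brentq

        f_core, r_core, r_tail = 0.78, 1.85, 5.36  # core fraction; 68% radii (deg)

        def contained(r, r68):
            # CDF of a Rayleigh-like component parameterized by its 68% radius,
            # so that contained(r68, r68) = 0.68 by construction.
            return 1.0 - 0.32 ** ((r / r68) ** 2)

        def total(r):
            return f_core * contained(r, r_core) + (1 - f_core) * contained(r, r_tail)

        # Radius of the 68% confidence region for the two-component mixture.
        print(round(brentq(lambda r: total(r) - 0.68, 0.1, 20.0), 2))  # ~2.3 deg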

  5. Distributed Avionics and Software Verification for the Constellation Program

    NASA Technical Reports Server (NTRS)

    Hood, Laura E.; Adams, James E.

    2008-01-01

    This viewgraph presentation reviews the planned verification of the avionics and software being developed for the Constellation program. The Constellation Distributed System Integration Laboratory (DSIL) will consist of multiple System Integration Labs (SILs), simulators, emulators, testbeds, and control centers interacting with each other over a broadband network to provide virtual test systems for multiple test scenarios.

  6. Modeling and Verification of Distributed Generation and Voltage Regulation Equipment for Unbalanced Distribution Power Systems; Annual Subcontract Report, June 2007

    SciTech Connect

    Davis, M. W.; Broadwater, R.; Hambrick, J.

    2007-07-01

    This report summarizes the development of models for distributed generation and distribution circuit voltage regulation equipment for unbalanced power systems and their verification through actual field measurements.

  7. Mobile agent location in distributed environments

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Argyropoulos, I. P.

    2012-12-01

    An agent is a small program acting on behalf of a user or an application which plays the role of a user. Artificial intelligence can be encapsulated in agents so that they are capable of both behaving autonomously and showing elementary decision ability regarding movement and certain specific actions. Therefore they are often called autonomous mobile agents. In a distributed system, they can move themselves from one processing node to another through the interconnecting network infrastructure. Their purpose is to collect useful information and to carry it back to their user. Agents are also used to start, monitor, and stop processes running on the individual interconnected processing nodes of computer cluster systems. An agent has a unique id to discriminate itself from other agents, and a current position. The position can be expressed as the address of the processing node which currently hosts the agent. Very often, it is necessary for a user, a processing node, or another agent to know the current position of an agent in a distributed system. Several procedures and algorithms have been proposed for locating the position of mobile agents. The most basic of all employs a fixed computing node, which acts as an agent position repository, receiving messages from all the moving agents and keeping records of their current positions. The fixed node responds to position queries and informs users, other nodes, and other agents about the position of an agent. Herein, a model is proposed that considers pairs and triples of agents instead of single ones. A location method that attempts to exploit this model is investigated in this paper.
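
    The basic repository scheme is straightforward to make concrete. A minimal sketch (hypothetical names, networking omitted) of the fixed node's bookkeeping:

        class AgentPositionRepository:
            """Fixed node keeping the last reported position of every agent."""

            def __init__(self):
                self._positions = {}  # agent id -> address of hosting node

            def report_move(self, agent_id, node_address):
                # Called (via a message) by an agent after it migrates.
                self._positions[agent_id] = node_address

            def locate(self, agent_id):
                # Answers position queries from users, nodes, or other agents.
                return self._positions.get(agent_id)  # None if never reported

        repo = AgentPositionRepository()
        repo.report_move("agent-17", "node-3.cluster.local")
        print(repo.locate("agent-17"))  # node-3.cluster.local

    The pair/triple model the paper proposes would replace this single global table; the sketch above shows only the baseline it improves on.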

  8. GENERIC VERIFICATION PROTOCOL: DISTRIBUTED GENERATION AND COMBINED HEAT AND POWER FIELD TESTING PROTOCOL

    EPA Science Inventory

    This report is a generic verification protocol by which EPA’s Environmental Technology Verification program tests newly developed equipment for distributed generation of electric power, usually micro-turbine generators and internal combustion engine generators. The protocol will ...

  9. DOE-EPRI distributed wind Turbine Verification Program (TVP III)

    SciTech Connect

    McGowin, C.; DeMeo, E.; Calvert, S.

    1997-12-31

    In 1992, the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE) initiated the Utility Wind Turbine Verification Program (TVP). The goal of the program is to evaluate prototype advanced wind turbines at several sites developed by U.S. electric utility companies. Two six MW wind projects have been installed under the TVP program by Central and South West Services in Fort Davis, Texas and Green Mountain Power Corporation in Searsburg, Vermont. In early 1997, DOE and EPRI selected five more utility projects to evaluate distributed wind generation using smaller "clusters" of wind turbines connected directly to the electricity distribution system. This paper presents an overview of the objectives, scope, and status of the EPRI-DOE TVP program and the existing and planned TVP projects.

  10. Reconstructing Spatial Distributions from Anonymized Locations

    SciTech Connect

    Horey, James L; Forrest, Stephanie; Groat, Michael

    2012-01-01

    Devices such as mobile phones, tablets, and sensors are often equipped with GPS receivers that accurately report a person's location. Combined with wireless communication, these devices enable a wide range of new social tools and applications. These same qualities, however, leave location-aware applications vulnerable to privacy violations. This paper introduces the Negative Quad Tree, a privacy protection method for location-aware applications. The method is broadly applicable to applications that use spatial density information, such as social applications that measure the popularity of social venues. The method employs a simple anonymization algorithm running on mobile devices, and a more complex reconstruction algorithm on a central server. This strategy is well suited to low-powered mobile devices. The paper analyzes the accuracy of the reconstruction method in a variety of simulated and real-world settings and demonstrates that the method is accurate enough to be used in many real-world scenarios.
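
    The abstract does not spell out the Negative Quad Tree algorithm itself, but the spatial density information such methods protect is easy to picture. A generic quadtree density count (background illustration only, not the paper's anonymization scheme):

        def quad_density(points, x0, y0, x1, y1, depth):
            """Recursively count points per quadtree cell down to `depth`.

            Returns a dict mapping cell bounds (x0, y0, x1, y1) to the number
            of points inside, i.e. the kind of spatial density data a venue
            popularity application would consume.
            """
            inside = [(x, y) for x, y in points if x0 <= x < x1 and y0 <= y < y1]
            cells = {(x0, y0, x1, y1): len(inside)}
            if depth > 0 and inside:
                mx, my = (x0 + x1) / 2, (y0 + y1) / 2
                for q in [(x0, y0, mx, my), (mx, y0, x1, my),
                          (x0, my, mx, y1), (mx, my, x1, y1)]:
                    cells.update(quad_density(inside, *q, depth - 1))
            return cells

        pts = [(0.1, 0.2), (0.15, 0.22), (0.8, 0.9)]
        print(quad_density(pts, 0.0, 0.0, 1.0, 1.0, depth=1))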

  11. Design and Verification of a Distributed Communication Protocol

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    The safety of remotely operated vehicles depends on the correctness of the distributed protocol that facilitates the communication between the vehicle and the operator. A failure in this communication can result in catastrophic loss of the vehicle. To complicate matters, the communication system may be required to satisfy several, possibly conflicting, requirements. The design of protocols is typically an informal process based on successive iterations of a prototype implementation. Yet distributed protocols are notoriously difficult to get correct using such informal techniques. We present a formal specification of the design of a distributed protocol intended for use in a remotely operated vehicle, which is built from the composition of several simpler protocols. We demonstrate proof strategies that allow us to prove properties of each component protocol individually while ensuring that the property is preserved in the composition forming the entire system. Given that designs are likely to evolve as additional requirements emerge, we show how we have automated most of the repetitive proof steps to enable verification of rapidly changing designs.

  12. Distributed Engine Control Empirical/Analytical Verification Tools

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan

    2013-01-01

    NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components that can both survive the harsh engine operating environment and provide decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to easily assemble a distributed control system in software and immediately assess its overall impact on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration of all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating distributed engine control system (DCS) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and the hardware simulator provide the capability to simulate virtual subcomponents, as well as to swap in actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the C-MAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique

  13. The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases

    ERIC Educational Resources Information Center

    Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.

    2010-01-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. "Cognition, 93", 75-97]. This conflicts with earlier results showing…

  14. Radionuclide Inventory Distribution Project Data Evaluation and Verification White Paper

    SciTech Connect

    NSTec Environmental Restoration

    2010-05-17

    Testing of nuclear explosives caused widespread contamination of surface soils on the Nevada Test Site (NTS). Atmospheric tests produced the majority of this contamination. The Radionuclide Inventory and Distribution Program (RIDP) was developed to determine distribution and total inventory of radionuclides in surface soils at the NTS to evaluate areas that may present long-term health hazards. The RIDP achieved this objective with aerial radiological surveys, soil sample results, and in situ gamma spectroscopy. This white paper presents the justification to support the use of RIDP data as a guide for future evaluation and to support closure of Soils Sub-Project sites under the purview of the Federal Facility Agreement and Consent Order. Use of the RIDP data as part of the Data Quality Objective process is expected to provide considerable cost savings and accelerate site closures. The following steps were completed: - Summarize the RIDP data set and evaluate the quality of the data. - Determine the current uses of the RIDP data and cautions associated with its use. - Provide recommendations for enhancing data use through field verification or other methods. The data quality is sufficient to utilize RIDP data during the planning process for site investigation and closure. Project planning activities may include estimating 25-millirem per industrial access year dose rate boundaries, optimizing characterization efforts, projecting final end states, and planning remedial actions. In addition, RIDP data may be used to identify specific radionuclide distributions, and augment other non-radionuclide dose rate data. Finally, the RIDP data can be used to estimate internal and external dose rates.

  15. Automated fault location and diagnosis on electric power distribution feeders

    SciTech Connect

    Zhu, J.; Lubkeman, D.L.; Girgis, A.A.

    1997-04-01

    This paper presents new techniques for locating and diagnosing faults on electric power distribution feeders. The proposed fault location and diagnosis scheme is capable of accurately identifying the location of a fault upon its occurrence, based on the integration of information available from disturbance recording devices with knowledge contained in a distribution feeder database. The developed fault location and diagnosis system can also be applied to the investigation of temporary faults that may not result in a blown fuse. The proposed fault location algorithm is based on the steady-state analysis of the faulted distribution network. To deal with the uncertainties inherent in the system modeling and the phasor estimation, the fault location algorithm has been adapted to estimate fault regions based on probabilistic modeling and analysis. Since the distribution feeder is a radial network, multiple possibilities of fault locations could be computed with measurements available only at the substation. To identify the actual fault location, a fault diagnosis algorithm has been developed to prune down and rank the possible fault locations by integrating the available pieces of evidence. Testing of the developed fault location and diagnosis system using field data has demonstrated its potential for practical use.
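
    The paper's own algorithm adds probabilistic modeling, but its starting point, steady-state analysis of the faulted network from substation measurements, can be illustrated with the textbook one-end apparent-impedance method (a sketch, not the authors' algorithm; all values are invented):

        def fault_distance_km(v_sub, i_sub, z_line_per_km):
            """Estimate distance to fault from substation phasors.

            v_sub, i_sub  : complex voltage and current phasors during the fault
            z_line_per_km : complex series impedance of the feeder per km
            Uses the reactive part of the apparent impedance, which is less
            sensitive to the (mostly resistive) fault resistance.
            """
            z_apparent = v_sub / i_sub
            return z_apparent.imag / z_line_per_km.imag

        # Feeder with 0.3 + 0.4j ohm/km, fault 5 km out through 2 ohm resistance.
        z_line = 0.3 + 0.4j
        v = 6350 + 0j
        i = v / (5 * z_line + 2.0)
        print(f"{fault_distance_km(v, i, z_line):.2f} km")  # 5.00 km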

  16. Fault Location Methods for Ungrounded Distribution Systems Using Local Measurements

    NASA Astrophysics Data System (ADS)

    Xiu, Wanjing; Liao, Yuan

    2013-08-01

    This article presents novel fault location algorithms for ungrounded distribution systems. The proposed methods are capable of locating faults by using voltage and current measurements obtained at the local substation. Two types of fault location algorithms, using line-to-neutral and line-to-line measurements, are presented. The network structure and parameters are assumed to be known, and the network structure needs to be updated based on information obtained from the utility telemetry system. With the help of the bus impedance matrix, local voltage changes due to the fault can be expressed as a function of the fault currents. Since the bus impedance matrix contains information about fault location, superimposed voltages at the local substation can be expressed as a function of fault location, through which the fault location can be solved. Simulation studies have been carried out based on a sample distribution power system. The evaluation study shows that both types of methods yield very accurate fault location estimates.
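
    The central relation can be sketched generically (a simplified reconstruction, not the authors' exact formulation): for a fault at bus k drawing fault current I_f, the superimposed (during-fault minus pre-fault) voltage at a monitored bus m is dV_m = -Z_mk * I_f, where Z is the bus impedance matrix. Scanning candidate buses for the best least-squares match then localizes the fault:

        import numpy as np

        def locate_fault(z_bus, dv_local, monitored, candidates):
            """Rank candidate fault buses by how well they explain measurements.

            z_bus      : (n, n) complex bus impedance matrix
            dv_local   : superimposed voltages measured at the monitored buses
            monitored  : indices of buses with measurements (local substation)
            candidates : indices of buses to test as the fault location
            For each candidate k, estimate I_f by least squares from
            dv = -Z[monitored, k] * I_f and score the residual norm.
            """
            scores = {}
            for k in candidates:
                col = -z_bus[np.asarray(monitored), k]
                i_f = np.vdot(col, dv_local) / np.vdot(col, col)  # LS estimate
                scores[k] = np.linalg.norm(dv_local - col * i_f)
            return min(scores, key=scores.get), scores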

  17. An expert system for locating distribution system faults

    SciTech Connect

    Hsu, Y.Y.; Lu, F.C.; Chien, Y.; Liu, J.P.; Lin, J.T.; Yu, H.S.; Kuo, R.T.

    1991-01-01

    A rule-based expert system is designed to locate faults in a distribution system. Distribution system component data and network topology are stored in the database. A set of heuristic rules is compiled from the dispatchers' experience and embedded in the rule base. To locate distribution system faults, an inference engine is developed to perform deductive reasoning on the rules in the knowledge base. The inference engine comprises three major parts: the dynamic searching method, the backtracking approach, and the set intersection operation. The expert system is implemented on a personal computer using the artificial intelligence language PROLOG. To demonstrate the effectiveness of the proposed approach, the expert system has been applied to locate faults in a real underground distribution system.

  18. Modeling and verification of distributed systems with labeled predicate transition nets

    NASA Astrophysics Data System (ADS)

    Lloret, Jean-Christophe

    Two main steps in the design of distributed systems are modeling and verification. Petri nets and CCS are two basic formal models. CCS is a modular language supporting compositional verification; Petri net theory, conversely, requires an accurate description of parallelism and focuses on global verification of properties. A structuring technique based on CCS concepts is introduced for predicate/transition nets. It consists of a high-level Petri net that permits the expression of communication with value passing. In particular, a Petri net composition operator, which can be interpreted as a multi-rendezvous between communicating systems, is defined. The multi-rendezvous allows abstract modeling with small state graphs. The developed formalism is highly convenient for refining abstract models toward less abstract levels. Based on this work, a software tool supporting distributed system design and verification has been developed. The advantage of this approach is shown in several research and industrial applications.

  19. Ground vibration tests of a high fidelity truss for verification of on orbit damage location techniques

    NASA Technical Reports Server (NTRS)

    Kashangaki, Thomas A. L.

    1992-01-01

    This paper describes a series of modal tests that were performed on a cantilevered truss structure. The goal of the tests was to assemble a large database of high quality modal test data for use in verification of proposed methods for on orbit model verification and damage detection in flexible truss structures. A description of the hardware is provided along with details of the experimental setup and procedures for 16 damage cases. Results from selected cases are presented and discussed. Differences between ground vibration testing and on orbit modal testing are also described.

  20. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    NASA Astrophysics Data System (ADS)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in an underground distribution system is presented. The Discrete Wavelet Transform (DWT), based on travelling waves, is employed to detect the high-frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is used to calculate the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations, and faulty phases. The results show that the proposed technique performs satisfactorily and will be very useful in the development of power system protection schemes.
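
    A hedged sketch of the DWT step using PyWavelets (the wavelet family, sampling rate, and wave speed below are illustrative, not taken from the paper): the fault-generated travelling wave appears as a burst in the finest detail coefficients, whose first peak gives the arrival time used in the distance calculation.

        import numpy as np
        import pywt

        FS = 1.0e6      # sampling rate in Hz (illustrative)
        V_WAVE = 1.5e8  # propagation speed in the cable, m/s (illustrative)

        def first_peak_time(signal, wavelet="db4"):
            # Level-1 detail coefficients isolate the high-frequency transient.
            _, d1 = pywt.dwt(signal, wavelet)
            k = int(np.argmax(np.abs(d1)))
            return 2 * k / FS  # detail coefficients are decimated by 2

        def fault_distance(t_send, t_recv, line_length):
            # Classic two-end travelling-wave estimate from arrival times
            # at both ends of a line of length line_length (metres).
            return 0.5 * (line_length + V_WAVE * (t_send - t_recv))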

  1. The Error Distribution of BATSE Gamma-Ray Burst Locations

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Pendleton, Geoffrey N.; Kippen, R. Marc; Brainerd, J. J.; Hurley, Kevin; Connaughton, Valerie; Meegan, Charles A.

    1999-01-01

    Empirical probability models for BATSE gamma-ray burst (GRB) location errors are developed via a Bayesian analysis of the separations between BATSE GRB locations and locations obtained with the Interplanetary Network (IPN). Models are compared and their parameters estimated using 392 GRBs with single IPN annuli and 19 GRBs with intersecting IPN annuli. Most of the analysis is for the 4Br BATSE catalog; earlier catalogs are also analyzed. The simplest model that provides a good representation of the error distribution has 78% of the probability in a "core" term with a systematic error of 1.85 deg and the remainder in an extended tail with a systematic error of 5.1 deg, which implies a 68% confidence radius for bursts with negligible statistical uncertainties of 2.2 deg. There is evidence for a more complicated model in which the error distribution depends on the BATSE data type that was used to obtain the location. Bright bursts are typically located using the CONT data type, and according to the more complicated model, the 68% confidence radius for CONT-located bursts with negligible statistical uncertainties is 2.0 deg.

  2. 37 CFR 380.7 - Verification of royalty distributions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distributions. 380.7 Section 380.7 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF... royalty distributions. (a) General. This section prescribes procedures by which any Copyright Owner or Performer may verify the royalty distributions made by the Collective; Provided, however, that...

  3. Logistics distribution centers location problem and algorithm under fuzzy environment

    NASA Astrophysics Data System (ADS)

    Yang, Lixing; Ji, Xiaoyu; Gao, Ziyou; Li, Keping

    2007-11-01

    The distribution centers location problem is concerned with how to select distribution centers from a potential set so that the total relevant cost is minimized. This paper mainly investigates this problem under a fuzzy environment. Accordingly, a chance-constrained programming model for the problem is designed and some properties of the model are investigated. A tabu search algorithm, a genetic algorithm, and a fuzzy simulation algorithm are integrated to seek an approximately optimal solution of the model. A numerical example is also given to show the application of the algorithm.

  4. Abstractions for Fault-Tolerant Distributed System Verification

    NASA Technical Reports Server (NTRS)

    Pike, Lee S.; Maddalon, Jeffrey M.; Miner, Paul S.; Geser, Alfons

    2004-01-01

    Four kinds of abstraction for the design and analysis of fault tolerant distributed systems are discussed. These abstractions concern system messages, faults, fault masking voting, and communication. The abstractions are formalized in higher order logic, and are intended to facilitate specifying and verifying such systems in higher order theorem provers.

  5. Solute location in a nanoconfined liquid depends on charge distribution

    SciTech Connect

    Harvey, Jacob A.; Thompson, Ward H.

    2015-07-28

    Nanostructured materials that can confine liquids have attracted increasing attention for their diverse properties and potential applications. Yet, significant gaps remain in our fundamental understanding of such nanoconfined liquids. Using replica exchange molecular dynamics simulations of a nanoscale, hydroxyl-terminated silica pore system, we determine how the locations explored by a coumarin 153 (C153) solute in ethanol depend on its charge distribution, which can be changed through a charge transfer electronic excitation. The solute position change is driven by the internal energy, which favors C153 at the pore surface compared to the pore interior, but less so for the more polar, excited-state molecule. This is attributed to more favorable non-specific solvation of the large dipole moment excited-state C153 by ethanol at the expense of hydrogen-bonding with the pore. It is shown that a change in molecule location resulting from shifts in the charge distribution is a general result, though how the solute position changes will depend upon the specific system. This has important implications for interpreting measurements and designing applications of mesoporous materials.

  6. The verification of lightning location accuracy in Finland deduced from lightning strikes to trees

    NASA Astrophysics Data System (ADS)

    Mäkelä, Antti; Mäkelä, Jakke; Haapalainen, Jussi; Porjo, Niko

    2016-05-01

    We present a new method to determine the ground truth and accuracy of lightning location systems (LLS), using natural lightning strikes to trees. Observations of strikes to trees are being collected with a Web-based survey tool at the Finnish Meteorological Institute. Since the Finnish thunderstorms tend to have on average a low flash rate, it is often possible to identify from the LLS data unambiguously the stroke that caused damage to a given tree. The coordinates of the tree are then the ground truth for that stroke. The technique has clear advantages over other methods used to determine the ground truth. Instrumented towers and rocket launches measure upward-propagating lightning. Video and audio records, even with triangulation, are rarely capable of high accuracy. We present data for 36 quality-controlled tree strikes in the years 2007-2008. We show that the average inaccuracy of the lightning location network for that period was 600 m. In addition, we show that the 50% confidence ellipse calculated by the lightning location network and used operationally for describing the location accuracy is physically meaningful: half of all the strikes were located within the uncertainty ellipse of the nearest recorded stroke. Using tree strike data thus allows not only the accuracy of the LLS to be estimated but also the reliability of the uncertainty ellipse. To our knowledge, this method has not been attempted before for natural lightning.
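
    Testing whether each tree's coordinates fall inside the located stroke's 50% confidence ellipse is a simple geometric check; a sketch in a local-plane approximation (function and parameter names are hypothetical):

        import numpy as np

        def inside_ellipse(dx, dy, semi_major, semi_minor, bearing_deg):
            """True if the offset (dx east, dy north, in metres) from the
            located stroke lies inside the confidence ellipse with the given
            semi-axes (metres) and major-axis bearing (deg clockwise from N)."""
            theta = np.radians(bearing_deg)
            u = dx * np.sin(theta) + dy * np.cos(theta)  # along major axis
            w = dx * np.cos(theta) - dy * np.sin(theta)  # along minor axis
            return (u / semi_major) ** 2 + (w / semi_minor) ** 2 <= 1.0

        # Over all quality-controlled tree strikes, the fraction returning True
        # should be near 0.5 if the 50% ellipses are well calibrated.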

  7. Design and verification of distributed logic controllers with application of Petri nets

    SciTech Connect

    Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika

    2015-12-31

    The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that, it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified, and if the validation is successful, the system can finally be implemented.

  8. Towards the Formal Verification of a Distributed Real-Time Automotive System

    NASA Technical Reports Server (NTRS)

    Endres, Erik; Mueller, Christian; Shadrin, Andrey; Tverdyshev, Sergey

    2010-01-01

    We present the status of a project which aims at building, and formally and pervasively verifying, a distributed automotive system. The target system is a gate-level model which consists of several interconnected electronic control units with independent clocks. This model is verified against the specification as seen by a system programmer. The automotive system is implemented on several FPGA boards. The pervasive verification is carried out using a combination of interactive theorem proving (Isabelle/HOL) and model checking (LTL).

  9. Verification of the use of completion-location analysis for initial assessment of reservoir heterogeneity

    SciTech Connect

    McDowell, R.R.; Avary, K.L.; Hohn, M.E.; Matchen, D.L. )

    1996-01-01

    In 1991, a technique (completion-location analysis) was developed for a U.S. DOE-funded study to give a preliminary assessment of field-scale reservoir heterogeneity in two West Virginia oil fields (Granny Creek and Rock Creek). The study's conclusions regarding heterogeneity agreed with initial predictions. However, as these fields were investigated specifically because they were thought to be heterogeneous, this test of the analysis was biased. In 1995, as part of a proposal to study siliciclastic strandplain reservoirs, the Jacksonburg-Stringtown field in West Virginia was selected because it met the depositional criterion and was still being actively produced. Completion-location analysis was undertaken on 214 producing oil wells from the field. Analysis indicated that drilling in the field is clustered into eight time periods (1890-1903, 1904-1911, 1912-1916, 1917-1934, 1935-1953, 1954-1975, 1975-1985, and 1986-1995). Mapping of the locations of wells for each time period indicated that from 1890-1903 approximately 50% of the current geographic extent of the field was defined. Drilling in the periods 1935-1953, 1954-1975, 1975-1985, and 1986-1995 added significantly to the extent of the field; these episodes, especially 1986-1995, represent the discovery of new production. On this basis, a preliminary prediction was made that the Jacksonburg-Stringtown field should exhibit a relatively high degree of reservoir heterogeneity. Subsequent discussions with the producer revealed that the reservoir varies considerably in pay thickness and quality across the field, has localized areas with high water injection rates and early water breakthrough, and has areas of anomalously high production. This suggests significant reservoir heterogeneity and appears to verify the utility of completion-location analysis.

  10. Verification of the use of completion-location analysis for initial assessment of reservoir heterogeneity

    SciTech Connect

    McDowell, R.R.; Avary, K.L.; Hohn, M.E.; Matchen, D.L.

    1996-12-31

    In 1991, a technique (completion-location analysis) was developed for a U.S. DOE-funded study to give a preliminary assessment of field-scale reservoir heterogeneity in two West Virginia oil fields (Granny Creek and Rock Creek). The study's conclusions regarding heterogeneity agreed with initial predictions. However, as these fields were investigated specifically because they were thought to be heterogeneous, this test of the analysis was biased. In 1995, as part of a proposal to study siliciclastic strandplain reservoirs, the Jacksonburg-Stringtown field in West Virginia was selected because it met the depositional criterion and was still being actively produced. Completion-location analysis was undertaken on 214 producing oil wells from the field. Analysis indicated that drilling in the field is clustered into eight time periods (1890-1903, 1904-1911, 1912-1916, 1917-1934, 1935-1953, 1954-1975, 1975-1985, and 1986-1995). Mapping of the locations of wells for each time period indicated that from 1890-1903 approximately 50% of the current geographic extent of the field was defined. Drilling in the periods 1935-1953, 1954-1975, 1975-1985, and 1986-1995 added significantly to the extent of the field; these episodes, especially 1986-1995, represent the discovery of new production. On this basis, a preliminary prediction was made that the Jacksonburg-Stringtown field should exhibit a relatively high degree of reservoir heterogeneity. Subsequent discussions with the producer revealed that the reservoir varies considerably in pay thickness and quality across the field, has localized areas with high water injection rates and early water breakthrough, and has areas of anomalously high production. This suggests significant reservoir heterogeneity and appears to verify the utility of completion-location analysis.

  11. Evaluation of gafchromic EBT film for intensity modulated radiation therapy dose distribution verification

    PubMed Central

    Sankar, A.; Kurup, P. G. Goplakrishna; Murali, V.; Ayyangar, Komanduri M.; Nehru, R. Mothilal; Velmurugan, J.

    2006-01-01

    This work was undertaken with the intention of investigating the possibility of clinical use of commercially available self-developing radiochromic film – Gafchromic EBT film – for IMRT dose verification. The dose response curves were generated for the films using VXR-16 film scanner. The results obtained with EBT films were compared with the results of Kodak EDR2 films. It was found that the EBT film has a linear response between the dose ranges of 0 and 600 cGy. The dose-related characteristics of the EBT film, like post-irradiation color growth with time, film uniformity and effect of scanning orientation, were studied. There is up to 8.6% increase in the color density between 2 and 40 h after irradiation. There was a considerable variation, up to 8.5%, in the film uniformity over its sensitive region. The quantitative difference between calculated and measured dose distributions was analyzed using Gamma index with the tolerance of 3% dose difference and 3 mm distance agreement. EDR2 films showed good and consistent results with the calculated dose distribution, whereas the results obtained using EBT were inconsistent. The variation in the film uniformity limits the use of EBT film for conventional large field IMRT verification. For IMRT of smaller field size (4.5 × 4.5 cm), the results obtained with EBT were comparable with results of EDR2 films. PMID:21206669

  12. Three-dimensional gamma analysis of dose distributions in individual structures for IMRT dose verification.

    PubMed

    Tomiyama, Yuuki; Araki, Fujio; Oono, Takeshi; Hioki, Kazunari

    2014-07-01

    Our purpose in this study was to implement three-dimensional (3D) gamma analysis for structures of interest, such as the planning target volume (PTV) or clinical target volume (CTV) and organs at risk (OARs), for intensity-modulated radiation therapy (IMRT) dose verification. IMRT dose distributions for prostate and head and neck (HN) cancer patients were calculated with an analytical anisotropic algorithm in an Eclipse (Varian Medical Systems) treatment planning system (TPS) and by Monte Carlo (MC) simulation. The MC dose distributions were calculated with the EGSnrc/BEAMnrc and DOSXYZnrc user codes under conditions identical to those for the TPS. The prescribed doses were 76 Gy/38 fractions with five-field IMRT for the prostate and 33 Gy/17 fractions with seven-field IMRT for the HN. TPS dose distributions were verified by the gamma passing rates for the whole calculated volume, the PTV or CTV, and the OARs by use of 3D gamma analysis with reference to the MC dose distributions. The acceptance criteria for the 3D gamma analysis were 3 %/3 mm and 2 %/2 mm for the dose difference and the distance to agreement. The gamma passing rates in the PTV and OARs for the prostate IMRT plan were close to 100 %. For the HN IMRT plan, the passing rates of 2 %/2 mm in the CTV and OARs were substantially lower because inhomogeneous tissues such as bone and air in the HN are included in the calculation area. 3D gamma analysis for individual structures is useful for IMRT dose verification. PMID:24796955
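
    For reference, the standard (global) gamma index can be computed by brute force on small 3D grids; a minimal sketch, not the authors' implementation:

        import numpy as np

        def gamma_pass_rate(dose_eval, dose_ref, spacing, dd=0.03, dta=3.0):
            """Fraction of reference voxels with gamma <= 1.

            dose_eval, dose_ref : 3D dose arrays on the same grid (Gy)
            spacing             : isotropic voxel size (mm)
            dd                  : dose criterion, fraction of max reference dose
            dta                 : distance-to-agreement criterion (mm)
            """
            dd_abs = dd * dose_ref.max()
            reach = int(np.ceil(dta / spacing))
            shape = dose_ref.shape
            gamma2 = np.full(shape, np.inf)
            for i, j, k in np.ndindex(shape):
                # Search evaluated voxels within the DTA radius for the
                # minimum generalized (dose, distance) distance.
                for di in range(-reach, reach + 1):
                    for dj in range(-reach, reach + 1):
                        for dk in range(-reach, reach + 1):
                            ii, jj, kk = i + di, j + dj, k + dk
                            if not (0 <= ii < shape[0] and 0 <= jj < shape[1]
                                    and 0 <= kk < shape[2]):
                                continue
                            r2 = (di**2 + dj**2 + dk**2) * spacing**2
                            dD = dose_eval[ii, jj, kk] - dose_ref[i, j, k]
                            g2 = r2 / dta**2 + (dD / dd_abs) ** 2
                            gamma2[i, j, k] = min(gamma2[i, j, k], g2)
            return float(np.mean(np.sqrt(gamma2) <= 1.0))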

  13. Distribution and Location of Genetic Effects for Dairy Traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetic effects for many dairy traits and for total economic merit are fairly evenly distributed across all chromosomes. A high-density scan using 38,416 SNP markers for 5,285 bulls confirmed two previously-known major genes on Bos taurus autosomes (BTA) 6 and 14 but revealed few other large effects...

  14. Distribution and Location of Genetic Effects for Dairy Traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetic effects for many dairy traits and for total economic merit are fairly evenly distributed across all chromosomes. A high-density scan using 39,314 SNP markers for 5,285 bulls confirmed two previously-known major genes on BTA 6 and 14 but revealed few other large effects. Markers on BTA 18 had...

  15. Direct and full-scale experimental verifications towards ground-satellite quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Yu; Yang, Bin; Liao, Sheng-Kai; Zhang, Liang; Shen, Qi; Hu, Xiao-Fang; Wu, Jin-Cai; Yang, Shi-Ji; Jiang, Hao; Tang, Yan-Lin; Zhong, Bo; Liang, Hao; Liu, Wei-Yue; Hu, Yi-Hua; Huang, Yong-Mei; Qi, Bo; Ren, Ji-Gang; Pan, Ge-Sheng; Yin, Juan; Jia, Jian-Jun; Chen, Yu-Ao; Chen, Kai; Peng, Cheng-Zhi; Pan, Jian-Wei

    2013-05-01

    Quantum key distribution (QKD) provides the only intrinsically unconditionally secure method of communication, based on the principles of quantum mechanics. Compared with fibre-based demonstrations, free-space links could provide the most appealing solution for communication over much larger distances. Despite significant efforts, all realizations to date rely on stationary sites. Experimental verifications are therefore extremely crucial for applications involving a typical low-Earth-orbit satellite. To achieve direct and full-scale verifications of our set-up, we have carried out three independent experiments with a decoy-state QKD system and overcome all of the demanding conditions. The system is operated on a moving platform (using a turntable), on a floating platform (using a hot-air balloon), and with a high-loss channel to demonstrate performance under conditions of rapid motion, attitude change, vibration, random movement of satellites, and a high-loss regime. The experiments address wide ranges of all leading parameters relevant to low-Earth-orbit satellites. Our results pave the way towards ground-satellite QKD and a global quantum communication network.

  16. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    SciTech Connect

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; Bernstein, G. M.; Bridle, S.; Clampitt, J.; Honscheid, K.; Jain, B.; Jouvel, S.; Krause, E.; Lin, H.; MacCrann, N.; Patton, K.; Plazas, A.; Rowe, B.; Vikram, V.; Wilcox, H.; Young, J.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S. S.; Banerji, M.; Bernstein, J. P.; Bernstein, R. A.; Bertin, E.; Buckley-Geer, E.; Burke, D. L.; Castander, F. J.; da Costa, L. N.; Cunha, C. E.; Depoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A. E.; Neto, A. F.; Fernandez, E.; Finley, D. A.; Flaugher, B.; Frieman, J. A.; Gaztanaga, E.; Gerdes, D.; Gruendl, R. A.; Gutierrez, G. R.; Jarvis, M.; Karliner, I.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Maia, M. A. G.; Makler, M.; Marriner, J.; Marshall, J. L.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Mohr, J.; Neilsen, E.; Nichol, R. C.; Nord, B. D.; Reil, K.; Roe, N. A.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B. X.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, C.; Soares-Santos, M.; Swanson, M. E. C.; Sypniewski, A. J.; Tarle, G.; Thaler, J.; Thomas, D.; Tucker, D. L.; Walker, A.; Wechsler, R.; Weller, J.; Wester, W.

    2015-03-31

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. Additionally, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  17. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    SciTech Connect

    Melchior, P.; et al.

    2015-05-21

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modeling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modeling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  18. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    NASA Astrophysics Data System (ADS)

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; Bernstein, G. M.; Bridle, S.; Clampitt, J.; Honscheid, K.; Jain, B.; Jouvel, S.; Krause, E.; Lin, H.; MacCrann, N.; Patton, K.; Plazas, A.; Rowe, B.; Vikram, V.; Wilcox, H.; Young, J.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S. S.; Banerji, M.; Bernstein, J. P.; Bernstein, R. A.; Bertin, E.; Buckley-Geer, E.; Burke, D. L.; Castander, F. J.; da Costa, L. N.; Cunha, C. E.; Depoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A. E.; Neto, A. Fausti; Fernandez, E.; Finley, D. A.; Flaugher, B.; Frieman, J. A.; Gaztanaga, E.; Gerdes, D.; Gruendl, R. A.; Gutierrez, G. R.; Jarvis, M.; Karliner, I.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Maia, M. A. G.; Makler, M.; Marriner, J.; Marshall, J. L.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Mohr, J.; Neilsen, E.; Nichol, R. C.; Nord, B. D.; Reil, K.; Roe, N. A.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B. X.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, C.; Soares-Santos, M.; Swanson, M. E. C.; Sypniewski, A. J.; Tarle, G.; Thaler, J.; Thomas, D.; Tucker, D. L.; Walker, A.; Wechsler, R.; Weller, J.; Wester, W.

    2015-05-01

    We measure the weak lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey (DES). This pathfinder study is meant to (1) validate the Dark Energy Camera (DECam) imager for the task of measuring weak lensing shapes, and (2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, point spread function (PSF) modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting Navarro-Frenk-White profiles to the clusters in this study, we determine weak lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1°(approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  19. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    DOE PAGES Beta

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; et al

    2015-03-31

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. Additionally, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  20. Verification of spatial and temporal pressure distributions in segmented solid rocket motors

    NASA Technical Reports Server (NTRS)

    Salita, Mark

    1989-01-01

    A wide variety of analytical tools are in use today to predict the history and spatial distributions of pressure in the combustion chambers of solid rocket motors (SRMs). Experimental and analytical methods are presented here that allow the verification of many of these predictions. These methods are applied to the redesigned space shuttle booster (RSRM). Girth strain-gage data is compared to the predictions of various one-dimensional quasisteady analyses in order to verify the axial drop in motor static pressure during ignition transients as well as quasisteady motor operation. The results of previous modeling of radial flows in the bore, slots, and around grain overhangs are supported by approximate analytical and empirical techniques presented here. The predictions of circumferential flows induced by inhibitor asymmetries, nozzle vectoring, and propellant slump are compared to each other and to subscale cold air and water tunnel measurements to ascertain their validity.

  1. Dosimetric verification of stereotactic radiosurgery/stereotactic radiotherapy dose distributions using Gafchromic EBT3

    SciTech Connect

    Cusumano, Davide; Fumagalli, Maria L.; Marchetti, Marcello; Fariselli, Laura; De Martin, Elena

    2015-10-01

    The aim of this study is to examine the feasibility of using the new Gafchromic EBT3 film in a high-dose stereotactic radiosurgery and radiotherapy quality assurance procedure. Owing to the reduced dimensions of the involved lesions, the feasibility of scanning plan verification films on the scanner plate area with the best uniformity, rather than using a correction mask, was evaluated. For this purpose, the dispersion and reproducibility of film scan signal values were investigated. Uniformity was then quantified in the selected area and was found to be within 1.5% for doses up to 8 Gy. A high-dose threshold level for analyses using this procedure was established by evaluating the sensitivity of the irradiated films. Sensitivity was found to be of the order of centigray for doses up to 6.2 Gy, decreasing for higher doses. The obtained results were used to implement a procedure comparing dose distributions delivered with a CyberKnife system to planned ones. The procedure was validated through single-beam irradiation of a Gafchromic film. The agreement between dose distributions was then evaluated for 13 patients (brain lesions, 5 Gy/die, prescription isodose ~80%) using gamma analysis. Results obtained using gamma test criteria of 5%/1 mm show a pass rate of 94.3%. The calculation of gamma frequency parameters for EBT3 films was shown to depend strongly on the subtraction of unexposed film pixel values from irradiated ones. In the framework of the described dosimetric procedure, EBT3 films proved to be effective in the verification of high doses delivered to lesions with complex shapes adjacent to organs at risk.

  2. SLR data screening; location of peak of data distribution

    NASA Technical Reports Server (NTRS)

    Sinclair, Andrew T.

    1993-01-01

    At the 5th Laser Ranging Instrumentation Workshop held at Herstmonceux in 1984, consideration was given to the formation of on-site normal points by laser stations, and an algorithm was formulated. The algorithm included a recommendation that an iterated 3.0 x rms rejection criterion should be used to screen the data, and that arithmetic means should be formed within the normal point bins of the retained data. From Sept. 1990 onwards, this algorithm and screening criterion have been brought into effect by various laser stations for forming on-site normal points, and small variants of the algorithm are used by most analysis centers for forming normal points from full-rate data, although the data screening criterion they use ranges from about 2.5 to 3.0 x rms. At the CSTG Satellite Laser Ranging (SLR) Subcommission, a working group was set up in Mar. 1991 to review the recommended screening procedure. This paper has been influenced by the discussions of this working group, although the views expressed are primarily those of this author. The main thrust of this paper is that, particularly for single photon systems, a more important issue than data screening is the determination of the peak of a data distribution and hence, the determination of the bias of the peak from the mean. Several methods of determining the peak are discussed.
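
    One common way to implement the screening-plus-peak procedure sketched above, with the peak taken from a kernel density estimate (an illustration, not the working group's adopted method):

        import numpy as np
        from scipy.stats import gaussian_kde

        def screen_and_peak(residuals, k=3.0, max_iter=20):
            """Iterated k*rms rejection, then locate the peak of what remains.

            Returns the estimated peak of the residual distribution and the
            bias of the peak from the mean of the retained data.
            """
            r = np.asarray(residuals, dtype=float)
            for _ in range(max_iter):
                kept = r[np.abs(r - r.mean()) <= k * r.std()]
                if len(kept) == len(r):
                    break  # converged: no more points rejected
                r = kept
            kde = gaussian_kde(r)  # smooth the retained residuals
            grid = np.linspace(r.min(), r.max(), 1000)
            peak = grid[np.argmax(kde(grid))]
            return peak, r.mean() - peak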

  3. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Kranz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.
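
    A hedged sketch of the likelihood machinery described here (two-parameter Weibull with type I censoring), not the NASA software itself: failures contribute the log density, censored (suspended) units the log survival function.

        import numpy as np
        from scipy.optimize import minimize

        def fit_weibull_censored(times, failed):
            """Maximum-likelihood two-parameter Weibull fit with censoring.

            times  : failure time, or censoring time for suspended units
            failed : boolean array, True where the unit actually failed
            Returns (shape beta, characteristic life eta).
            """
            times = np.asarray(times, dtype=float)
            failed = np.asarray(failed, dtype=bool)

            def nll(params):
                beta, eta = np.exp(params)  # optimize logs to keep both positive
                z = times / eta
                log_pdf = np.log(beta / eta) + (beta - 1) * np.log(z) - z**beta
                log_surv = -(z**beta)
                return -(log_pdf[failed].sum() + log_surv[~failed].sum())

            res = minimize(nll, x0=[0.0, np.log(times.mean())], method="Nelder-Mead")
            beta, eta = np.exp(res.x)
            return beta, eta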

  4. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type 1 censoring. The software was verified by reproducing results published by others.

  5. Study On Burst Location Technology under Steady-state in Water Distribution System

    NASA Astrophysics Data System (ADS)

    Liu, Xianpin; Li, Shuping; Wang, Shaowei; He, Fang; He, Zhixun; Cao, Guodong

    2010-11-01

    Based on the characteristics of the hydraulic information observed when a burst occurs in a water distribution system, the correlation between monitoring values and burst location is obtained by mathematical fitting, so that the burst position can be located promptly. The method can effectively make use of SCADA information in the water distribution system to actively locate burst positions. It offers a new approach to burst location in water distribution systems that shortens the burst duration and reduces the impact on urban water supply, economic losses, and the waste of water resources.

  6. Pretreatment verification of IMRT absolute dose distributions using a commercial a-Si EPID

    SciTech Connect

    Talamonti, C.; Casati, M.; Bucciolini, M.

    2006-11-15

    A commercial amorphous silicon electronic portal imaging device (EPID) has been studied to investigate its potential in the field of pretreatment verifications of step and shoot, intensity modulated radiation therapy (IMRT), 6 MV photon beams. The EPID was calibrated to measure absolute exit dose in a water-equivalent phantom at patient level, following an experimental approach, which does not require sophisticated calculation algorithms. The procedure presented was specifically intended to replace the time-consuming in-phantom film dosimetry. The dosimetric response was characterized on the central axis in terms of stability, linearity, and pulse repetition frequency dependence. The a-Si EPID demonstrated a good linearity with dose (within 2% from 1 monitor unit), which represent a prerequisite for the application in IMRT. A series of measurements, in which phantom thickness, air gap between the phantom and the EPID, field size and position of measurement of dose in the phantom (entrance or exit) varied, was performed to find the optimal calibration conditions, for which the field size dependence is minimized. In these conditions (20 cm phantom thickness, 56 cm air gap, exit dose measured at the isocenter), the introduction of a filter for the low-energy scattered radiation allowed us to define a universal calibration factor, independent of field size. The off-axis extension of the dose calibration was performed by applying a radial correction for the beam profile, distorted due to the standard flood field calibration of the device. For the acquisition of IMRT fields, it was necessary to employ home-made software and a specific procedure. This method was applied for the measurement of the dose distributions for 15 clinical IMRT fields. The agreement between the dose distributions, quantified by the gamma index, was found, on average, in 97.6% and 98.3% of the analyzed points for EPID versus TPS and for EPID versus FILM, respectively, thus suggesting a great

  7. Redshift Distributions of Galaxies in the DES Science Verification Shear Catalogue and Implications for Weak Lensing

    SciTech Connect

    Bonnett, C.

    2015-07-21

    We present photometric redshift estimates for galaxies used in the weak lensing analysis of the Dark Energy Survey Science Verification (DES SV) data. Four model- or machine-learning-based photometric redshift methods (annz2, bpz calibrated against BCC-UFig simulations, skynet, and tpz) are analysed. For training, calibration, and testing of these methods, we also construct a catalogue of spectroscopically confirmed galaxies matched against DES SV data. The performance of the methods is evaluated against the matched spectroscopic catalogue, focusing on metrics relevant for weak lensing analyses, with additional validation against COSMOS photo-zs. From the galaxies in the DES SV shear catalogue, which have mean redshift 0.72 ± 0.01 over the range 0.3 < z < 1.3, we construct three tomographic bins with mean redshifts z = {0.45, 0.67, 1.00}. These bins each have systematic uncertainties δz ≲ 0.05 in the mean of the fiducial skynet photo-z n(z). We propagate the errors in the redshift distributions through to their impact on cosmological parameters estimated with cosmic shear and find that they cause shifts in the value of σ8 of approximately 3%. This shift is within the one-sigma statistical errors on σ8 for the DES SV shear catalogue. Further study of the potential impact of systematic differences on the critical surface density, Σcrit, showed levels of bias safely below the statistical power of the DES SV data. We recommend a final Gaussian prior of width 0.05 for the photo-z bias in the mean of n(z) for each of the three tomographic bins, and show that this is a sufficient bias model for the corresponding cosmology analysis.

  8. Distribution and location of Daxx in cervical epithelial cells with high risk human papillomavirus positive

    PubMed Central

    2014-01-01

    Aims To provide a basis for further exploring the effect of the death domain-associated protein (Daxx), and its mechanism, on the progression of cervical carcinoma induced by human papillomavirus (HPV), the distribution and location of Daxx in high-risk HPV (HR-HPV)-positive cervical epithelial cells were analyzed. Methods Samples of normal cervical epithelial cells, cervical intraepithelial neoplasia grade I (CINI), CINII, CINIII, and cervical cancers were collected. Immunohistochemistry was used to analyze the distribution and location of Daxx in the cervical tissue, and indirect immunofluorescence was used to observe the location of Daxx in HPV16-positive Caski cells. Results Under light microscopy, the brown Daxx signal was distributed in the nuclei of normal cervical epithelial cells; in CINI, Daxx was distributed mainly in the nuclear membrane, with a small amount in the nuclei; in CINII, CINIII and cervical cancer, Daxx was distributed intensively in the cytoplasm and cell membrane. Under fluorescence microscopy, the distribution and location of Daxx in Caski cells was similar to that in the cervical cells of CINII, CINIII and cervical cancer. Conclusion During the progression of cervical cancer, Daxx gradually translocates from the nucleus to the nuclear membrane, cytoplasm and cell membrane; in CINII, CINIII and cervical cancer, Daxx is located in the cytoplasm and cell membrane. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/4671548951113870. PMID:24398161

  9. Fault location of underground distribution network based on RBF network optimized by improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhao, Min

    2013-03-01

    To solve the difficult problem of locating single-phase ground faults in coal mine underground distribution networks, a fault location method is presented that uses an RBF network optimized by an improved PSO algorithm, based on the mapping relationship between the wavelet-packet-transform modulus maxima of the transient zero-sequence current in specific frequency bands of the faulted line and the fault point position. Simulation results for different transition resistances and fault distances show that the RBF network optimized by the improved PSO algorithm obtains accurate and reliable fault location results, and its location performance is better than that of a traditional RBF network.
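
    The record does not spell out the improved PSO variant, so the sketch below uses the basic PSO update to tune RBF centers and a shared width for a toy regression from a wavelet-derived feature to fault distance; output weights are solved by least squares at each fitness evaluation. All data and parameter choices are illustrative.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def rbf_design(X, centers, width):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    def fitness(params, X, y, n_centers, dim):
        centers = params[:-1].reshape(n_centers, dim)
        width = abs(params[-1]) + 1e-6
        Phi = rbf_design(X, centers, width)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights
        return np.mean((Phi @ w - y) ** 2)

    # Toy data: feature = wavelet modulus maximum, target = fault distance.
    X = rng.uniform(0, 1, (60, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=60)

    n_centers, dim, n_particles = 5, 1, 20
    pos = rng.uniform(0, 1, (n_particles, n_centers * dim + 1))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y, n_centers, dim) for p in pos])
    gbest = pbest[pbest_f.argmin()]

    for it in range(100):  # basic PSO update (no "improved" tweaks shown)
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([fitness(p, X, y, n_centers, dim) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()]

    print(f"best training MSE: {pbest_f.min():.4f}")
    ```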

  10. A Distribution-class Locational Marginal Price (DLMP) Index for Enhanced Distribution Systems

    NASA Astrophysics Data System (ADS)

    Akinbode, Oluwaseyi Wemimo

    The smart grid initiative is the impetus behind changes that are expected to culminate into an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability and that can optimally couple the transmission and the distribution systems. The DLMP is calculated from a two-stage optimization problem; a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price sensitive resources for the transmission system problem and vice versa is developed and its convergence problem is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis explores, with theoretical proofs, the breakdown of the loss approximation technique when non-positive DLMPs/LMPs occur and discusses a mixed integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems and its superiority to contemporary pricing mechanisms is demonstrated using price responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate. Individual load
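
    The core idea of an LMP, and by analogy a DLMP, is that the nodal price is the dual variable of the nodal power balance in the OPF. The sketch below is a deliberately tiny lossless two-bus DCOPF rather than the two-stage framework of the thesis; it recovers the nodal prices from the duals reported by the HiGHS solver in SciPy. All costs and limits are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Two-bus DCOPF sketch: cheap generator at bus 1, expensive at bus 2,
    # a 40 MW load at bus 2, and a 25 MW line limit between the buses.
    # Variables: x = [g1, g2, flow_12]
    c = [10.0, 30.0, 0.0]                # $/MWh offers; flow costs nothing
    A_eq = [[1.0, 0.0, -1.0],            # bus 1 balance: g1 - flow = 0
            [0.0, 1.0,  1.0]]            # bus 2 balance: g2 + flow = load
    b_eq = [0.0, 40.0]
    bounds = [(0, 100), (0, 100), (-25, 25)]

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    # With HiGHS, eqlin.marginals is the sensitivity of the objective to the
    # RHS of each balance equation, i.e., the nodal price ($/MWh) per bus.
    print("dispatch [g1, g2, flow]:", np.round(res.x, 1))
    print("LMPs ($/MWh):", np.round(res.eqlin.marginals, 1))
    ```

    With the line congested at 25 MW, the buses decouple and the prices separate to the two offer prices, which is precisely the congestion signal a DLMP would carry down into the distribution system.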

  11. Geometric verification

    NASA Technical Reports Server (NTRS)

    Grebowsky, G. J.

    1982-01-01

    Present LANDSAT data formats are reviewed to clarify how the geodetic location and registration capabilities were defined for P-tape products and RBV data. Since there is only one geometric model used in the master data processor, the geometric location accuracy of P-tape products depends on the absolute accuracy of the model, and registration accuracy is determined by the stability of the model. Due primarily to inaccuracies in data provided by the LANDSAT attitude management system, the desired accuracies are obtained only by using ground control points and a correlation process. The verification of system performance with regard to geodetic location requires the capability to determine pixel positions of map points in a P-tape array. Verification of registration performance requires the capability to determine pixel positions of common points (not necessarily map points) in two or more P-tape arrays for a given world reference system scene. Techniques for registration verification can be more varied and automated, since map data are not required. The verification of LACIE extractions is used as an example.

  12. A location-routing-inventory model for designing multisource distribution networks

    NASA Astrophysics Data System (ADS)

    Ahmadi-Javid, Amir; Seddighi, Amir Hossein

    2012-06-01

    This article studies a ternary-integration problem that incorporates location, inventory and routing decisions in designing a multisource distribution network. The objective of the problem is to minimize the total cost of location, routing and inventory. A mixed-integer programming formulation is first presented, and then a three-phase heuristic is developed to solve large-sized instances of the problem. The numerical study indicates that the proposed heuristic is both effective and efficient.
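
    The full location-routing-inventory model is beyond the scope of an abstract, but its location layer can be illustrated with a small facility-location MILP. The sketch below uses scipy.optimize.milp with hypothetical costs and omits the routing and inventory terms of the article's formulation.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # 2 candidate depots, 3 customers. Variables: y[j] (open depot j),
    # x[i, j] (serve customer i from depot j); all binary.
    open_cost = np.array([50.0, 60.0])
    serve_cost = np.array([[4.0, 9.0],    # customer 0
                           [7.0, 3.0],    # customer 1
                           [8.0, 5.0]])   # customer 2
    n_cust, n_dep = serve_cost.shape
    n_var = n_dep + n_cust * n_dep
    c = np.concatenate([open_cost, serve_cost.ravel()])

    # Each customer is assigned to exactly one depot.
    A_assign = np.zeros((n_cust, n_var))
    for i in range(n_cust):
        A_assign[i, n_dep + i * n_dep:n_dep + (i + 1) * n_dep] = 1.0

    # A customer can only be served from an open depot: x[i,j] - y[j] <= 0.
    A_link = np.zeros((n_cust * n_dep, n_var))
    for i in range(n_cust):
        for j in range(n_dep):
            row = i * n_dep + j
            A_link[row, n_dep + row] = 1.0
            A_link[row, j] = -1.0

    constraints = [LinearConstraint(A_assign, 1, 1),
                   LinearConstraint(A_link, -np.inf, 0)]
    res = milp(c, constraints=constraints, integrality=np.ones(n_var),
               bounds=Bounds(0, 1))
    print("open depots:", np.round(res.x[:n_dep]))
    print("total cost:", res.fun)
    ```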

  13. A novel multi-human location method for distributed binary pyroelectric infrared sensor tracking system: Region partition using PNN and bearing-crossing location

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Li, Xiaoshan; Luo, Jing

    2015-01-01

    This paper proposes a novel multi-human location method for distributed binary pyroelectric infrared sensor tracking system based on region partition using probabilistic neural network and bearing-crossing location. The detection space of system is divided into many sub-regions and encoded uniformly. The human region is located by an integrated neural network classifier, which is developed based on the probabilistic neural network ensembles and the Bagging algorithm. The location of a human target can be achieved by first determining a coarse location by this classifier and then a fine location using our previous bearing-crossing location method. Simulation and experimental results have shown that the human region can be judged rapidly and the false detection points of multi-human location can be eliminated effectively. Compared with the bearing-crossing location method, the novel method has significantly improved the locating and tracking accuracy of multiple human targets in infrared sensor tracking system.
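
    The record gives no equations for the bearing-crossing step, but its geometric core is intersecting two bearing rays measured at known sensor positions; the PNN classifier's coarse region would narrow the search beforehand. A minimal sketch with made-up positions and bearings:

    ```python
    import numpy as np

    def bearing_crossing(p1, theta1, p2, theta2):
        """Intersect two bearing rays (angles in radians from the +x axis)
        cast from sensor positions p1 and p2; returns the crossing point."""
        d1 = np.array([np.cos(theta1), np.sin(theta1)])
        d2 = np.array([np.cos(theta2), np.sin(theta2)])
        # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
        return np.asarray(p1) + t[0] * d1

    target = bearing_crossing([0.0, 0.0], np.deg2rad(45.0),
                              [10.0, 0.0], np.deg2rad(135.0))
    print(target)  # -> [5. 5.]
    ```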

  14. Development of Micro Discharge Locator for Distribution Line using Analogue Signal Processing

    NASA Astrophysics Data System (ADS)

    Kumazawa, Takao; Oka, Fujio

    Micro discharges (MDs) such as spark or partial discharges on distribution lines, which are caused by degradation of insulators, insulated wires, bushings, etc., may cause television interference or ground faults. A technique for locating MDs using differences in the arrival times of the electromagnetic pulses radiated by the MDs has been investigated recently. However, the technique requires large and expensive apparatus, such as a digital storage oscilloscope fast enough to record the received pulse signals. We investigated a new technique to estimate the direction of arrival (DOA) of the electromagnetic pulses using analogue signal processing, and produced a prototype MD locator. To evaluate the DOA estimation error, we performed several experiments to locate spark discharges of about 50 nC/pulse on a test distribution line using the MD locator. The average estimation error was about 5 degrees, and the azimuth error was several times larger than the elevation error in most cases, presumably because the azimuth resolving power was lower than that of elevation owing to the configuration of the receiving antennas. We also tried to locate MDs on real distribution lines, and confirmed that there was no significant influence of reflected waves or carrier waves on the DOA estimation.

  15. Using fuzzy sets to model the uncertainty in the fault location process of distribution networks

    SciTech Connect

    Jaerventausta, P.; Verho, P.; Partanen, J. )

    1994-04-01

    In the computerized fault diagnosis of distribution networks the heuristic knowledge of the control center operators can be combined with the information obtained from the network data base and SCADA system. However, the nature of the heuristic knowledge is inexact and uncertain. Also the information obtained from the remote control system contains uncertainty and may be incorrect, conflicting or inadequate. This paper proposes a method based on fuzzy set theory to deal with the uncertainty involved in the process of locating faults in distribution networks. The method is implemented in a prototype version of the distribution network operation support system.

  16. A method to optimize sampling locations for measuring indoor air distributions

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan

    2015-02-01

    Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected for interpolation to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, the labor costs, and the accuracy of the interpolated field. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method has a 32.6% smaller interpolation error than the grid sampling method. We derived the functional relationship between the interpolation error and the sampling size (the number of sampling points); according to this function, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
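
    A minimal sketch of the gradient-based idea: given a pilot estimate of the field, place sensors where the field changes fastest. The paper's method also enforces spacing between samples and iterates with Kriging; this toy version only ranks grid points by gradient magnitude, and the pilot field is synthetic.

    ```python
    import numpy as np

    # Pilot temperature field on a coarse grid (e.g., a first survey or CFD).
    x = np.linspace(0, 4, 81)
    y = np.linspace(0, 3, 61)
    X, Y = np.meshgrid(x, y)
    T = 22 + 3 * np.exp(-((X - 1) ** 2 + (Y - 1.5) ** 2))   # warm plume

    # Gradient-based selection: sample where the field changes fastest.
    gy, gx = np.gradient(T, y, x)
    grad_mag = np.hypot(gx, gy)
    n_sensors = 12
    idx = np.argsort(grad_mag.ravel())[::-1][:n_sensors]
    points = np.column_stack([X.ravel()[idx], Y.ravel()[idx]])
    print(points)  # raw maxima cluster; a real method would also space them
    ```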

  17. Optimal Capacity and Location Assessment of Natural Gas Fired Distributed Generation in Residential Areas

    NASA Astrophysics Data System (ADS)

    Khalil, Sarah My

    With the ever-increasing use of natural gas to generate electricity, natural gas fired microturbines are being installed in residential areas to generate electricity locally. This research work discusses a generalized methodology for assessing the optimal capacity and locations for installing natural gas fired microturbines in a residential distribution network. The overall objective is to place microturbines so as to minimize the system power loss occurring in the electrical distribution network, without requiring any upgrade of the electric feeder. The IEEE 123 Node Test Feeder is selected as the test bed for validating the developed methodology. Three-phase unbalanced electric power flow is run in OpenDSS through a COM server, and the gas distribution network is analyzed using GASWorkS. A continual sensitivity analysis methodology is developed to select multiple DG locations, and an annual simulation is run to minimize annual average losses. The proposed placement of microturbines must be feasible in the gas distribution network and should not require gas pipeline reinforcement. The corresponding gas distribution network is developed in the GASWorkS software, and nodal pressures of the gas system are checked for various cases to investigate whether the existing gas distribution network can accommodate the penetration of the selected microturbines. The results indicate the optimal locations suitable for placing microturbines and the capacity that can be accommodated by the system, based on minimum overall annual average losses as well as the nodal pressures that the gas distribution network can guarantee. The proposed method is generalized and can be used for any IEEE test feeder or an actual residential distribution network.

  18. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    PubMed Central

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient from periodic AE signals, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take two different types of AE source into account for location. PMID:24141266

  19. A simple method to determine leakage location in water distribution based on pressure profiles

    NASA Astrophysics Data System (ADS)

    Prihtiadi, Hafizh; Azwar, Azrul; Djamal, Mitra

    2016-03-01

    Nowadays, pipeline leaks are a serious problem for water distribution in big cities and for governments, demanding action and effective solutions. Several techniques have been developed to improve accuracy, limit losses, and decrease environmental damage; however, these methods require expensive and complex equipment. This paper presents a simple method to determine the leak location using gradient intersection calculations. A simple water distribution system was built on a 4 m PVC pipeline, 15 mm in diameter, with 12 pressure sensors placed along the pipeline. Each sensor measured the pressure at its point and sent the data to a microcontroller. An artificial hole was made between the sixth and seventh sensors. With three holes, the system calculated and analyzed the leak location with an error of 3.67%.
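
    The gradient intersection calculation itself is simple: fit straight lines to the pressure profile upstream and downstream of the suspected leak and intersect them, since the slope of the hydraulic grade line changes at the leak. A sketch with hypothetical sensor readings (the slopes are chosen so the true leak sits at 2.0 m):

    ```python
    import numpy as np

    # Sensor positions (m) and pressures (kPa) along the pipe.
    x = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6, 4.0])
    p = np.array([200.0, 196.0, 192.0, 188.0, 184.0, 180.0,
                  178.5, 177.0, 175.5, 174.0, 172.5])

    up = x <= 2.0                  # assumed split; in practice chosen by fit quality
    a1, b1 = np.polyfit(x[up], p[up], 1)     # upstream gradient
    a2, b2 = np.polyfit(x[~up], p[~up], 1)   # downstream gradient
    x_leak = (b2 - b1) / (a1 - a2)           # intersection of the two lines
    print(f"estimated leak location: {x_leak:.2f} m")
    ```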

  20. Location of lightning stroke on OPGW by use of distributed optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Lu, Lidong; Liang, Yun; Li, Binglin; Guo, Jinghong; Zhang, Hao; Zhang, Xuping

    2014-12-01

    A new method based on a distributed optical fiber sensor (DOFS) to locate the position of a lightning stroke on the optical fiber ground wire (OPGW) is proposed and experimentally demonstrated. In the method, the lightning stroke is treated as a heat-release process at the stroke position, so Brillouin optical time domain reflectometry (BOTDR) with a spatial resolution of 2 m is used as the distributed temperature sensor. To simulate the lightning stroke, an electric anode carrying a high pulsed current and a negative electrode (the OPGW) form a lightning impulse system with a duration of 200 ms. In the experiment, lightning strokes with discharges of 100 coulombs and 200 coulombs were generated, and the DOFS sensitively captured the temperature change at the stroke position during the transient discharge. Experimental results show that the DOFS is a feasible instrument for locating lightning strokes on the OPGW and has excellent potential for the maintenance of electric power transmission lines. Additionally, since the footprint of a lightning stroke is usually within 10 cm while the spatial resolution of a typical DOFS is 1 m or more, the temperature characteristics of such a small area cannot be represented accurately by a DOFS with a large spatial resolution. Therefore, for further application of distributed optical fiber temperature sensors, such as BOTDR and ROTDR, to lightning stroke location on OPGW, it is important to enhance the spatial resolution.

  1. Estimation of Distributed Fermat-Point Location for Wireless Sensor Networking

    PubMed Central

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for wireless sensor networks (WSNs) based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location from the triangle formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point minimizing the total distance to the three vertices of the triangle, and the estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding-box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Furthermore, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes for their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms and improves upon conventional bounding-box strategies. PMID:22163851
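
    The Fermat point at the heart of DFPLE is the point minimizing the total distance to the three triangle vertices. A minimal numerical sketch with hypothetical beacon-intersection coordinates; the bounding-box and refinement stages of DFPLE are omitted:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fermat_point(vertices):
        """Point minimizing the total distance to the triangle vertices."""
        vertices = np.asarray(vertices, float)
        total_dist = lambda p: np.linalg.norm(vertices - p, axis=1).sum()
        res = minimize(total_dist, vertices.mean(axis=0), method="Nelder-Mead")
        return res.x

    # Triangle formed by beacon-range intersections (hypothetical).
    tri = [[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]]
    print(np.round(fermat_point(tri), 3))
    ```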

  2. Comparison of Kodak EDR2 and Gafchromic EBT film for intensity-modulated radiation therapy dose distribution verification

    SciTech Connect

    Sankar, A. . E-mail: asankar_phy@yahoo.co.in; Ayyangar, Komanduri M.; Nehru, R. Mothilal; Gopalakrishna Kurup, P.G.; Murali, V.; Enke, Charles A.; Velmurugan, J.

    2006-01-01

    The quantitative dose validation of intensity-modulated radiation therapy (IMRT) plans requires 2-dimensional (2D) high-resolution dosimetry systems with uniform response over the sensitive region. The present work deals with the clinical use of a commercially available self-developing radiochromic film, Gafchromic EBT film, for IMRT dose verification. Dose response curves were generated for the films using a VXR-16 film scanner. The results obtained with EBT films were compared with the results of Kodak extended dose range 2 (EDR2) films. The EBT film had a linear response in the dose range of 0 to 600 cGy. Dose-related characteristics of the EBT film, such as post-irradiation color growth with time, film uniformity, and the effect of scanning orientation, were studied. There was an increase of up to 8.6% in color density between 2 and 40 hours after irradiation, and a considerable variation, up to 8.5%, in film uniformity over the sensitive region. The quantitative differences between calculated and measured dose distributions were analyzed using the DTA and gamma index with tolerances of 3% dose difference and 3 mm distance-to-agreement. The EDR2 films showed results consistent with the calculated dose distributions, whereas the results obtained using EBT were inconsistent. The variation in film uniformity limits the use of EBT film for conventional large-field IMRT verification. For IMRT with smaller field sizes (4.5 x 4.5 cm), the results obtained with EBT were comparable with those of EDR2 films.

  3. Distributed fiber sensing system with wide frequency response and accurate location

    NASA Astrophysics Data System (ADS)

    Shi, Yi; Feng, Hao; Zeng, Zhoumo

    2016-02-01

    A distributed fiber sensing system merging a Mach-Zehnder interferometer and a phase-sensitive optical time domain reflectometer (Φ-OTDR) is demonstrated for vibration measurement, which requires wide frequency response and accurate location. Two narrow-linewidth lasers with slightly different wavelengths are used to constitute the interferometer and reflectometer, respectively, and a narrowband fiber Bragg grating separates the two wavelengths. In addition, heterodyne detection is applied to maintain the signal-to-noise ratio of the locating signal. Experimental results show that the system has a wide frequency response from 1 Hz to 50 MHz, limited by the sampling rate of the data acquisition card, and a spatial resolution of 20 m, corresponding to the 200 ns pulse width, along a 2.5 km fiber link.

  4. Tomotherapy dose distribution verification using MAGIC-f polymer gel dosimetry

    SciTech Connect

    Pavoni, J. F.; Pike, T. L.; Snow, J.; DeWerd, L.; Baffa, O.

    2012-05-15

    Purpose: This paper presents the application of MAGIC-f gel to three-dimensional dose distribution measurement and its ability to accurately measure the dose distribution from a tomotherapy unit. Methods: A prostate intensity-modulated radiation therapy (IMRT) irradiation was simulated in the gel phantom and the treatment was delivered by a TomoTherapy unit. The dose distribution was evaluated from the R2 distribution measured by magnetic resonance imaging. Results: Overlaying the isodoses showed high similarity between the dose distribution measured with the gel and that expected by the treatment planning system (TPS). A further analysis compared the relative absorbed-dose profiles extracted along selected lines of the measured and expected dose distributions, and these results were also in agreement. A gamma index analysis was also applied to the data, and a high pass rate was achieved (88.4% using 3%/3 mm criteria and 96.5% using 4%/4 mm). A truly three-dimensional analysis compared the dose-volume histograms measured for the planning volumes with those expected by the treatment planning system, with the overlapping curves again in good agreement. Conclusions: These results show that MAGIC-f gel is promising for three-dimensional dose distribution measurements.

  5. Experimental verification of reconstructed absorbers embedded in scattering media by optical power ratio distribution.

    PubMed

    Yamaoki, Toshihiko; Hamada, Hiroaki; Matoba, Osamu

    2016-09-01

    An experimental investigation demonstrating the effectiveness of a method for extracting absorber information in a scattering medium from the output power ratio distribution is presented. In the experiment, two metallic wires sandwiched between three homogeneous scattering media are used as absorbers in transmission geometry. The output power ratio distributions extract the influence of the absorbers and thereby enhance the optical signal. The peak positions of the output power ratio distributions agree with the results of numerical simulation. From the reconstructed tomograms of the scattering media, we confirmed that the two wires can be successfully distinguished in the tomographic image reconstructed from 41×21 output power ratio distributions using continuous-wave light. PMID:27607261

  6. Location, Location, Location!

    ERIC Educational Resources Information Center

    Ramsdell, Kristin

    2004-01-01

    Of prime importance in real estate, location is also a key element in the appeal of romances. Popular geographic settings and historical periods sell, unpopular ones do not--not always with a logical explanation, as the author discovered when she conducted a survey on this topic last year. (Why, for example, are the French Revolution and the…

  7. Data-base location problems in distributed data-base management systems

    SciTech Connect

    Chahande, A.I.

    1989-01-01

    Recent years have witnessed an increasing number of systems, usually heterogeneous, that are geographically distributed and connected by high-capacity communication channels, e.g., the ARPANET, the CYCLADES network, TymNET, etc. In the design and management of such systems, a major portion of the planning is concerned with storing large quantities of information (data) at judiciously selected nodes in the network, in adherence to some optimality criterion. This necessitates analysis of information storage costs, transaction (update and query) costs, response times, processing locality, etc. There are essentially two definitions of optimality: cost measures and performance measures. The two measures of optimality parallel each other. This research essentially considers the minimal-cost objective, but incorporates the performance objectives as well by considering cost penalties for sub-optimal performance. The distributed database design problem is fully characterized by two sub-problems: (a) the design of the fragmentation schema, and (b) the design of the allocation schema for these fragments. These problems have been addressed independently in the literature. This research, appreciating the mutual interdependence of the issues, addresses the distributed database location problem with both aspects considered in unison (logical as well as physical criteria). The problem can be succinctly stated as follows: given the set of user nodes with their respective transaction (update and query) frequencies, and a set of application programs, the database location problem assigns copies of the various database files (or fragments thereof) to candidate nodes such that the total cost is minimized. The decision must trade off the cost of accessing, which is reduced by additional copies, against the cost of updating and storing these additional copies.
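
    The access-versus-update trade-off described above can be shown with a toy single-file allocation model: queries go to the nearest copy, while every update must reach every copy, so extra copies cheapen reads but raise write and storage costs. All rates and costs below are hypothetical, and real formulations add capacity constraints and multiple files.

    ```python
    import numpy as np
    from itertools import combinations

    cost = np.array([[0, 4, 7, 9],
                     [4, 0, 3, 6],
                     [7, 3, 0, 2],
                     [9, 6, 2, 0]], float)   # inter-node access cost
    queries = np.array([80, 10, 40, 30])     # query rate per node
    updates = np.array([5, 2, 1, 1])         # update rate per node
    storage = 20.0                            # cost of holding one copy

    def total_cost(copies):
        q = (queries * cost[:, copies].min(axis=1)).sum()   # nearest-copy reads
        u = (updates[:, None] * cost[:, copies]).sum()      # write-all updates
        return q + u + storage * len(copies)

    best = min((tuple(c) for k in range(1, 5)
                for c in combinations(range(4), k)), key=total_cost)
    print("optimal copy placement:", best, "cost:", total_cost(best))
    ```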

  8. Generation and use of measurement-based 3-D dose distributions for 3-D dose calculation verification.

    PubMed

    Stern, R L; Fraass, B A; Gerhardsson, A; McShan, D L; Lam, K L

    1992-01-01

    A 3-D radiation therapy treatment planning system calculates dose to an entire volume of points and therefore requires a 3-D distribution of measured dose values for quality assurance and dose calculation verification. To measure such a volumetric distribution with a scanning ion chamber is prohibitively time consuming. A method is presented for the generation of a 3-D grid of dose values based on beam's-eye-view (BEV) film dosimetry. For each field configuration of interest, a set of BEV films at different depths is obtained and digitized, and the optical densities are converted to dose. To reduce inaccuracies associated with film measurement of megavoltage photon depth doses, doses on the different planes are normalized using an ion-chamber measurement of the depth dose. A 3-D grid of dose values is created by interpolation between BEV planes along divergent beam rays. This matrix of measurement-based dose values can then be compared to calculations over the entire volume of interest. This method is demonstrated for three different field configurations. Accuracy of the film-measured dose values is determined by 1-D and 2-D comparisons with ion chamber measurements. Film and ion chamber measurements agree within 2% in the central field regions and within 2.0 mm in the penumbral regions. PMID:1620042

  9. Approaches to verification of two-dimensional water quality models

    SciTech Connect

    Butkus, S.R. . Water Quality Dept.)

    1990-11-01

    The verification of a water quality model is the procedure most needed by decision makers evaluating model predictions, but it is often inadequate or not done at all. The results of a properly conducted verification provide decision makers with an estimate of the uncertainty associated with model predictions. Several statistical tests are available for quantifying the performance of a model. Six methods of verification were evaluated using an application of the BETTER two-dimensional water quality model for Chickamauga reservoir. Model predictions for ten state variables were compared to observed conditions from 1989. Spatial distributions of the verification measures showed that the model predictions were generally adequate, except at a few specific locations in the reservoir. The most useful statistic was the mean standard error of the residuals. Quantifiable measures of model performance should be calculated during calibration and verification of future applications of the BETTER model. 25 refs., 5 figs., 7 tabs.
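
    A minimal sketch of the kind of verification measures discussed, computed for synthetic predicted/observed pairs: the mean error (bias), the standard error of the residuals, the RMSE, and a two-sample Kolmogorov-Smirnov test. The data are illustrative, not from the BETTER application.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    observed = rng.normal(8.0, 1.0, 120)              # e.g., DO in mg/L
    predicted = observed + rng.normal(0.2, 0.5, 120)  # model with slight bias

    residuals = predicted - observed
    print(f"mean error:           {residuals.mean():+.3f}")
    print(f"std. error of resid.: {residuals.std(ddof=1):.3f}")
    print(f"RMSE:                 {np.sqrt((residuals ** 2).mean()):.3f}")
    # Two-sample KS test: do predictions and observations share a distribution?
    ks = stats.ks_2samp(predicted, observed)
    print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    ```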

  10. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    SciTech Connect

    Dunn, D.; Rappaport, C.M.; Terzuoli, A.J. Jr.

    1996-10-01

    Although the use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model, based on summing spherical harmonic modes, is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a dissipated power pattern very similar to the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot spot at depth without overheating any other healthy tissue.

  11. Verification, Validation, and Accreditation Challenges of Distributed Simulation for Space Exploration Technology

    NASA Technical Reports Server (NTRS)

    Thomas, Danny; Hartway, Bobby; Hale, Joe

    2006-01-01

    Throughout its rich history, NASA has invested heavily in sophisticated simulation capabilities. These capabilities reside in NASA facilities across the country and with partners around the world. NASA's Exploration Systems Mission Directorate (ESMD) has the opportunity to leverage these considerable investments to resolve technical questions relating to its missions. The distributed nature of the assets, both geographically and organizationally, presents challenges to their combined and coordinated use, but precedents of geographically distributed real-time simulations exist. This paper will show how technological advances in simulation can be employed to address the issues associated with netting NASA simulation assets.

  12. Gas Chromatographic Verification of a Mathematical Model: Product Distribution Following Methanolysis Reactions.

    ERIC Educational Resources Information Center

    Lam, R. B.; And Others

    1983-01-01

    Investigated application of binomial statistics to equilibrium distribution of ester systems by employing gas chromatography to verify the mathematical model used. Discusses model development and experimental techniques, indicating the model enables a straightforward extension to symmetrical polyfunctional esters and presents a mathematical basis…
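
    The abstract does not reproduce the model, but the standard binomial treatment assumes each ester site exchanges independently with probability p, so the product fractions follow binomial coefficients. A sketch for a symmetric difunctional ester (the 0.6 exchange fraction is hypothetical):

    ```python
    from math import comb

    def product_distribution(n_sites, p):
        """Binomial model: fraction of molecules with k of n ester sites
        exchanged, each site reacting independently with probability p."""
        return [comb(n_sites, k) * p**k * (1 - p)**(n_sites - k)
                for k in range(n_sites + 1)]

    # Difunctional (symmetric) ester, 60% methanolysis at equilibrium.
    for k, frac in enumerate(product_distribution(2, 0.6)):
        print(f"{k} site(s) exchanged: {frac:.3f}")
    # -> 0.160, 0.480, 0.360 (unreacted, mono-, di-exchanged)
    ```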

  13. Monte Carlo verification of IMRT dose distributions from a commercial treatment planning optimization system

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Pawlicki, T.; Jiang, S. B.; Li, J. S.; Deng, J.; Mok, E.; Kapur, A.; Xing, L.; Ma, L.; Boyer, A. L.

    2000-09-01

    The purpose of this work was to use Monte Carlo simulations to verify the accuracy of the dose distributions from a commercial treatment planning optimization system (Corvus, Nomos Corp., Sewickley, PA) for intensity-modulated radiotherapy (IMRT). A Monte Carlo treatment planning system has been implemented clinically to improve and verify the accuracy of radiotherapy dose calculations. Further modifications to the system were made to compute the dose in a patient for multiple fixed-gantry IMRT fields. The dose distributions in the experimental phantoms and in the patients were calculated and used to verify the optimized treatment plans generated by the Corvus system. The Monte Carlo calculated IMRT dose distributions agreed with the measurements to within 2% of the maximum dose for all the beam energies and field sizes for both the homogeneous and heterogeneous phantoms. The dose distributions predicted by the Corvus system, which employs a finite-size pencil beam (FSPB) algorithm, agreed with the Monte Carlo simulations and measurements to within 4% in a cylindrical water phantom with various hypothetical target shapes. Discrepancies of more than 5% (relative to the prescribed target dose) in the target region and over 20% in the critical structures were found in some IMRT patient calculations. The FSPB algorithm as implemented in the Corvus system is adequate for homogeneous phantoms (such as prostate) but may result in significant under- or over-estimation of the dose in some cases involving heterogeneities such as the air-tissue, lung-tissue and tissue-bone interfaces.

  14. Commercial milk distribution profiles and production locations. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Deonigi, D.E.; Anderson, D.M.; Wilfert, G.L.

    1993-12-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project was established to estimate radiation doses that people could have received from nuclear operations at the Hanford Site since 1944. For this period iodine-131 is the most important offsite contributor to radiation doses from Hanford operations. Consumption of milk from cows that ate vegetation contaminated by iodine-131 is the dominant radiation pathway for individuals who drank milk. Information has been developed on commercial milk cow locations and commercial milk distribution during 1945 and 1951. The year 1945 was selected because during 1945 the largest amount of iodine-131 was released from Hanford facilities in a calendar year; therefore, 1945 was the year in which an individual was likely to have received the highest dose. The year 1951 was selected to provide data for comparing the changes that occurred in commercial milk flows (i.e., sources, processing locations, and market areas) between World War II and the post-war period. To estimate the doses people could have received from this milk flow, it is necessary to estimate the amount of milk people consumed, the source of the milk, the specific feeding regime used for milk cows, and the amount of iodine-131 contamination deposited on feed.

  15. Commercial milk distribution profiles and production locations. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Deonigi, D.E.; Anderson, D.M.; Wilfert, G.L.

    1994-04-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project was established to estimate radiation doses that people could have received from nuclear operations at the Hanford Site since 1944. For this period iodine-131 is the most important offsite contributor to radiation doses from Hanford operations. Consumption of milk from cows that ate vegetation contaminated by iodine-131 is the dominant radiation pathway for individuals who drank milk (Napier 1992). Information has been developed on commercial milk cow locations and commercial milk distribution during 1945 and 1951. The year 1945 was selected because during 1945 the largest amount of iodine-131 was released from Hanford facilities in a calendar year (Heeb 1993); therefore, 1945 was the year in which an individual was likely to have received the highest dose. The year 1951 was selected to provide data for comparing the changes that occurred in commercial milk flows (i.e., sources, processing locations, and market areas) between World War II and the post-war period. To estimate the doses people could have received from this milk flow, it is necessary to estimate the amount of milk people consumed, the source of the milk, the specific feeding regime used for milk cows, and the amount of iodine-131 contamination deposited on feed.

  16. Locating illicit connections in storm water sewers using fiber-optic distributed temperature sensing.

    PubMed

    Hoes, O A C; Schilperoort, R P S; Luxemburg, W M J; Clemens, F H L R; van de Giesen, N C

    2009-12-01

    A new technique using distributed temperature sensing (DTS) has been developed to find illicit household sewage connections to storm water systems in the Netherlands. DTS allows for the accurate measurement of temperature along a fiber-optic cable, with high spatial (2 m) and temporal (30 s) resolution. We inserted a fiber-optic cable of 1300 m in two storm water drains. At certain locations, significant temperature differences with an intermittent character were measured, indicating inflow of water that was not storm water. In all cases, we found that foul water from households or companies entered the storm water system through an illicit sewage connection. The method of using temperature differences to detect illicit connections in storm water networks is discussed, and the technique of using fiber-optic cables for distributed temperature sensing is explained in detail. The DTS method is a reliable, inexpensive, and practically feasible method to detect illicit connections to storm water systems, and it does not require access to private property. PMID:19735929
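
    A minimal sketch of the detection idea: flag positions along a DTS trace whose temperature deviates strongly from a robust baseline. The real study also exploits the intermittent temporal signature of household discharges; the trace below is synthetic and the threshold is an assumption.

    ```python
    import numpy as np

    # One DTS trace: temperature (deg C) every 2 m along a 1300 m storm sewer.
    rng = np.random.default_rng(7)
    pos = np.arange(0, 1300, 2.0)
    temp = 12.0 + 0.3 * rng.normal(size=pos.size)   # storm water baseline
    temp[210:213] += 6.0                            # warm inflow near 420 m

    # Flag positions that deviate strongly from the robust baseline.
    baseline = np.median(temp)
    mad = np.median(np.abs(temp - baseline))        # robust spread estimate
    hits = pos[np.abs(temp - baseline) > 6 * mad]
    print("suspect inflow locations (m):", hits)
    ```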

  17. Detecting surface runoff location in a small catchment using distributed and simple observation method

    NASA Astrophysics Data System (ADS)

    Dehotin, Judicaël; Breil, Pascal; Braud, Isabelle; de Lavenne, Alban; Lagouy, Mickaël; Sarrazin, Benoît

    2015-06-01

    Surface runoff is one of the hydrological processes involved in floods, pollution transfer, soil erosion and mudslides. Many models allow the simulation and the mapping of surface runoff and erosion hazards. Field observations of this hydrological process are not common, although they are crucial to evaluate surface runoff models and to investigate or assess different kinds of hazards linked to this process. In this study, a simple field monitoring network is implemented to assess the relevance of a surface runoff susceptibility mapping method. The network is based on spatially distributed observations (nine different locations in the catchment) of soil water content and rainfall events. These data are analyzed to determine if surface runoff occurs. Two surface runoff mechanisms are considered: surface runoff by saturation of the soil surface horizon and surface runoff by infiltration excess (also called Hortonian runoff). The monitoring strategy includes continuous records of soil surface water content and rainfall with a 5 min time step. Soil infiltration capacity time series are calculated using field soil water content and in situ measurements of soil hydraulic conductivity. Comparison of soil infiltration capacity and rainfall intensity time series allows detecting the occurrence of surface runoff by infiltration excess. Comparison of surface soil water content with saturated water content values allows detecting the occurrence of surface runoff by saturation of the soil surface horizon. Automatic records were complemented with direct field observations of surface runoff in the experimental catchment after each significant rainfall event. The presented observation method allows the identification of fast and short-lived surface runoff processes at fine spatial and temporal resolution under natural conditions. The results also highlight the relationship between surface runoff and factors usually integrated in surface runoff mapping such as topography, rainfall
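
    The detection logic described above reduces, at each time step, to two comparisons: rainfall intensity against infiltration capacity, and soil water content against saturation. A sketch with hypothetical 5-min records at one monitoring point:

    ```python
    import numpy as np

    rain = np.array([0.0, 2.0, 9.0, 14.0, 6.0, 1.0])        # rainfall, mm/h
    infil_cap = np.array([12.0, 11.0, 9.5, 8.0, 7.0, 6.5])  # infiltration capacity, mm/h
    theta = np.array([0.28, 0.31, 0.35, 0.39, 0.41, 0.40])  # soil water content
    theta_sat = 0.40                                         # saturated water content

    hortonian = rain > infil_cap        # infiltration-excess runoff
    saturation = theta >= theta_sat     # saturation-excess runoff
    for i, (h, s) in enumerate(zip(hortonian, saturation)):
        print(f"t={5*i:3d} min  infiltration-excess={h}  saturation-excess={s}")
    ```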

  18. Aerosol number size distributions over a coastal semi urban location: Seasonal changes and ultrafine particle bursts.

    PubMed

    Babu, S Suresh; Kompalli, Sobhan Kumar; Moorthy, K Krishna

    2016-09-01

    Number-size distribution is one of the important microphysical properties of atmospheric aerosols that influence the aerosol life cycle, aerosol-radiation interactions, and aerosol-cloud interactions. Making use of year-long measurements of aerosol particle number-size distributions (PNSD) over a broad size spectrum (~15-15,000 nm) from a tropical coastal semi-urban location, Trivandrum (Thiruvananthapuram), the size characteristics, their seasonality, and their response to mesoscale and synoptic-scale meteorology are examined. While the accumulation mode contributed most to the annual mean concentration, ultrafine particles (diameter <100 nm) contributed as much as 45% of the total concentration and thus constitute a strong reservoir that adds to the larger particles through size transformation. The size distributions were, in general, bimodal, with well-defined modes in the accumulation and coarse regimes, with mode diameters lying in the ranges 141 to 167 nm and 1150 to 1760 nm, respectively, in different seasons. Although the contribution of the coarse particles to the total number concentration was meager, they contributed significantly to the surface area and volume, especially during transport of marine air mass, highlighting the role of synoptic air mass changes. Significant diurnal variation occurred in the number concentrations and geometric mean diameters, mostly attributable to the dynamics of the local coastal atmospheric boundary layer and the effect of the mesoscale land/sea breeze circulation. Bursts of ultrafine particles (UFP) occurred quite frequently, apparently during periods of land-sea breeze transitions, caused by the strong mixing of precursor-rich urban air mass with the cleaner marine air mass, with the resulting turbulence and boundary layer dynamics aiding nucleation. These ex-situ particles were observed at the surface due to the transport associated with boundary layer dynamics. The particle growth rates from

  19. Relation Between Sprite Distribution and Source Locations of VHF Pulses Derived From JEM- GLIMS Measurements

    NASA Astrophysics Data System (ADS)

    Sato, Mitsuteru; Mihara, Masahiro; Ushio, Tomoo; Morimoto, Takeshi; Kikuchi, Hiroshi; Adachi, Toru; Suzuki, Makoto; Yamazaki, Atsushi; Takahashi, Yukihiro

    2015-04-01

    JEM-GLIMS has been carrying out comprehensive nadir observations of lightning and TLEs using optical instruments and electromagnetic wave receivers since November 2012. Between November 20, 2012 and November 30, 2014, JEM-GLIMS detected 5,048 lightning events, of which 567 were TLEs, mostly elves. To identify sprite occurrences in the transient optical flash data, it is necessary to perform the following data analysis: (1) a subtraction of the appropriately scaled wideband camera data from the narrowband camera data; (2) a calculation of the intensity ratio between different spectrophotometer channels; and (3) an estimation of the polarization and CMC of the parent CG discharges using ground-based ELF measurement data. From a synthetic comparison of these results, it is confirmed that JEM-GLIMS succeeded in detecting sprite events. The VHF receiver (VITF) onboard JEM-GLIMS uses two patch-type antennas separated by a 1.6-m interval and can detect VHF pulses emitted by lightning discharges in the 70-100 MHz frequency range. Using both an interferometric technique and a group delay technique, we can estimate the source locations of VHF pulses excited by lightning discharges. In the event detected at 06:41:15.68565 UT on June 12, 2014 over central North America, the sprite was displaced horizontally by 20 km from the peak location of the parent lightning emission. In this event, a total of 180 VHF pulses were simultaneously detected by VITF. Detailed analysis of these VHF pulses found that the majority of the source locations lay near the area of dim lightning emission, which may imply that the VHF pulses were associated with the in-cloud lightning current. At the presentation, we will show a detailed comparison between the spatiotemporal characteristics of the sprite emission and the source locations of VHF pulses excited by the parent lightning.

  20. Experimental verification of a model describing the intensity distribution from a single mode optical fiber

    SciTech Connect

    Moro, Erik A; Puckett, Anthony D; Todd, Michael D

    2011-01-24

    The intensity distribution of a transmission from a single-mode optical fiber is often approximated using a Gaussian-shaped curve. While this approximation is useful for some applications, such as fiber alignment, it does not accurately describe transmission behavior off the axis of propagation. In this paper, another model is presented, which describes the intensity distribution of the transmission from a single-mode optical fiber. A simple experimental setup is used to verify the model's accuracy, and agreement between model and experiment is established both on and off the axis of propagation. Displacement sensor designs based on the extrinsic optical lever architecture are presented. The behavior of the transmission off the axis of propagation dictates the performance of sensor architectures in which large lateral offsets (25-1500 μm) exist between transmitting and receiving fibers. The practical implications of modeling accuracy over this lateral offset region are discussed as they relate to the development of high-performance intensity-modulated optical displacement sensors. In particular, the sensitivity, linearity, resolution, and displacement range of a sensor are functions of the relative positioning of the sensor's transmitting and receiving fibers. Sensor architectures with high combinations of sensitivity and displacement range are discussed. It is concluded that the utility of the accurate model is in its predictive capability and that this research could lead to an improved methodology for high-performance sensor design.

  1. Experimental study and verification of the residence time distribution using fluorescence spectroscopy and color measurement

    NASA Astrophysics Data System (ADS)

    Aigner, Michael; Lepschi, Alexander; Aigner, Jakob; Garmendia, Izaro; Miethlinger, Jürgen

    2015-05-01

    We report on the inline measurement of residence time (RT) and residence time distribution (RTD) by means of fluorescence spectroscopy [1] and optical color measurements [2]. Measurements of thermoplastics in a variety of single-screw extruders were conducted. To assess the influence of screw configurations, screw speeds and mass throughput on the RT and RTD, tracer particles were introduced into the feeding section and the RT was measured inline in the plasticization unit. Using special measurement probes that can be inserted into 1/2″ - 20 UNF (unified fine thread) bore holes, the mixing ability of either the whole plasticization unit or selected screw regions, e.g., mixing parts, can be validated during the extrusion process. The measurement setups complement each other well, and their combined use can provide further insights into the mixing behavior of single-screw plasticization units.
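
    Once a tracer response curve has been measured inline, the RTD and mean residence time follow from its normalized moments. A sketch with a synthetic tracer response pulse (the Gaussian shape and timings are assumptions, not the paper's data):

    ```python
    import numpy as np

    # Tracer concentration (arbitrary units) recorded at the die, 1 s sampling.
    t = np.arange(0.0, 300.0, 1.0)
    c = np.exp(-0.5 * ((t - 90) / 18) ** 2)   # hypothetical response pulse

    dt = t[1] - t[0]
    E = c / (c.sum() * dt)                    # residence time distribution E(t)
    t_mean = (t * E).sum() * dt               # mean residence time (first moment)
    var = ((t - t_mean) ** 2 * E).sum() * dt  # spread of the RTD
    print(f"mean RT = {t_mean:.1f} s, RTD std = {np.sqrt(var):.1f} s")
    ```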

  2. Verification of dose distribution for volumetric modulated arc therapy total marrow irradiation in a humanlike phantom

    SciTech Connect

    Surucu, Murat; Yeginer, Mete; Kavak, Gulbin O.; Fan, John; Radosevich, James A.; Aydogan, Bulent

    2012-01-15

    Purpose: Volumetric modulated arc therapy (VMAT) treatment planning studies have been reported to provide good target coverage and organs at risk (OARs) sparing in total marrow irradiation (TMI). A comprehensive dosimetric study simulating the clinical situation as close as possible is a norm in radiotherapy before a technique can be used to treat a patient. Without such a study, it would be difficult to make a reliable and safe clinical transition especially with a technique as complicated as VMAT-TMI. To this end, the dosimetric feasibility of VMAT-TMI technique in terms of treatment planning, delivery efficiency, and the most importantly three dimensional dose distribution accuracy was investigated in this study. The VMAT-TMI dose distribution inside a humanlike Rando phantom was measured and compared to the dose calculated using RapidArc especially in the field junctions and the inhomogeneous tissues including the lungs, which is the dose-limiting organ in TMI. Methods: Three subplans with a total of nine arcs were used to treat the planning target volume (PTV), which was determined as all the bones plus the 3 mm margin. Thermoluminescent detectors (TLDs) were placed at 39 positions throughout the phantom. The measured TLD doses were compared to the calculated plan doses. Planar dose for each arc was verified using mapcheck. Results: TLD readings demonstrated accurate dose delivery, with a median dose difference of 0.5% (range: -4.3% and 6.6%) from the calculated dose in the junctions and in the inhomogeneous medium including the lungs. Conclusions: The results from this study suggest that RapidArc VMAT technique is dosimetrically accurate, safe, and efficient in delivering TMI within clinically acceptable time frame.

  3. System performance and performance enhancement relative to element position location errors for distributed linear antenna arrays

    NASA Astrophysics Data System (ADS)

    Adrian, Andrew

    For the most part, antenna phased arrays have traditionally been composed of antenna elements that are very carefully and precisely placed in periodic grid structures, with the relative positions of the elements mechanically fixed as well as possible. There is never an assumption that the relative positions of the elements are a function of time or of some random behavior. In fact, every array design is typically analyzed for the element position tolerances necessary to meet performance requirements such as directivity, beamwidth, sidelobe level, and beam scanning capability. Consider instead an antenna array composed of several radiating elements in which the position of each element is not rigidly, mechanically fixed as in a traditional array. This is not to say that the element placement structure is ignored or irrelevant, but each element is not always in its relative, desired location. Relative element positioning would be analogous to a flock of birds in flight or a swarm of insects: they tend to maintain a near-fixed position within the group, but not always. In the antenna array analog, it would be desirable to maintain a fixed formation, but due to other random processes it is not always possible to maintain perfect formation. This type of antenna array is referred to as a distributed antenna array. A distributed antenna array's inability to maintain perfect formation causes degradations in the array factor pattern: directivity, beamwidth, sidelobe level, and beam pointing error are all adversely affected by element relative position error. This impact is studied as a function of element relative position error for linear antenna arrays. The study is performed over several nominal array element spacings, from lambda to lambda, several sidelobe levels (20 to 50 dB), and multiple array illumination tapers. Knowing the variation in performance, work is also performed to utilize a minimum
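
    A minimal numerical sketch of the effect under study: perturb the element positions of a uniform linear array with Gaussian errors and compare the peak sidelobe level of the array factor against the nominal array. Element count, spacing, and error level are illustrative choices, not the dissertation's cases.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, d = 32, 0.5                 # 32 elements, half-wavelength nominal spacing
    sigma = 0.05                   # position error std, in wavelengths
    x_nom = np.arange(n) * d
    x_err = x_nom + sigma * rng.normal(size=n)

    theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
    u = np.sin(theta)

    def array_factor(x):
        """Normalized broadside array factor magnitude (uniform taper)."""
        return np.abs(np.exp(2j * np.pi * np.outer(u, x)).sum(axis=1)) / n

    for label, x in [("nominal", x_nom), ("perturbed", x_err)]:
        pattern_db = 20 * np.log10(array_factor(x) + 1e-12)
        # Peak sidelobe outside a rough main-beam mask (to the first null).
        psl = pattern_db[np.abs(u) > 2.0 / n].max()
        print(f"{label:9s}: peak sidelobe = {psl:5.1f} dB")
    ```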

  4. Genomic distribution of AFLP markers relative to gene locations for different eukaryotic species

    PubMed Central

    2013-01-01

    Background Amplified fragment length polymorphism (AFLP) markers are frequently used for a wide range of studies, such as genome-wide mapping, population genetic diversity estimation, hybridization and introgression studies, phylogenetic analyses, and detection of signatures of selection. An important issue to be addressed for some of these fields is the distribution of the markers across the genome, particularly in relation to gene sequences. Results Using in-silico restriction fragment analysis of the genomes of nine eukaryotic species we characterise the distribution of AFLP fragments across the genome and, particularly, in relation to gene locations. First, we identify the physical position of markers across the chromosomes of all species. An observed accumulation of fragments around (peri) centromeric regions in some species is produced by repeated sequences, and this accumulation disappears when AFLP bands rather than fragments are considered. Second, we calculate the percentage of AFLP markers positioned within gene sequences. For the typical EcoRI/MseI enzyme pair, this ranges between 28 and 87% and is usually larger than that expected by chance because of the higher GC content of gene sequences relative to intergenic ones. In agreement with this, the use of enzyme pairs with GC-rich restriction sites substantially increases the above percentages. For example, using the enzyme system SacI/HpaII, 86% of AFLP markers are located within gene sequences in A. thaliana, and 100% of markers in Plasmodium falciparum. We further find that for a typical trait controlled by 50 genes of average size, if 1000 AFLPs are used in a study, the number of those within 1 kb distance from any of the genes would be only about 1–2, and only about 50% of the genes would have markers within that distance. Conclusions The high coverage of AFLP markers across the genomes and the high proportion of markers within or close to gene sequences make them suitable for genome scans and
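
    A toy version of the in-silico digest underlying such an analysis: scan a sequence for the recognition sites of two enzymes and count adjacent-cut fragments with one end from each enzyme, the class AFLP typically amplifies. This is a simplification; real analyses also handle overlapping sites, selective primer bases, and fragment-size windows, and the toy sequence below is made up.

    ```python
    import re

    def count_fragments(seq, site_a="GAATTC", site_b="TTAA"):
        """In-silico double digest: count cut sites and fragments whose two
        ends come from different enzymes (the AFLP-amplifiable class)."""
        cuts = sorted([(m.start(), "A") for m in re.finditer(site_a, seq)] +
                      [(m.start(), "B") for m in re.finditer(site_b, seq)])
        mixed = sum(1 for (_, e1), (_, e2) in zip(cuts, cuts[1:]) if e1 != e2)
        return len(cuts), mixed

    genome = "AAGAATTCGGTTAACCGAATTCTTAAGG" * 3   # toy sequence
    n_sites, n_mixed = count_fragments(genome)    # EcoRI (GAATTC) / MseI (TTAA)
    print(f"cut sites: {n_sites}, A-B fragments: {n_mixed}")
    ```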

  5. Verification of the efficiency of chemical disinfection and sanitation measures in in-building distribution systems.

    PubMed

    Lenz, J; Linke, S; Gemein, S; Exner, M; Gebel, J

    2010-06-01

    Previous investigations of biofilms generated in a silicone tube model have shown that the number of colony forming units (CFU) can reach 10^7/cm^2 and the total cell count (TCC) of microorganisms can be up to 10^8 cells/cm^2. The present study focuses on the situation in in-building distribution systems. Different chemical disinfectants were tested for their efficacy against drinking water biofilms in silicone tubes: free chlorine (electrochemically activated), chlorine dioxide, hydrogen peroxide (H2O2), silver, and fruit acids. With regard to the widely differing manufacturers' instructions for the use of their disinfectants, three variations of the silicone tube model were developed to simulate practical use conditions: first, continuous treatment; second, intermittent treatment; and third, external disinfection treatment with monitoring for possible biofilm formation using the Hygiene-Monitor. Working experience showed that it is important to know how to handle the individual disinfectants. Every active ingredient has its own optimal application with respect to concentration, exposure time, and physical parameters such as pH, temperature or redox potential. When used correctly, all products tested were able to reduce the CFU to a value below the detection limit. Most of the active ingredients could not significantly reduce the TCC/cm^2, which means that viable microorganisms may still be present in the system. This raises the question of what happens to these cells. In some cases, SEM images of the biofilm matrix after a successful disinfection still showed biofilm residues. According to these results, no general correlation between CFU/cm^2, TCC/cm^2 and the visualised biofilm matrix on the silicone tube surface (SEM) could be demonstrated after treatment with disinfectants. PMID:20472500

  6. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists of minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, showing that the GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method are proposed, using a time horizon analysis, a distance-based scoring, and consideration of different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099
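
    The GA formulation above lends itself to a compact sketch. The code below (an illustration, not the authors' implementation) evolves candidate sensor sets to minimize the number of non-isolable leaks; the leak-signature function is a hypothetical stand-in for a real leak-sensitivity analysis.

      import random

      N_NODES, N_SENSORS, POP, GENS = 60, 5, 40, 200

      def signature(sensors, leak):
          # Hypothetical stand-in for the binarized leak-sensitivity
          # signature of `leak` as seen at the chosen sensor nodes.
          return tuple((leak * s + s) % 7 > 3 for s in sensors)

      def fitness(sensors):
          # Leaks sharing a signature cannot be isolated from one another.
          groups = {}
          for leak in range(N_NODES):
              groups.setdefault(signature(sensors, leak), []).append(leak)
          return sum(len(g) for g in groups.values() if len(g) > 1)

      def mutate(ind):
          chosen = set(ind)
          chosen.discard(random.choice(ind))          # drop one sensor
          while len(chosen) < N_SENSORS:              # add replacements
              chosen.add(random.randrange(N_NODES))
          return tuple(sorted(chosen))

      pop = [tuple(sorted(random.sample(range(N_NODES), N_SENSORS)))
             for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=fitness)                       # elitist selection
          elite = pop[: POP // 2]
          pop = elite + [mutate(random.choice(elite)) for _ in elite]
      best = min(pop, key=fitness)
      print("sensors:", best, "non-isolable leaks:", fitness(best))

    A semi-exhaustive search over all sensor subsets grows combinatorially with network size, which is why the abstract reports the GA reaching near-optimal placements at a fraction of that cost.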

  7. Impact detection, location, and characterization using spatially weighted distributed fiber optic sensors

    NASA Astrophysics Data System (ADS)

    Spillman, William B., Jr.; Huston, Dryver R.

    1996-11-01

    The ability to detect, localize and characterize impacts in real time is of critical importance for the safe operation of aircraft, spacecraft and other vehicles, particularly in light of the increasing use of high performance composite materials with unconventional and often catastrophic failure modes. Although a number of systems based on fiber optic sensors have been proposed or demonstrated, they have generally proved not to be useful due to difficulty of implementation, limited accuracy or high cost. In this paper, we present the results of an investigation using two spatially weighted distributed fiber optic sensors to detect, localize and characterize impacts along an extended linear region. The two sensors are co-located, with one sensor's impact sensitivity rising from low to high along its length while the other's falls from high to low along the same path; impacts can then be localized and their magnitudes determined using a very simple algorithm. A theoretical description of the technique is given and compared with experimental results.
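
    The "very simple algorithm" follows directly from the opposed sensitivity weightings: if one sensor's sensitivity rises linearly along the monitored length L while the other's falls, the two responses suffice to recover both position and magnitude. A minimal sketch under that assumed linear-weighting model:

      def locate_impact(r1, r2, length=1.0):
          """Recover impact position and magnitude from two spatially
          weighted sensors, assuming opposed linear weightings
          w1(x) = x/L (rising) and w2(x) = 1 - x/L (falling)."""
          total = r1 + r2            # proportional to impact magnitude
          if total == 0:
              return None, 0.0
          return length * r1 / total, total

      # Example: a unit impact at 0.25 L gives r1 = 0.25 and r2 = 0.75.
      print(locate_impact(0.25, 0.75))   # -> (0.25, 1.0)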

  8. Benign epithelial gastric polyps--frequency, location, and age and sex distribution.

    PubMed

    Ljubicić, N; Kujundzić, M; Roić, G; Banić, M; Cupić, H; Doko, M; Zovak, M

    2002-06-01

    A prospective investigation was undertaken to study the frequency, location, and age and sex distribution of various histological types of benign gastric epithelial polyps. Histological type--adenomatous, hyperplastic or fundic gland polyp--was diagnosed on the basis of at least three histological samples taken from the polyp. Biopsy samples were also taken from the antrum and the body of the stomach so that gastritis could be graded and classified, and the presence of H. pylori could be determined by histology. All 6,700 patients who underwent upper gastrointestinal endoscopy during a one-year period participated in this study. Among them, 42 benign gastric epithelial polyps were found in 31 patients: adenomatous gastric polyps in 7 patients, hyperplastic gastric polyps in 21, and fundic gland polyps in 3 patients. All patients with hyperplastic polyps had chronic active superficial gastritis, whereas most of the patients with adenomatous polyps had chronic atrophic gastritis with a high prevalence of intestinal metaplasia. Among the 21 patients with hyperplastic gastric polyps, 16 (76%) were positive for H. pylori infection, in contrast to only 2 patients (29%) with adenomatous gastric polyps and 1 patient (33%) with a fundic gland polyp. The presented data indicate that hyperplastic gastric polyps are the most common and are associated with the presence of chronic active superficial gastritis and concomitant H. pylori infection. Adenomatous polyps are rarer and tend to be associated with chronic atrophic gastritis and intestinal metaplasia. Fundic gland polyps are the rarest type of gastric polyp. PMID:12137323

  9. Lexical distributional cues, but not situational cues, are readily used to learn abstract locative verb-structure associations.

    PubMed

    Twomey, Katherine E; Chang, Franklin; Ambridge, Ben

    2016-08-01

    Children must learn the structural biases of locative verbs in order to avoid making overgeneralisation errors (e.g., *I filled water into the glass). It is thought that they use linguistic and situational information to learn verb classes that encode structural biases. In addition to situational cues, we examined whether children and adults could use the lexical distribution of nouns in the post-verbal noun phrase of transitive utterances to assign novel verbs to locative classes. In Experiment 1, children and adults used lexical distributional cues to assign verb classes, but were unable to use situational cues appropriately. In Experiment 2, adults generalised distributionally-learned classes to novel verb arguments, demonstrating that distributional information can cue abstract verb classes. Taken together, these studies show that human language learners can use a lexical distributional mechanism that is similar to that used by computational linguistic systems that use large unlabelled corpora to learn verb meaning. PMID:27183399

  10. Spatial distribution of soil water repellency in a grassland located in Lithuania

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Novara, Agata

    2014-05-01

    Soil water repellency (SWR) is recognized to be very heterogeneous in time and space and depends on soil type, climate, land use, vegetation and season (Doerr et al., 2002). It prevents or reduces water infiltration, with important impacts on soil hydrology, influencing the mobilization and transport of substances into the soil profile. The reduced infiltration increases surface runoff and soil erosion. SWR also reduces seed emergence and plant growth due to the reduced amount of water in the root zone. Positive aspects of SWR are the increase of soil aggregate stability, organic carbon sequestration and reduction of water evaporation (Mataix-Solera and Doerr, 2004; Diehl, 2013). SWR depends on the soil aggregate size. In fire-affected areas it was found that SWR was more persistent in small-size aggregates (Mataix-Solera and Doerr, 2004; Jordan et al., 2011). However, little information is available about the spatial distribution of SWR according to soil aggregate size. The aim of this work is to study the spatial distribution of SWR in fine earth (<2 mm) and in different aggregate sizes: 2-1 mm, 1-0.5 mm, 0.5-0.25 mm and <0.25 mm. The studied area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08' E, 158 m a.s.l. A plot of 400 m2 (20 x 20 m, with 5 m spacing between sampling points) was established, and 25 soil samples were collected from the topsoil (0-5 cm) and taken to the laboratory. Prior to SWR assessment, the samples were air dried. The persistence of SWR was analysed according to the Water Drop Penetration Time method, which involves placing three drops of distilled water onto the soil surface and registering the time in seconds (s) required for complete penetration of the drop (Wessel, 1988). The data did not follow a Gaussian distribution, so they were log-transformed in order to meet normality requirements. Spatial interpolations were carried out using ordinary kriging. The results showed that SWR in fine earth was on average 2.88 s (coefficient of variation (CV%) = 44.62), 2
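
    The interpolation step described above can be reproduced with standard tooling. A minimal sketch, assuming the pykrige package and using hypothetical sample coordinates and WDPT times in place of the field data:

      import numpy as np
      from pykrige.ok import OrdinaryKriging

      # Hypothetical 20 x 20 m plot sampled on a 5 m grid (25 points).
      gx, gy = np.meshgrid(np.arange(0, 25, 5), np.arange(0, 25, 5))
      x, y = gx.ravel().astype(float), gy.ravel().astype(float)
      wdpt = np.random.lognormal(mean=1.0, sigma=0.4, size=25)  # stand-in times (s)

      # Log-transform to approximate normality, as in the study, then krige.
      ok = OrdinaryKriging(x, y, np.log(wdpt), variogram_model="spherical")
      grid = np.arange(0.0, 20.5, 0.5)
      z_log, variance = ok.execute("grid", grid, grid)
      z = np.exp(z_log)  # back-transform to seconds (ignores lognormal bias)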

  11. Responses of European precipitation distributions and regimes to different blocking locations

    NASA Astrophysics Data System (ADS)

    Sousa, Pedro M.; Trigo, Ricardo M.; Barriopedro, David; Soares, Pedro M. M.; Ramos, Alexandre M.; Liberato, Margarida L. R.

    2016-04-01

    In this work we performed an analysis of the impacts of blocking episodes on seasonal and annual European precipitation and the associated physical mechanisms. Distinct domains were considered in detail, taking into account different blocking center positions spanning between the Atlantic and western Russia. Significant positive precipitation anomalies are found for southernmost areas, while generalized negative anomalies (up to 75% in some areas) occur in large areas of central and northern Europe. This dipole of anomalies is reversed when compared to that observed during episodes of strong zonal flow. We illustrate that the location of the maximum precipitation anomalies follows quite well the longitudinal positioning of the blocking centers and discuss regional and seasonal differences in the precipitation responses. To better understand the precipitation anomalies, we explore the blocking influence on cyclonic activity. The results indicate a split of the storm-tracks north and south of blocking systems, leading to an almost complete reduction of cyclonic centers in northern and central Europe and increases in southern areas, where cyclone frequency doubles during blocking episodes. However, the underlying processes conducive to the precipitation anomalies are distinct between northern and southern European regions, with a significant role of atmospheric instability in southern Europe, and moisture availability as the major driver at higher latitudes. This distinction is consistent with the characteristic patterns of latent heat release from the ocean associated with blocked and strong zonal flow patterns. We also analyzed changes in the full range of the precipitation distribution of several regional sectors during blocked and zonal days. Results show that precipitation reductions in the areas under direct blocking influence are driven by a substantial drop in the frequency of moderate rainfall classes. Contrarily, southwards of

  12. Where exactly am I? Self-location judgements distribute between head and torso.

    PubMed

    Alsmith, Adrian J T; Longo, Matthew R

    2014-02-01

    I am clearly located where my body is located. But is there one particular place inside my body where I am? Recent results have provided apparently contradictory findings about this question. Here, we addressed this issue using a more direct approach than has been used in previous studies. Using a simple pointing task, we asked participants to point directly at themselves, either by manual manipulation of the pointer whilst blindfolded or by visually discerning when the pointer was in the correct position. Self-location judgements in the haptic and visual modalities were highly similar, and were clearly modulated by the starting location of the pointer. Participants most frequently chose to point to one of two likely regions, the upper face or the upper torso, according to which they reached first. These results suggest that the experienced self is neither spread out homogeneously across the entire body nor localised in any single point. Rather, two distinct regions, the upper face and upper torso, appear to be judged as where "I" am. PMID:24457520

  13. On intra-supply chain system with an improved distribution plan, multiple sales locations and quality assurance.

    PubMed

    Chiu, Singa Wang; Huang, Chao-Chih; Chiang, Kuo-Wei; Wu, Mei-Fang

    2015-01-01

    Transnational companies, operating in extremely competitive global markets, always seek to lower various operating costs, such as inventory holding costs, in their intra-supply chain systems. This paper incorporates a cost-reducing product distribution policy into an intra-supply chain system with multiple sales locations and quality assurance studied by [Chiu et al., Expert Syst Appl, 40:2669-2676, (2013)]. Under the proposed cost-reducing distribution policy, an added initial delivery of end items is distributed to multiple sales locations to meet their demand during the production unit's uptime and rework time. After rework, when the remaining production lot passes quality assurance, n fixed-quantity installments of finished items are transported to the sales locations at fixed time intervals. Mathematical modeling and optimization techniques are used to derive closed-form optimal operating policies for the proposed system. Furthermore, the study demonstrates significant savings in stock holding costs for both the production unit and the sales locations. The alternative of outsourcing the product delivery task to an external distributor is analyzed to assist managerial decision making on potential outsourcing issues and to facilitate further reduction in operating costs. PMID:26576330

  14. The hemodynamic effects of the LVAD outflow cannula location on the thrombi distribution in the aorta: A primary numerical study.

    PubMed

    Zhang, Yage; Gao, Bin; Yu, Chang

    2016-09-01

    Although a growing number of patients undergo LVAD implantation for heart failure treatment, thrombi remain a devastating complication for patients supported by an LVAD. The LVAD outflow cannula location and the thrombi generation sources were hypothesized to affect the thrombi distribution in the aorta. To test this hypothesis, numerical studies were conducted using computational fluid dynamics (CFD). Two anastomotic configurations, in which the LVAD outflow cannula is anastomosed to the anterior or lateral ascending aortic wall (named anterior and lateral configurations, respectively), were designed. Particles whose sizes are the same as those of thrombi are released at the LVAD outflow cannula and at the aortic valve (named thrombiP and thrombiL, respectively) to calculate the distribution of thrombi. The simulation results demonstrate that the thrombi distribution in the aorta is significantly affected by the LVAD outflow cannula location. In the anterior configuration, the probability of thrombi entering the three branches is 23.60%, while in the lateral configuration it is 36.68%. Similarly, in the anterior configuration, the probabilities of thrombi entering the brachiocephalic artery, left common carotid artery and left subclavian artery are 8.51%, 9.64% and 5.45%, respectively, while in the lateral configuration they are 11.39%, 3.09% and 22.20%, respectively. Moreover, the origins of thrombi affect their distribution in the aorta. In the anterior configuration, thrombiP have a lower probability of entering the three branches than thrombiL (12% vs. 25%). In contrast, in the lateral configuration, thrombiP have a higher probability of entering the three branches than thrombiL (47% vs. 35%). In brief, the LVAD outflow cannula location significantly affects the distribution of thrombi in the aorta. Thus, in clinical practice, the selection of the LVAD outflow location and the risk of thrombi formed in the left ventricle should be paid more

  15. Condensation of earthquake location distributions: Optimal spatial information encoding and application to multifractal analysis of south Californian seismicity.

    PubMed

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2015-08-01

    We present the "condensation" method that exploits the heterogeneity of the probability distribution functions (PDFs) of event locations to improve the spatial information content of seismic catalogs. As its name indicates, the condensation method reduces the size of seismic catalogs while improving the access to the spatial information content of seismic catalogs. The PDFs of events are first ranked by decreasing location errors and then successively condensed onto better located and lower variance event PDFs. The obtained condensed catalog differs from the initial catalog by attributing different weights to each event, the set of weights providing an optimal spatial representation with respect to the spatially varying location capability of the seismic network. Synthetic tests on fractal distributions perturbed with realistic location errors show that condensation improves spatial information content of the original catalog, which is quantified by the likelihood gain per event. Applied to Southern California seismicity, the new condensed catalog highlights major mapped fault traces and reveals possible additional structures while reducing the catalog length by ∼25%. The condensation method allows us to account for location error information within a point based spatial analysis. We demonstrate this by comparing the multifractal properties of the condensed catalog locations with those of the original catalog. We evidence different spatial scaling regimes characterized by distinct multifractal spectra and separated by transition scales. We interpret the upper scale as to agree with the thickness of the brittle crust, while the lower scale (2.5 km) might depend on the relocation procedure. Accounting for these new results, the epidemic type aftershock model formulation suggests that, contrary to previous studies, large earthquakes dominate the earthquake triggering process. This implies that the limited capability of detecting small magnitude events cannot be used

  16. Condensation of earthquake location distributions: Optimal spatial information encoding and application to multifractal analysis of south Californian seismicity

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2015-08-01

    We present the "condensation" method that exploits the heterogeneity of the probability distribution functions (PDFs) of event locations to improve the spatial information content of seismic catalogs. As its name indicates, the condensation method reduces the size of seismic catalogs while improving the access to the spatial information content of seismic catalogs. The PDFs of events are first ranked by decreasing location errors and then successively condensed onto better located and lower variance event PDFs. The obtained condensed catalog differs from the initial catalog by attributing different weights to each event, the set of weights providing an optimal spatial representation with respect to the spatially varying location capability of the seismic network. Synthetic tests on fractal distributions perturbed with realistic location errors show that condensation improves spatial information content of the original catalog, which is quantified by the likelihood gain per event. Applied to Southern California seismicity, the new condensed catalog highlights major mapped fault traces and reveals possible additional structures while reducing the catalog length by ˜25 % . The condensation method allows us to account for location error information within a point based spatial analysis. We demonstrate this by comparing the multifractal properties of the condensed catalog locations with those of the original catalog. We evidence different spatial scaling regimes characterized by distinct multifractal spectra and separated by transition scales. We interpret the upper scale as to agree with the thickness of the brittle crust, while the lower scale (2.5 km) might depend on the relocation procedure. Accounting for these new results, the epidemic type aftershock model formulation suggests that, contrary to previous studies, large earthquakes dominate the earthquake triggering process. This implies that the limited capability of detecting small magnitude events cannot be

  17. Distributed fiber optic sensor employing phase generated carrier for disturbance detection and location

    NASA Astrophysics Data System (ADS)

    Xu, Haiyan; Wu, Hongyan; Zhang, Xuewu; Zhang, Zhuo; Li, Min

    2015-05-01

    The distributed optical fiber sensor is a new type of system that can be used for monitoring and inspection over long distances and in strong-EMI conditions. A method of external modulation with a phase modulator is proposed in this paper to improve the accuracy of disturbance positioning in a distributed optical fiber sensor. We construct a distributed disturbance detection system based on a Michelson interferometer, with a phase modulator attached to the sensing fiber in front of the Faraday rotation mirror (FRM), to shift the signal produced by interference of the two light beams reflected by the FRM to a high frequency, while other signals remain at low frequency. Through a high-pass filter and a phase retrieval circuit, a signal proportional to the external disturbance is acquired. The accuracy of disturbance positioning with this signal can be greatly improved. The method is quite simple and easy to implement. Theoretical analysis and experimental results show that this method can effectively improve the positioning accuracy.
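
    Phase generated carrier signals of this kind are commonly demodulated with the arctangent method: mixing the interference signal with the carrier and its second harmonic yields quadrature terms whose ratio recovers the disturbance phase. A brief illustrative sketch with assumed parameters (not the authors' circuit):

      import numpy as np

      fs, fc, n = 200_000, 12_500, 20_000        # sample rate, carrier, samples
      t = np.arange(n) / fs
      phi = 0.8 * np.sin(2 * np.pi * 40 * t)     # stand-in disturbance phase
      C = 2.63                                   # modulation depth: J1(C) ~ J2(C)
      s = np.cos(C * np.cos(2 * np.pi * fc * t) + phi)   # interference signal

      def lowpass(x, taps=501):                  # crude moving-average LPF
          return np.convolve(x, np.ones(taps) / taps, mode="same")

      i = lowpass(s * np.cos(2 * np.pi * fc * t))    # ~ -J1(C) * sin(phi)
      q = lowpass(s * np.cos(4 * np.pi * fc * t))    # ~ -J2(C) * cos(phi)
      phi_recovered = np.unwrap(np.arctan2(-i, -q))  # disturbance estimate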

  18. Power approximation for the van Elteren test based on location-scale family of distributions.

    PubMed

    Zhao, Yan D; Qu, Yongming; Rahardja, Dewi

    2006-01-01

    The van Elteren test, a type of stratified Wilcoxon-Mann-Whitney test for comparing two treatments while accounting for stratum effects, has been used to replace the analysis of variance when the normality assumption is seriously violated. Sample size estimation methods for the van Elteren test have been proposed and evaluated previously. However, in designing an active-comparator trial where a sample of responses from the new treatment is available but the patient response data for the comparator are limited to summary statistics, the existing methods are either inapplicable or poorly behaved. In this paper we develop a new method for active-comparator trials assuming the responses from both treatments come from the same location-scale family. Theory and simulations have shown that the new method performs well when the location-scale assumption holds and works reasonably well when the assumption does not hold. Thus, the new method is preferred when computing sample sizes for the van Elteren test in active-comparator trials. PMID:17146980
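
    For reference, the van Elteren statistic is a weighted sum of per-stratum Wilcoxon rank sums, with each stratum weighted by 1/(m + n + 1). The sketch below computes the standardized statistic for the no-ties case; the data are hypothetical and the formula should be checked against a statistics reference before use.

      import math

      def van_elteren(strata):
          """Stratified Wilcoxon (van Elteren) Z statistic, ignoring ties.
          `strata` is a list of (treatment_values, control_values) pairs."""
          stat = mean = var = 0.0
          for treat, ctrl in strata:
              m, n = len(treat), len(ctrl)
              ranks = {v: r + 1 for r, v in enumerate(sorted(treat + ctrl))}
              w = sum(ranks[v] for v in treat)       # treatment rank sum
              stat += w / (m + n + 1)
              mean += m / 2.0
              var += m * n / (12.0 * (m + n + 1))
          return (stat - mean) / math.sqrt(var)

      # Hypothetical two-stratum example:
      z = van_elteren([([3.1, 4.2, 5.0], [2.0, 2.5]),
                       ([7.1, 8.3], [6.0, 6.5, 6.9])])
      print(round(z, 3))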

  19. Syringe filtration methods for examining dissolved and colloidal trace element distributions in remote field locations

    NASA Technical Reports Server (NTRS)

    Shiller, Alan M.

    2003-01-01

    It is well-established that sampling and sample processing can easily introduce contamination into dissolved trace element samples if precautions are not taken. However, work in remote locations sometimes precludes bringing bulky clean lab equipment into the field and likewise may make timely transport of samples to the lab for processing impossible. Straightforward syringe filtration methods are described here for collecting small quantities (15 mL) of 0.45- and 0.02-microm filtered river water in an uncontaminated manner. These filtration methods take advantage of recent advances in analytical capabilities that require only small amounts of water for analysis of a suite of dissolved trace elements. Filter clogging and solute rejection artifacts appear to be minimal, although some adsorption of metals and organics does affect the first approximately 10 mL of water passing through the filters. Overall the methods are clean, easy to use, and provide reproducible representations of the dissolved and colloidal fractions of trace elements in river waters. Furthermore, sample processing materials can be prepared well in advance in a clean lab and transported cleanly and compactly to the field. Application of these methods is illustrated with data from remote locations in the Rocky Mountains and along the Yukon River. Evidence from field flow fractionation suggests that the 0.02-microm filters may provide a practical cutoff to distinguish metals associated with small inorganic and organic complexes from those associated with silicate and oxide colloids.

  20. MPL-Net Measurements of Aerosol and Cloud Vertical Distributions at Co-Located AERONET Sites

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Berkoff, Timothy A.; Spinhirne, James D.; Tsay, Si-Chee; Holben, Brent; Starr, David OC. (Technical Monitor)

    2002-01-01

    In the early 1990s, the first small, eye-safe, and autonomous lidar system was developed, the Micropulse Lidar (MPL). The MPL acquires signal profiles of backscattered laser light from aerosols and clouds. The signals are analyzed to yield multiple layer heights, optical depths of each layer, average extinction-to-backscatter ratios for each layer, and profiles of extinction in each layer. In 2000, several MPL sites were organized into a coordinated network, called MPL-Net, by the Cloud and Aerosol Lidar Group at NASA Goddard Space Flight Center (GSFC) using funding provided by the NASA Earth Observing System. In addition to the funding provided by NASA EOS, the NASA CERES Ground Validation Group supplied four MPL systems to the project, and the NASA TOMS group contributed their MPL for work at GSFC. The Atmospheric Radiation Measurement Program (ARM) also agreed to make their data available to the MPL-Net project for processing. In addition to the initial NASA and ARM operated sites, several other independent research groups have also expressed interest in joining the network using their own instruments. Finally, a limited amount of EOS funding was set aside to participate in various field experiments each year. The NASA Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) project also provides funds to deploy their MPL during ocean research cruises. All together, the MPL-Net project has participated in four major field experiments since 2000. Most MPL-Net sites and field experiment locations are also co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Therefore, at these locations data are collected on both aerosol and cloud vertical structure as well as column optical depth and sky radiance. Real-time data products are now available from most MPL-Net sites. Our real-time products are generated at times of AERONET aerosol optical depth (AOD) measurements. The AERONET AOD is used as input to our

  1. Distribution of deciduous stands in villages located in coniferous forest landscapes in Sweden.

    PubMed

    Mikusiński, Grzegorz; Angelstam, Per; Sporrong, Ulf

    2003-12-01

    Termination of fire, along with active removal of deciduous trees in favor of conifers and anthropogenic transformation of productive forest into agricultural land, has transformed northern European coniferous forests and reduced their deciduous component. Locally, however, in the villages, deciduous trees and stands were maintained, and have more recently regenerated on abandoned agricultural land. We hypothesize that the present distribution of the deciduous component is related to the village in-field/out-field zonation in different regions, which emerges from physical conditions and recent economic development expressed as land-use change. We analyzed the spatial distribution of deciduous stands in the in-field and out-field zones of villages in 6 boreal/hemiboreal Swedish regions (Norrbotten, Angermanland, Jämtland, Dalarna, Bergslagen, Småland). In each region, 6 individual 5 x 5 km quadrats centered on village areas were selected. We found significant regional differences in the deciduous component (DEC) in different village zones. At the scale of villages, Angermanland had the highest mean proportion of DEC (17%) and Jämtland the lowest (2%). However, the amounts of DEC varied systematically between in-field and out-field zones. DEC was highest in the in-field in the south (Småland), but generally low further north. By contrast, the amount of DEC in the out-field was highest in the north. The relative amount of DEC in the forest edge peaked in landscapes with the strongest decline in active agriculture (Angermanland, Dalarna, Bergslagen). Because former and present local villages are vital for biodiversity linked to the deciduous component, our results indicate a need for integrated management of deciduous forest within entire landscapes. This study shows that simplified satellite data are useful for estimating the spatial distribution of deciduous trees and stands at the landscape scale. However, for detailed studies better thematic resolution is

  2. Circumferential distribution and location of Mallory-Weiss tears: recent trends

    PubMed Central

    Okada, Mayumi; Ishimura, Norihisa; Shimura, Shino; Mikami, Hironobu; Okimoto, Eiko; Aimi, Masahito; Uno, Goichi; Oshima, Naoki; Yuki, Takafumi; Ishihara, Shunji; Kinoshita, Yoshikazu

    2015-01-01

    Background and study aims: Mallory-Weiss tears (MWTs) are not only a common cause of acute nonvariceal gastrointestinal bleeding but also an iatrogenic adverse event related to endoscopic procedures. However, changes in the clinical characteristics and endoscopic features of MWTs over the past decade have not been reported. The aim of this study was to investigate recent trends in the etiology and endoscopic features of MWTs. Patients and methods: We retrospectively reviewed the medical records of patients with a diagnosis of MWT at our university hospital between August 2003 and September 2013. The information regarding etiology, clinical parameters, endoscopic findings, therapeutic interventions, and outcome was reviewed. Results: A total of 190 patients with MWTs were evaluated. More than half (n = 100) of the cases occurred during endoscopic procedures; cases related to alcohol consumption were less frequent (n = 13). MWTs were most frequently located in the lesser curvature of the stomach and the right lateral wall (2- to 4-o'clock position) of the esophagus, irrespective of the cause. The condition of more than 90% of the patients (n = 179) was improved by conservative or endoscopic treatment, whereas 11 patients (5.8%) required blood transfusion. Risk factors for blood transfusion were a longer laceration (odds ratio [OR] 2.3) and a location extending from the esophagus to the stomach (OR 5.3). Conclusions: MWTs were frequently found on the right lateral wall (2- to 4-o'clock position) of the esophagus aligned with the lesser curvature of the stomach, irrespective of etiology. Longer lacerations extending from the esophagus to the gastric cardia were associated with an elevated risk for bleeding and requirement for blood transfusion. PMID:26528495

  3. Optimal location through distributed algorithm to avoid energy hole in mobile sink WSNs.

    PubMed

    Qing-hua, Li; Wei-hua, Gui; Zhi-gang, Chen

    2014-01-01

    In a multihop data-collection sensor network, nodes near the sink must relay data from remote nodes and thus have a much faster energy dissipation rate and suffer premature death. This phenomenon causes an energy hole near the sink, seriously damaging network performance. In this paper, we first compute, through theoretical analysis, the energy consumption of each node when the sink is placed at an arbitrary point in the network; we then propose an online distributed algorithm that adaptively adjusts the sink position based on the actual energy consumption of each node to reach the actual maximum lifetime. Theoretical analysis and experimental results show that the proposed algorithms significantly improve the lifetime of the wireless sensor network, lowering the residual energy of the network at death by more than 30%. Moreover, the cost of moving the sink is relatively small. PMID:24895668
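
    As a simplified illustration of energy-aware sink repositioning (not the authors' algorithm), the sink can be nudged each round toward the consumption-weighted centroid of the nodes, so that heavily loaded regions pull the sink closer:

      def next_sink_position(sink, nodes, energy_used, step=0.2):
          """nodes: list of (x, y); energy_used: per-node energy consumed
          this round. Moves the sink a fraction `step` toward the
          consumption-weighted centroid (illustrative heuristic)."""
          total = sum(energy_used)
          cx = sum(x * e for (x, _), e in zip(nodes, energy_used)) / total
          cy = sum(y * e for (_, y), e in zip(nodes, energy_used)) / total
          sx, sy = sink
          return sx + step * (cx - sx), sy + step * (cy - sy)

      # Example round: the node at (4, 4) is draining fastest.
      print(next_sink_position((0.0, 0.0), [(0, 1), (1, 0), (4, 4)],
                               [1.0, 1.0, 3.0]))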

  4. Arsenic distribution in soils and rye plants of a cropland located in an abandoned mining area.

    PubMed

    Álvarez-Ayuso, Esther; Abad-Valle, Patricia; Murciego, Ascensión; Villar-Alonso, Pedro

    2016-01-15

    A mining-impacted cropland was studied in order to assess its As pollution level and the derived environmental and health risks. Soil profile samples (0-50 cm) and rye plant samples were collected at different distances (0-150 m) from the nearby mine dump and analyzed for their As content and distribution. These cropland soils were sandy, acidic and poor in organic matter and Fe/Al oxides. The soil total As concentrations (38-177 mg kg(-1)) and, especially, the soil soluble As concentrations (0.48-4.1 mg kg(-1)) greatly exceeded the safe limits for agricultural use of soils. Moreover, the soil As contents prone to mobilization could rise up to 25-69% of total As levels, as determined using (NH4)2SO4, NH4H2PO4 and (NH4)2C2O4·H2O as sequential extractants. Arsenic in rye plants was primarily distributed in roots (3.4-18.8 mg kg(-1)), with restricted translocation to shoots (TF=0.05-0.26) and grains (TF=<0.02-0.14). The mechanism for this excluder behavior is likely related to arsenate reduction to arsenite in roots, followed by its complexation with thiols, as suggested by the high arsenite level in rye roots (up to 95% of the total As content) and the negative correlation between thiol concentrations in rye roots and As concentrations in rye shoots (|R|=0.770; p<0.01). Accordingly, in spite of the high mobile and mobilizable As contents in soils, As concentrations in rye above-ground tissues comply with the European regulation on undesirable substances in animal feed. Likewise, rye grain As concentrations were below the maximum tolerable concentration in cereals established by international legislation. PMID:26519583

  5. Noninvasive determination of the location and distribution of DNAPL using advanced seismic reflection techniques.

    PubMed

    Temples, T J; Waddell, M G; Domoracki, W J; Eyer, J

    2001-01-01

    Recent advances in seismic reflection amplitude analysis (e.g., amplitude versus offset (AVO), bright spot mapping) were applied to directly detect the presence of subsurface DNAPL (e.g., CCl4) at the 216-Z-9 crib, 200 West Area, DOE Hanford Site, Washington. Modeling was performed to determine what type of anomaly might be present, and the model results were incorporated in the interpretation of the seismic data to determine the location of any seismic amplitude anomalies associated with the presence of high concentrations of CCl4. Seismic reflection profiles were collected and analyzed for the presence of DNAPL. Structure contour maps of the contact between the Hanford fine unit and the Plio/Pleistocene unit and between the Plio/Pleistocene unit and the caliche layer were interpreted to determine the potential DNAPL flow direction. Models indicate that the contact between the Plio/Pleistocene unit and the caliche should have a positive reflection coefficient. When high concentrations of CCl4 are present, the reflection coefficient of this interface displays a noticeable positive increase in seismic amplitude (i.e., a bright spot). Amplitude data contoured on the Plio/Pleistocene-caliche boundary display high values indicating the presence of DNAPL to the north and east of the crib area. The seismic data agree well with the well control in areas of high concentrations of CCl4. PMID:11341013

  6. Impacts to the chest of PMHSs - Influence of impact location and load distribution on chest response.

    PubMed

    Holmqvist, Kristian; Svensson, Mats Y; Davidsson, Johan; Gutsche, Andreas; Tomasch, Ernst; Darok, Mario; Ravnik, Dean

    2016-02-01

    The chest response of the human body has been studied for several load conditions, but it is not well known for steering wheel rim-to-chest impacts in heavy goods vehicle frontal collisions. The aim of this study was to determine the response of the human chest in a set of simulated steering wheel impacts. PMHS tests were carried out and analysed. The steering wheel load pattern was represented by a rigid pendulum with a straight bar-shaped front; a crash test dummy chest calibration pendulum was used for comparison. In this study, a set of rigid bar impacts was directed at various heights on the chest, spanning approximately 120 mm around the fourth intercostal space. The impact energy was set below a level estimated to cause rib fracture. The results were evaluated with respect to the influence of impactor shape and impact height on compression and viscous chest injury criteria responses. The results showed that the bar impacts consistently produced smaller scaled chest compressions than the hub; the middle bar responses were around 90% of the hub responses. A superior bar impact produced less chest compression; the average response was 86% of the middle bar response. For inferior bar impacts, the chest compression response was 116% of the chest compression at the middle. The damping properties of the chest caused the compression in high-speed bar impacts to decrease to 88% of that in low-speed impacts. From the analysis it could be concluded that the bar impact shape produces lower chest criteria responses than the hub. Further, the bar responses depend on the impact location on the chest; inertial and viscous effects of the upper body affect the responses. The results can be used to assess the responses of human substitutes such as anthropomorphic test devices and finite element human body models, which will benefit the development process of heavy goods vehicle safety systems. PMID:26687541

  7. Drop size distributions and related properties of fog for five locations measured from aircraft

    NASA Technical Reports Server (NTRS)

    Zak, J. Allen

    1994-01-01

    Fog drop size distributions were collected from aircraft as part of the Synthetic Vision Technology Demonstration Program. Three west coast marine advection fogs, one frontal fog, and a radiation fog were sampled from the top of the cloud to the bottom as the aircraft descended on a 3-degree glideslope. Drop size versus altitude versus concentration is shown in three-dimensional plots for each 10-meter altitude interval from 1-minute samples. Also shown are median volume radius and liquid water content. Advection fogs contained the largest drops, with median volume radii of 5-8 micrometers, although the drop sizes in the radiation fog were also large just above the runway surface. Liquid water content increased with height, and the total number of drops generally increased with time. Multimodal variations in number density and particle size were noted in most samples, with a peak concentration of small drops (2-5 micrometers) at low altitudes, a midaltitude peak of 5-11 micrometer drops, and a high-altitude peak of the larger drops (11-15 micrometers and above). These observations are compared with others and corroborate previous results on gross fog properties, although there is considerable variation with time and altitude even within the same type of fog.

  8. Estimation of hydrothermal deposits location from magnetization distribution and magnetic properties in the North Fiji Basin

    NASA Astrophysics Data System (ADS)

    Choi, S.; Kim, C.; Park, C.; Kim, H.

    2013-12-01

    The North Fiji Basin is one of the youngest back-arc basins in the southwest Pacific (opening since about 12 Ma). We performed marine magnetic and bathymetry surveys in the North Fiji Basin in April 2012 to search for submarine hydrothermal deposits. We acquired the magnetic and bathymetry datasets using a Multi-Beam Echo Sounder EM120 (Kongsberg Co.) and an Overhauser Proton Magnetometer SeaSPY (Marine Magnetics Co.). Data processing was conducted to obtain the detailed seabed topography, the magnetic anomaly, the reduction to the pole (RTP), the analytic signal and the magnetization. The study area comprises two areas (KF-1 (longitude: 173.5 ~ 173.7 and latitude: -16.2 ~ -16.5) and KF-3 (longitude: 173.4 ~ 173.6 and latitude: -18.7 ~ -19.1)) on the Central Spreading Ridge (CSR) and one area (KF-2 (longitude: 173.7 ~ 174 and latitude: -16.8 ~ -17.2)) at the Triple Junction (TJ). The seabed topography of KF-1 shows a thin horst between two grabens trending NW-SE. The magnetic properties of KF-1 show high magnetic anomalies in the central part and a magnetic lineament structure trending E-W. In the magnetization distribution of KF-1, the low magnetization zone matches well with a strong analytic signal in the northeastern part. KF-2 contains the TJ; its seabed topography forms a Y-shape and is elevated at the center of the TJ. The magnetic properties of KF-2 display high magnetic anomalies in the N-S spreading ridge center and the northwestern part. In the magnetization distribution of KF-2, the low magnetization zone matches well with a strong analytic signal in the northeastern part. The seabed topography of KF-3 presents a flat, elevated, dome-like structure at the center axis, with some seamounts scattered around the axis. The magnetic properties of KF-3 show high magnetic anomalies in the N-S spreading ridge center part. In the magnetization of KF-3, the low magnetization zone does not match the strong analytic signal in this area. The difference of KF-3

  9. Characteristics of size distributions at urban and rural locations in New York

    NASA Astrophysics Data System (ADS)

    Bae, M.-S.; Schwab, J. J.; Hogrefe, O.; Frank, B. P.; Lala, G. G.; Demerjian, K. L.

    2010-01-01

    Paired nano- and long-tube Scanning Mobility Particle Sizer (SMPS) systems were operated for four different intensive field campaigns in New York State. Two of these campaigns were at Queens College in New York City, during the summer of 2001 and the winter of 2004. The other field campaigns were at rural sites in New York State. The data with the computed diffusion loss corrections for the sampling lines and the SMPS instruments were examined, and the combined SMPS data sets for each campaign were obtained. The diffusion corrections significantly affect total number concentrations and, in New York City, affect the mode structure of the size distributions. The relationships between merged and integrated SMPS total number concentrations with the diffusion loss corrections and the CPC number concentrations yield statistically significant increases (closer to 1) in the slope and correlation coefficient compared to the uncorrected values. The measurements are compared to PM2.5 mass concentrations and ion balance indications of aerosol acidity. Periods of low observed PM2.5 mass, high number concentration, and low median diameter due to small fresh particles are associated with primary emissions for the urban sites, and with particle nucleation and growth for the rural sites. The observations of high PM2.5 mass, lower number concentrations, and higher median diameter are mainly due to an enhancement of coagulation and/or condensation processes in relatively aged air. There are statistically different values for the condensation sink (CS) between urban and rural areas. While there is good association (r^2 > 0.5) between the condensation sink (CS) in the range of 8.35-283.9 nm and PM2.5 mass in the urban areas, there is no discernible association in the rural areas. The average (± standard deviation) of CS lies in the range 6.5(±3.3)×10^-3 to 2.4(±0.9)×10^-2.

  10. Characteristics of size distributions at urban and rural locations in New York

    NASA Astrophysics Data System (ADS)

    Bae, M.-S.; Schwab, J. J.; Hogrefe, O.; Frank, B. P.; Lala, G. G.; Demerjian, K. L.

    2010-05-01

    Paired nano- and long-tube Scanning Mobility Particle Sizer (SMPS) systems were operated for four different intensive field campaigns in New York State. Two of these campaigns were at Queens College in New York City, during the summer of 2001 and the winter of 2004. The other field campaigns were at rural sites in New York State. The data with the computed diffusion loss corrections for the sampling lines and the SMPS instruments were examined, and the combined SMPS data sets for each campaign were obtained. The diffusion corrections significantly affect total number concentrations and, in New York City, affect the mode structure of the size distributions. The relationships between merged and integrated SMPS total number concentrations with the diffusion loss corrections and the CPC number concentrations yield statistically significant increases (closer to 1) in the slope and correlation coefficient compared to the uncorrected values. The measurements are compared to PM2.5 mass concentrations and ion balance indications of aerosol acidity. Analysis of particle growth rates in comparison to other observations can classify the events and illustrates that urban and rural new particle formation and growth are the result of different causes. Periods of low observed PM2.5 mass, high number concentration, and low median diameter due to small fresh particles are associated with primary emissions for the urban sites, and with particle nucleation and growth for the rural sites. The observations of high PM2.5 mass, lower number concentrations, and higher median diameter are mainly due to an enhancement of photochemical reactions leading to condensation processes in relatively aged air. There are statistically different values for the condensation sink (CS) between urban and rural areas. While there is good association (r^2 > 0.5) between the condensation sink (CS) in the range of 8.35-283.9 nm and PM2.5 mass in the urban areas, there is no discernible association in the rural areas

  11. Using Distributed Temperature Sensing to Locate and Quantify Thermal Refugia: Insights Into Radiative & Hydrologic Processes

    NASA Astrophysics Data System (ADS)

    Bond, R. M.; Stubblefield, A. P.

    2012-12-01

    Stream temperature plays a critical role in determining the overall structure and function of stream ecosystems. Aquatic fauna are particularly vulnerable to projected increases in the magnitude and duration of elevated stream temperatures under global climate change. Northern California cold water salmon and trout fisheries have been declared thermally impacted by the California State Water Resources Control Board. This study employed Distributed Temperature Sensing (DTS) to detect stream heating and cooling at one-meter resolution along a one-kilometer section of the North Fork of the Salmon River, a tributary of the Klamath River, northern California, USA. The Salmon River has an extensive legacy of hydraulic gold mining tailings, which have been reworked into large gravel bars, creating shallow, wide runs, possibly filling in pools and disrupting riparian vegetation recruitment. Eight days of temperature data were collected at 15-minute intervals during July 2012, and three remote weather stations were deployed during the study period. The main objectives of this research were: first, to quantify the thermal inputs that create and maintain thermal refugia for cold water fishes; second, to investigate the role of riparian and topographic shading in buffering peak summer temperatures; and third, to create and validate a physically based stream heating model to predict the effects of riparian management, drought, and climate change on stream temperature. DTS was used to spatially identify cold water seeps and quantify their contribution to the stream's thermal regime; along the one-kilometer reach, hyporheic flow was also identified using DTS. The spring water was between 16-18°C, while the peak mainstem temperature above the spring reached a maximum of 23°C. The study found a diel heating cycle of 5°C, with a Maximum Weekly Average Temperature (MWAT) of over 22°C, exceeding the salmon and trout protective temperature standards set by USEPA Region 10. Twenty intensive fish counts over five days were

  12. What influences national and foreign physicians’ geographic distribution? An analysis of medical doctors’ residence location in Portugal

    PubMed Central

    2012-01-01

    Background The debate over physicians’ geographical distribution has attracted the attention of the economic and public health literature over the last forty years. Nonetheless, it is still unclear what influences physicians’ location, and whether foreign physicians contribute to filling the geographical gaps left by national doctors in any given country. The present research investigates the current distribution of national and international physicians in Portugal, with the objective of understanding its determinants and providing an evidence base for policy-makers to identify policies to influence it. Methods A cross-sectional study of physicians currently registered in Portugal was conducted to describe the population and explore the association of physician residence patterns with relevant personal and municipality characteristics. Data from the Portuguese Medical Council on physicians’ residence and characteristics were analysed, as well as data from the National Institute of Statistics on municipalities’ population, living standards and health care network. Descriptive statistics, chi-square tests, negative binomial and logistic regression modelling were applied to determine: (a) municipality characteristics predicting Portuguese and international physicians’ geographical distribution, and (b) doctors’ characteristics that could increase the odds of residing outside the country’s metropolitan areas. Results There were 39,473 physicians in Portugal in 2008, of whom 51.1% were male and 40.2% were between 41 and 55 years of age. They were predominantly Portuguese (90.5%), with Spanish, Brazilian and African nationalities also represented. Population, population purchasing power, nurses per capita and the Municipality Development Index (MDI) were the municipality characteristics displaying the strongest association with national physicians’ location. For foreign physicians, the MDI was not statistically significant, while municipalities

  13. Dip distribution of Oita-Kumamoto Tectonic Line located in central Kyushu, Japan, estimated by eigenvectors of gravity gradient tensor

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2016-09-01

    We estimated the dip distribution of the Oita-Kumamoto Tectonic Line, located in central Kyushu, Japan, using the dip of the maximum eigenvector of the gravity gradient tensor. A series of earthquakes in Kumamoto and Oita beginning on 14 April 2016 occurred along this tectonic line, the largest of which was M = 7.3. Because a gravity gradiometry survey has not been conducted in the study area, we calculated the gravity gradient tensor from the Bouguer gravity anomaly and employed it in the analysis. The general dip of the Oita-Kumamoto Tectonic Line was found to be about 65° and tends to be higher towards its eastern end. In addition, we estimated the dip around the largest earthquake to be about 60° from the gravity gradient tensor. This result agrees with the dip of the earthquake source fault obtained by Global Navigation Satellite System data analysis.
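
    The dip extraction itself reduces to an eigen-decomposition of the symmetric 3 x 3 gradient tensor at each grid point. A minimal sketch with a synthetic tensor (z taken positive down; the values are illustrative):

      import numpy as np

      # Synthetic, trace-free gravity gradient tensor (units of Eotvos).
      T = np.array([[12.0,  3.0,  8.0],
                    [ 3.0, -5.0,  2.0],
                    [ 8.0,  2.0, -7.0]])

      eigvals, eigvecs = np.linalg.eigh(T)     # ascending eigenvalues
      v = eigvecs[:, np.argmax(eigvals)]       # maximum eigenvector

      horizontal = np.hypot(v[0], v[1])
      dip = np.degrees(np.arctan2(abs(v[2]), horizontal))
      print(f"dip of maximum eigenvector: {dip:.1f} deg")

    Repeating this over the tensor derived from the gridded Bouguer anomaly yields the dip distribution along the tectonic line.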

  14. Levels and spatial distribution of airborne chemical elements in a heavy industrial area located in the north of Spain.

    PubMed

    Lage, J; Almeida, S M; Reis, M A; Chaves, P C; Ribeiro, T; Garcia, S; Faria, J P; Fernández, B G; Wolterbeek, H T

    2014-01-01

    The adverse health effects of airborne particles have been subjected to intense investigation in recent years; however, more studies on the chemical characterization of particles from pollution emissions are needed to (1) identify emission sources, (2) better understand the relative toxicity of particles, and (3) pinpoint more targeted emission control strategies and regulations. The main objective of this study was to assess the levels and spatial distribution of airborne chemical elements in a heavy industrial area located in the north of Spain. Instrumental and biomonitoring techniques were integrated, and analytical methods based on k0 instrumental neutron activation analysis and particle-induced x-ray emission were used to determine element content in aerosol filters and lichens. Results indicated that, in general, local industry contributed to the emissions of As, Sb, Cu, V, and Ni, which are associated with combustion processes. In addition, the steelworks emitted significant quantities of Fe and Mn, and the cement factory was associated with Ca emissions. The spatial distribution of Zn and Al also indicated an important contribution from two industries located outside the studied area. PMID:25072718

  15. Mechanics of the Compression Wood Response: II. On the Location, Action, and Distribution of Compression Wood Formation.

    PubMed

    Archer, R R; Wilson, B F

    1973-04-01

    A new method for simulation of cross-sectional growth provided detailed information on the location of normal wood and compression wood increments in two tilted white pine (Pinus strobus L.) leaders. These data were combined with data on stiffness, slope, and curvature changes over a 16-week period to make the mechanical analysis. The location of compression wood changed from the under side to a flank side and then to the upper side of the leader as the geotropic stimulus decreased, owing to compression wood action. Its location shifted back to a flank side when the direction of movement of the leader reversed. A model for this action, based on elongation strains, was developed and predicted the observed curvature changes with elongation strains of 0.3 to 0.5%, or a maximal compressive stress of 60 to 300 kilograms per square centimeter. After tilting, new wood formation was distributed so as to maintain consistent strain levels along the leaders in bending under gravitational loads. The computed effective elastic moduli were about the same for the two leaders throughout the season. PMID:16658408

  16. Verification of patient-specific dose distributions in proton therapy using a commercial two-dimensional ion chamber array

    SciTech Connect

    Arjomandy, Bijan; Sahoo, Narayan; Ciangaru, George; Zhu, Ronald; Song Xiaofei; Gillin, Michael

    2010-11-15

    Purpose: The purpose of this study was to determine whether a two-dimensional (2D) ion chamber array detector quickly and accurately measures patient-specific dose distributions in treatment with passively scattered and spot scanning proton beams. Methods: The 2D ion chamber array detector MatriXX was used to measure the dose distributions in a plastic water phantom from passively scattered and spot scanning proton beam fields planned for patient treatment. Planar dose distributions were measured using MatriXX, and the distributions were compared to those calculated using a treatment-planning system. The dose distributions generated by the treatment-planning system and a film dosimetry system were similarly compared. Results: For passively scattered proton beams, the gamma index for the dose-distribution comparison for treatment fields for three patients with prostate cancer and for one patient with lung cancer was less than 1.0 for 99% and 100% of pixels for a 3% dose tolerance and 3 mm distance-to-agreement, respectively. For spot scanning beams, the mean (± standard deviation) percentages of pixels with gamma indices meeting the passing criteria were 97.1%±1.4% and 98.8%±1.4% for MatriXX and film dosimetry, respectively, for 20 fields used to treat patients with prostate cancer. Conclusions: Unlike film dosimetry, MatriXX provides not only 2D dose-distribution information but also absolute dosimetry in fractions of minutes with acceptable accuracy. The results of this study indicate that MatriXX can be used to verify patient-field-specific dose distributions in proton therapy.
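
    The gamma-index comparison used above is a standard algorithm: for each measured point, the reference distribution is searched for the minimum combined dose-difference/distance metric. A compact brute-force sketch with a global 3%/3 mm criterion; the array names and grid spacing are assumptions:

      import numpy as np

      def gamma_2d(ref, meas, spacing_mm, dose_tol=0.03, dta_mm=3.0):
          """Brute-force global 2D gamma index (3%/3 mm by default).
          ref, meas: 2D dose arrays on the same grid."""
          ny, nx = ref.shape
          yy, xx = np.mgrid[0:ny, 0:nx]
          dose_norm = dose_tol * ref.max()      # global normalization
          gamma = np.empty_like(meas, dtype=float)
          for j in range(ny):
              for i in range(nx):
                  dist2 = ((yy - j) ** 2 + (xx - i) ** 2) * spacing_mm ** 2
                  dd2 = (ref - meas[j, i]) ** 2
                  gamma[j, i] = np.sqrt((dist2 / dta_mm ** 2
                                         + dd2 / dose_norm ** 2).min())
          return gamma

      # Pass rate: fraction of points with gamma <= 1, e.g. (hypothetical arrays):
      # print((gamma_2d(plan_dose, matrixx_dose, spacing_mm=7.62) <= 1).mean())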

  17. Experimental Verification of Application of Looped System and Centralized Voltage Control in a Distribution System with Renewable Energy Sources

    NASA Astrophysics Data System (ADS)

    Hanai, Yuji; Hayashi, Yasuhiro; Matsuki, Junya

    Line voltage control in a distribution network is one of the most important issues for the penetration of Renewable Energy Sources (RES). A loop distribution network configuration is an effective way to resolve the voltage and distribution-loss issues associated with RES penetration. In this paper, for a loop distribution network, the authors propose a voltage control method based on tap change control of the LRT and active/reactive power control of the RES. The tap change control of the LRT plays the major role in the proposed voltage control; the active/reactive power control of the RES supports it when deviation beyond the upper or lower voltage limit is otherwise unavoidable. The proposed method adopts a SCADA system based on data measured by IT switches, which are sectionalizing switches with sensors installed in the distribution feeder. To check the validity of the proposed voltage control method, experimental simulations are carried out using a distribution system analog simulator, "ANSWER". The simulations evaluate the voltage maintenance capability under both normal and emergency conditions.
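
    The division of labor described above — tap changes first, RES active/reactive power as backup — can be sketched as a simple priority rule. This is an illustrative reconstruction, not the authors' published controller; the voltage limits, tap range, and fixed reactive-power step are all assumptions:

```python
def control_step(voltages_pu, tap, tap_range=(-8, 8), v_low=0.95, v_high=1.05):
    """One control cycle: prefer an LRT tap change; fall back to RES
    reactive-power support when a tap change cannot clear the violation.

    voltages_pu: per-unit voltages measured along the feeder (e.g. by
    sectionalizing switches with sensors). Returns (new_tap, q_command).
    """
    v_min, v_max = min(voltages_pu), max(voltages_pu)
    q_command = 0.0  # requested RES reactive power (+ injects, - absorbs)
    if v_min < v_low and tap < tap_range[1]:
        tap += 1                  # raise secondary voltage by one tap step
    elif v_max > v_high and tap > tap_range[0]:
        tap -= 1                  # lower secondary voltage by one tap step
    elif v_min < v_low:
        q_command = +0.1          # tap exhausted: ask RES for capacitive support
    elif v_max > v_high:
        q_command = -0.1          # tap exhausted: ask RES for inductive support
    return tap, q_command

print(control_step([0.94, 0.97, 1.01], tap=0))   # -> (1, 0.0): tap change suffices
print(control_step([0.94, 0.97, 1.01], tap=8))   # -> (8, 0.1): RES support needed
```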

  18. ECOLOGICAL STUDIES AND MATHEMATICAL MODELING OF 'CLADOPHORA' IN LAKE HURON: 7. MODEL VERIFICATION AND SYSTEM RESPONSE

    EPA Science Inventory

    This manuscript describes the verification of a calibrated mathematical model designed to predict the spatial and temporal distribution of Cladophora about a point source of nutrients. The study site was located at Harbor Beach, Michigan, on Lake Huron. The model is intended to h...

  19. Modes in the size distributions and neutralization extent of fog-processed ammonium salt aerosols observed at Canadian rural locations

    NASA Astrophysics Data System (ADS)

    Yao, X. H.; Zhang, L.

    2012-02-01

    Among the 192 samples of size-segregated water-soluble inorganic ions collected using a Micro-Orifice Uniform Deposit Impactor (MOUDI) at eight rural locations in Canada, ten samples were identified as having gone through fog processing. Supermicron particle modes of ammonium salt aerosols were found to be the fingerprint of fog-processed aerosols. However, the patterns and sizes of the supermicron modes varied with ambient temperature (T) and particle acidity, and also differed between inland and coastal locations. Under T > 0 °C conditions, fog-processed ammonium salt aerosols were completely neutralized, with a dominant mode at 1-2 μm and a minor mode at 5-10 μm, when particles were neutral; when particles were acidic, ammonium sulfate was incompletely neutralized and had only a 1-2 μm mode. Under T < 0 °C at the coastal site, fog-processed aerosols exhibited a bi-modal size distribution with a dominant mode of incompletely neutralized ammonium sulfate at about 3 μm and a minor mode of completely neutralized ammonium sulfate at 8-9 μm. Under T < 0 °C conditions at the inland sites, fog-processed ammonium salt aerosols were sometimes completely and sometimes incompletely neutralized, and the size of the supermicron mode ranged from 1 to 5 μm. Overall, fog-processed ammonium salt aerosols under T < 0 °C conditions were generally distributed at larger sizes (e.g., 2-5 μm) than those under T > 0 °C conditions (e.g., 1-2 μm).

  20. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
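
    Both methods rest on the same composition: conditional fade statistics weighted by a long-term rain-rate distribution, P(A > a) = Σ_R P(A > a | R) · P(R). A minimal sketch, with placeholder numbers and a made-up conditional model standing in for the radar-derived statistics:

```python
import numpy as np

# Hedged sketch: combine conditional fade statistics P(A > a | R) with a
# long-term rain-rate distribution P(R) to obtain the absolute fade
# distribution P(A > a). All numbers below are placeholders.

rain_rates = np.array([2.0, 5.0, 10.0, 25.0, 50.0])   # rain-rate bins, mm/h
p_rate = np.array([0.5, 0.25, 0.15, 0.07, 0.03])      # long-term P(R) per bin

def p_fade_given_rate(a_db, rate):
    # Placeholder conditional statistic; in practice this comes from the
    # sampled radar data base or simultaneous fade/rain-rate measurements.
    return np.exp(-a_db / (0.5 * rate))

for a in [3.0, 6.0, 10.0]:  # attenuation thresholds in dB
    p_exceed = np.sum(p_fade_given_rate(a, rain_rates) * p_rate)
    print(f"P(A > {a:4.1f} dB) = {p_exceed:.3f}")
```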

  1. Implementation of a novel double-side technique for partial discharge detection and location in covered conductor overhead distribution networks

    NASA Astrophysics Data System (ADS)

    He, Weisheng; Li, Hongjie; Liang, Deliang; Sun, Haojie; Yang, Chenbo; Wei, Jinqu; Yuan, Zhijian

    2015-12-01

    Partial discharge (PD) detection has proven to be one of the most accepted techniques for on-line condition monitoring and predictive maintenance of power apparatus. A powerful tool for detecting PD in covered-conductor (CC) lines is urgently needed to improve the asset management of CC overhead distribution lines. In this paper, a portable and simple system designed to detect PD activity in CC lines and ultimately pinpoint the PD source is developed and tested. The system is based on a novel double-side synchronised PD measurement technique driven by pulse injection. Emphasis is placed on the proposed PD-location mechanism and hardware structure, with descriptions of the pulse-injection process, detection device, synchronisation principle and PD-location algorithm. The system is simulated using ATP-EMTP, and the simulation results are found to be consistent with the actual layout. For further validation, the capability of the system is tested in a high-voltage laboratory experiment using a 10-kV CC line with cross-linked polyethylene insulation.
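
    Double-side synchronised measurement locates the PD source from the arrival-time difference of the PD pulse at the two ends of the line. A minimal sketch of that triangulation, assuming a known propagation velocity (the value below is a typical assumption, not the paper's):

```python
def locate_pd(delta_t_s, line_length_m, v_prop=1.7e8):
    """Estimate PD position from double-side synchronised arrival times.

    delta_t_s: t_A - t_B, arrival-time difference between the two line ends.
    v_prop: assumed pulse propagation velocity on the covered conductor (m/s).
    A pulse from distance x (measured from end A) arrives at A after x/v and
    at B after (L - x)/v, so t_A - t_B = (2x - L)/v and x = (L + v*dt)/2.
    """
    x = (line_length_m + v_prop * delta_t_s) / 2.0
    if not 0.0 <= x <= line_length_m:
        raise ValueError("arrival-time difference inconsistent with line length")
    return x

# Example: on a 2 km line, the PD pulse reaches end B 4 us before end A,
# so the source sits 1340 m from end A (closer to B).
print(locate_pd(delta_t_s=4e-6, line_length_m=2000.0))
```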

  2. SU-D-BRF-02: In Situ Verification of Radiation Therapy Dose Distributions From High-Energy X-Rays Using PET Imaging

    SciTech Connect

    Zhang, Q; Kai, L; Wang, X; Hua, B; Chui, L; Wang, Q; Ma, C

    2014-06-01

    Purpose: To study the feasibility of in situ verification of radiation therapy dose distributions using PET imaging, based on the activity distribution of ¹¹C and ¹⁵O produced via photonuclear reactions in patients irradiated by 45 MV x-rays. Methods: The method exploits photonuclear reactions in ¹²C and ¹⁶O, the most abundant elemental constituents of body tissue, irradiated by bremsstrahlung photons with energies up to 45 MeV; the reactions produce primarily ¹¹C and ¹⁵O, which are positron-emitting nuclei. The induced positron activity distributions were obtained with a PET scanner in the same room as a LA45 accelerator (Top Grade Medical, Beijing, China). The experiments were performed with a brain phantom using realistic treatment plans. The phantom was scanned at 20 min and at 2-5 min after irradiation for ¹¹C and ¹⁵O, respectively, with a 20-minute interval between the two scans. The activity distributions of ¹¹C and ¹⁵O within the irradiated volume can be separated from each other because their half-lives are 20 min and 2 min, respectively. Three x-ray energies were used: 10 MV, 25 MV, and 45 MV. The radiation dose ranged from 1.0 Gy to 10.0 Gy per treatment. Results: It was confirmed that no activity was detected at the 10 MV beam energy, which is far below the energy threshold for photonuclear reactions. At 25 MV, activity distribution images were observed on PET but required a much higher radiation dose to achieve good quality. For 45 MV photon beams, good-quality activation images were obtained with a 2-3 Gy radiation dose, which is a typical daily dose for radiation therapy. Conclusion: The activity distributions of ¹⁵O and ¹¹C can be used to derive the dose distribution of 45 MV x-rays at regular daily dose levels. This method can potentially be used to verify in situ the dose distributions of patients treated on the LA45 accelerator.
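
    The half-life-based separation described above amounts to solving a two-component decay system from scans at two times. A minimal sketch, with illustrative scan times and count values (mine, not the study's):

```python
import numpy as np

# Hedged sketch: separate 11C and 15O contributions from two PET scans using
# their half-lives (~20.3 min and ~2.04 min).

LAM_C11 = np.log(2) / 20.3   # decay constants, 1/min
LAM_O15 = np.log(2) / 2.04

def separate(t1, m1, t2, m2):
    """Solve m(t) = A_C*exp(-lam_C*t) + A_O*exp(-lam_O*t) for A_C, A_O at t=0."""
    A = np.array([[np.exp(-LAM_C11 * t1), np.exp(-LAM_O15 * t1)],
                  [np.exp(-LAM_C11 * t2), np.exp(-LAM_O15 * t2)]])
    return np.linalg.solve(A, np.array([m1, m2]))

# Scan at 3 min (15O still strong) and at 20 min (essentially pure 11C):
a_c11, a_o15 = separate(3.0, 900.0, 20.0, 250.0)
print(f"initial 11C activity ~ {a_c11:.0f}, initial 15O activity ~ {a_o15:.0f}")
```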

  3. Frequency Distribution of Second Solid Cancer Locations in Relation to the Irradiated Volume Among 115 Patients Treated for Childhood Cancer

    SciTech Connect

    Diallo, Ibrahima; Haddy, Nadia; Adjadj, Elisabeth; Samand, Akhtar; Quiniou, Eric; Chavaudra, Jean; Alziar, Iannis; Perret, Nathalie; Guerin, Sylvie; Lefkopoulos, Dimitri; Vathaire, Florent de

    2009-07-01

    Purpose: To provide better estimates of the frequency distribution of second malignant neoplasm (SMN) sites in relation to previously irradiated volumes, and better estimates of the doses delivered to these sites during radiotherapy (RT) of the first malignant neoplasm (FMN). Methods and Materials: The study focused on 115 patients who developed a solid SMN among a cohort of 4581 individuals. The homemade software package Dos_EG was used to estimate the radiation doses delivered to SMN sites during RT of the FMN. Three-dimensional geometry was used to evaluate the distances between the irradiated volume, for RT delivered to each FMN, and the site of the subsequent SMN. Results: The spatial distribution of SMN relative to the irradiated volumes in our cohort was as follows: 12% in the central area of the irradiated volume, which corresponds to the planning target volume (PTV), 66% in the beam-bordering region (i.e., the area surrounding the PTV), and 22% in regions located more than 5 cm from the irradiated volume. At the SMN site, all dose levels ranging from almost zero to >75 Gy were represented. A peak SMN frequency of approximately 31% was identified in volumes that received <2.5 Gy. Conclusion: A greater volume of tissue receives low or intermediate doses in regions bordering the irradiated volume with modern multiple-beam RT arrangements. These results should be considered for risk-benefit evaluations of RT.

  4. Exomars Mission Verification Approach

    NASA Astrophysics Data System (ADS)

    Cassi, Carlo; Gilardi, Franco; Bethge, Boris

    According to the long-term cooperation plan established by ESA and NASA in June 2009, the ExoMars project now consists of two missions. The first mission will be launched in 2016 under ESA lead, with the objectives to demonstrate the European capability to safely land a surface package on Mars, to perform Mars atmosphere investigation, and to provide communication capability for present and future ESA/NASA missions. For this mission ESA provides a spacecraft composite, made up of an "Entry Descent & Landing Demonstrator Module (EDM)" and a Mars Orbiter Module (OM); NASA provides the launch vehicle and the scientific instruments located on the orbiter for Mars atmosphere characterisation. The second mission, with its launch foreseen in 2018, is led by NASA, who provides the spacecraft, the launcher, the EDL system, and a rover. ESA contributes the ExoMars Rover Module (RM) to provide surface mobility. It includes a drill system allowing drilling down to 2 meters, collecting samples, and investigating them for signs of past and present life with exobiological experiments, as well as investigating the Mars water/geochemical environment. In this scenario Thales Alenia Space Italia, as ESA prime industrial contractor, is in charge of the design, manufacturing, integration and verification of the ESA ExoMars modules, i.e.: the spacecraft composite (OM + EDM) for the 2016 mission, the RM for the 2018 mission, and the Rover Operations Control Centre, which will be located at Altec-Turin (Italy). The verification process of the above products is quite complex and includes some peculiarities with limited or no heritage in Europe. Furthermore, the verification approach has to be optimised to allow full verification despite significant schedule and budget constraints. The paper presents the verification philosophy tailored for the ExoMars mission in line with the above considerations, starting from the model philosophy and showing the verification activities flow and the sharing of tests

  5. Spatially distributed energy balance snowmelt modeling in a mountainous river basin: estimation of meteorological inputs and verification of model results

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A spatially distributed energy balance snowmelt model has been applied to a 2150 km2 drainage basin in the Boise River, ID, USA, to simulate the accumulation and melt of the snowpack for the years 1998–2000. The simulation was run at a 3 h time step and a spatial resolution of 250 m. Spatial field t...

  6. TESTING AND VERIFICATION OF REAL-TIME WATER QUALITY MONITORING SENSORS IN A DISTRIBUTION SYSTEM AGAINST INTRODUCED CONTAMINATION

    EPA Science Inventory

    Drinking water distribution systems reach the majority of American homes, business and civic areas, and are therefore an attractive target for terrorist attack via direct contamination, or backflow events. Instrumental monitoring of such systems may be used to signal the prese...

  7. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Koch, Nicholas C.; Newhauser, Wayne D.

    2010-02-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.
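
    The building block of such an analytical model is the superposition of range-shifted pristine Bragg peaks into a spread-out Bragg peak (SOBP). A minimal sketch with a toy peak shape and illustrative weights, not the validated ocular-nozzle model itself:

```python
import numpy as np

# Hedged sketch: an SOBP as a weighted sum of range-shifted pristine Bragg
# peaks. The peak shape and weights below are illustrative placeholders.

depth = np.linspace(0.0, 4.0, 400)  # depth in water, cm

def pristine_peak(z, r_peak, width=0.08):
    """Toy pristine Bragg peak: low entrance plateau plus a Gaussian peak."""
    plateau = 0.35 * (z < r_peak)
    peak = np.exp(-0.5 * ((z - r_peak) / width) ** 2)
    return plateau + peak

ranges = np.linspace(2.2, 3.0, 9)            # pulled-back peak positions, cm
weights = np.linspace(0.3, 1.0, 9) ** 1.5    # heavier weight on the deepest peak
sobp = sum(w * pristine_peak(depth, r) for w, r in zip(weights, ranges))
sobp /= sobp.max()                           # normalise; D/MU would scale this
```

In an absolute-dose model the weights and the overall scale factor are what connect the summed depth-dose curve to D/MU values.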

  8. Atmospheric aerosols size distribution properties in winter and pre-monsoon over western Indian Thar Desert location

    NASA Astrophysics Data System (ADS)

    Panwar, Chhagan; Vyas, B. M.

    2016-05-01

    The first experimental results over the Indian Thar Desert region concerning the height-integrated aerosol size distribution function for particle sizes between 0.09 and 2 µm are described: the aerosol columnar size distribution (CSD), effective radius (Reff), integrated total aerosol content (Nt), and columnar contents of accumulation-size and coarse-size particles, Na (size < 0.5 µm) and Nc (size 0.5 to 2 µm). Results are reported specifically for winter (a period of stable weather and intense anthropogenic pollution activity) and the pre-monsoon (a period of intense dust storms of natural mineral aerosols and unstable atmospheric weather) at Jaisalmer (26.90°N, 69.90°E, 220 m above surface level (asl)), located in the central Thar Desert in western India. The CSD and the other derived aerosol size parameters are retrieved from the average spectral characteristics of the Aerosol Optical Thickness (AOT), measured from the UV to the infrared with a Multi-Wavelength solar Radiometer (MWR). The CSD is, in general, bi-modal in character rather than uniformly distributed or following a power-law distribution. The observed primary peaks in the CSD plots are about 10¹³ m⁻² μm⁻¹ in the 0.09-0.20 µm radius range during both seasons. In the winter months, secondary peaks of relatively lower CSD values, 10¹⁰ to 10¹¹ m⁻² μm⁻¹, occur within a lower radius range of 0.4 to 0.6 µm. In contrast, in the dust-dominated hot season the dominant secondary maxima, with higher CSD values of about 10¹² m⁻² μm⁻¹, are found for bigger particles in the 0.6 to 1.0 µm range, clearly demonstrating a higher aerosol burden of larger particles in the summer months relative to the lower loading of smaller particles (0.4 to 0.6 µm) in the cold months. Several other interesting features of the changing nature of the monthly spectral AOT

  9. Sub-micron particle number size distributions characteristics at an urban location, Kanpur, in the Indo-Gangetic Plain

    NASA Astrophysics Data System (ADS)

    Kanawade, V. P.; Tripathi, S. N.; Bhattu, Deepika; Shamjad, P. M.

    2014-10-01

    We present long-term measurements of sub-micron particle number size distributions (PNSDs) conducted at an urban location, Kanpur, in India, from September 2007 to July 2011. The mean Aitken mode (NAIT), accumulation mode (NACCU), total particle (NTOT), and black carbon (BC) mass concentrations were 12.4 × 10³ cm⁻³, 18.9 × 10³ cm⁻³, 31.9 × 10³ cm⁻³, and 7.96 μg m⁻³, respectively, within the range observed at other urban locations worldwide but much higher than those reported at urban sites in developed nations. The total particle volume concentration appears to be dominated mainly by accumulation mode particles, except during the monsoon months, perhaps due to efficient wet deposition of accumulation mode particles by precipitation. At Kanpur, the diurnal variation of particle number concentrations was very distinct, highest during morning and late evening hours and lowest during the afternoon hours. This behavior could be attributed to large primary emissions of aerosol particles and the temporal evolution of the planetary boundary layer. A distinct seasonal variation in the total particle number and BC mass concentrations was observed, with the maximum in winter and the minimum during the rainy season; the Aitken mode particles, however, did not show a clear seasonal fluctuation. The ratio of Aitken to accumulation mode particles, NAIT/NACCU, varied from 0.1 to 14.2, with the maximum during the April to September months, probably indicating the importance of new particle formation processes and subsequent particle growth. This finding suggests that dedicated long-term measurements of PNSDs (from a few nanometers to one micron) are required to systematically characterize new particle formation over the Indian subcontinent, which has remained largely unstudied so far. Conversely, the low NAIT/NACCU during the post-monsoon and winter indicated the dominance of biomass/biofuel burning aerosol emissions at this site.

  10. Spatial patterns of Transit-Time Distributions using δ18O-isotope tracer simulations at ungauged river locations

    NASA Astrophysics Data System (ADS)

    Stockinger, Michael; Bogena, Heye; Lücke, Andreas; Diekkrüger, Bernd; Weiler, Markus; Vereecken, Harry

    2013-04-01

    Knowledge of catchment response times to precipitation forcing and of isotope tracer transit times can be used to characterize a catchment's hydrological behavior. The aim of this study was to use one gauging station together with multiple δ18O-isotope monitoring locations along the main stream to characterize the spatial heterogeneity of a catchment's hydrological behavior in terms of transit times. We present a method suitable for small catchments to estimate the Transit-Time Distribution (TTD) of precipitation to any stream point using δ18O tracer data, whether the stream point is gauged or ungauged. Hourly runoff and precipitation data were used to determine the effective precipitation under base flow conditions at Wüstebach (Eifel, Germany), a small, forested TERENO/TR32 test site. Modeling focused on base flow because of the weekly measurement interval of δ18O. The 2.5-year modeling period was split into six hydrological seasons, based on average soil water content, to ensure a good fit of the model. Given the small size of the Wüstebach catchment (27 ha), we assumed the derived effective precipitation to be applicable to the whole catchment. For the subsequent modeling of stream water δ18O we used effective precipitation as the input variable, corrected in a two-step process for canopy evaporation and soil evaporation, and thereby derived base flow TTDs for the ungauged stream and tributary locations. Results show different catchment response times under different wetness conditions with respect to base flow formation. The winter seasons show similar response times, as do the summer seasons, with the exception of one summer with a considerably higher response time. The transit times of water across the isotope observation points show that some points are more influenced by shallow source waters than others, at which a higher contribution of groundwater is observable.
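
    The TTD estimate rests on convolving the effective-precipitation isotope input with a candidate transit-time distribution and comparing the result with measured stream δ18O. A minimal sketch, assuming an exponential TTD for illustration (the study does not prescribe this particular form here):

```python
import numpy as np

# Hedged sketch of the lumped convolution approach: stream delta-18O as the
# transit-time-weighted, flux-weighted history of effective precipitation.

def exponential_ttd(tau, mtt):
    """Exponential transit-time distribution with mean transit time mtt."""
    return np.exp(-tau / mtt) / mtt

def stream_d18o(p_eff, d18o_in, mtt, dt=1.0):
    """Flux-weighted convolution of the input isotope signal with the TTD.

    p_eff, d18o_in: effective precipitation amount and its delta-18O per step.
    """
    n = len(p_eff)
    g = exponential_ttd(np.arange(n) * dt, mtt)
    out = np.empty(n)
    for t in range(n):
        w = g[:t + 1][::-1] * p_eff[:t + 1]      # weight older inputs by g(tau)
        out[t] = np.sum(w * d18o_in[:t + 1]) / max(np.sum(w), 1e-12)
    return out
```

Fitting then amounts to adjusting the TTD parameters (here `mtt`) per monitoring point and season until the modeled series matches the observed stream δ18O.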

  11. Simple Syringe Filtration Methods for Reliably Examining Dissolved and Colloidal Trace Element Distributions in Remote Field Locations

    NASA Astrophysics Data System (ADS)

    Shiller, A. M.

    2002-12-01

    Methods for obtaining reliable dissolved trace element samples frequently utilize clean labs, portable laminar flow benches, or other equipment not readily transportable to remote locations. In some cases unfiltered samples can be obtained in a remote location and transported back to a lab for filtration. However, this may not always be possible or desirable. Additionally, methods for obtaining information on colloidal composition are likewise frequently too cumbersome for remote locations, as well as being time-consuming. For that reason I have examined clean methods for collecting samples filtered through 0.45 and 0.02 micron syringe filters. With this methodology, only small samples are collected (typically 15 mL). However, with the introduction of the latest generation of ICP-MS instruments and microflow nebulizers, sample requirements for elemental analysis are much lower than just a few years ago. Thus, a determination of a suite of first-row transition elements is frequently obtainable with samples of less than 1 mL. To examine the "traditional" (<0.45 micron) dissolved phase, 25 mm diameter polypropylene syringe filters and all-polyethylene/polypropylene syringes are utilized. Filters are pre-cleaned in the lab using 40 mL of approx. 1 M HCl followed by a clean water rinse. Syringes are pre-cleaned by leaching with hot 1 M HCl followed by a clean water rinse. Sample kits are packed in polyethylene bags for transport to the field. Results are similar to those obtained using 0.4 micron polycarbonate screen filters, though concentrations may differ somewhat depending on the extent of sample pre-rinsing of the filter. Using this method, a multi-year time series of dissolved metals in a remote Rocky Mountain stream has been obtained. To examine the effect of colloidal material on dissolved metal concentrations, 0.02 micron alumina syringe filters have been utilized. Other workers have previously used these filters for examining colloidal Fe distributions in lake

  12. TPS verification with UUT simulation

    NASA Astrophysics Data System (ADS)

    Wang, Guohua; Meng, Xiaofeng; Zhao, Ruixian

    2006-11-01

    Verification of a TPS (Test Program Set), or first-article acceptance testing, commonly depends on fault-insertion experiments on the UUT (Unit Under Test). However, the failure modes that can be injected on a UUT are limited, and fault insertion is almost infeasible when the UUT is in development or in a distributed state. To resolve this problem, a TPS verification method based on UUT interface signal simulation is put forward. Interoperability between the ATS (automatic test system) and the UUT simulation platform is essential to realize automatic TPS verification. After analyzing the ATS software architecture, an approach to realize interoperability between the ATS software and the UUT simulation platform is proposed, and the UUT simulation platform software architecture is then derived from the ATS software architecture. The hardware composition and software architecture of the UUT simulation platform are described in detail. The UUT simulation platform has been applied in avionics equipment TPS development, debugging and verification.

  13. KAT-7 SCIENCE VERIFICATION: USING H I OBSERVATIONS OF NGC 3109 TO UNDERSTAND ITS KINEMATICS AND MASS DISTRIBUTION

    SciTech Connect

    Carignan, C.; Frank, B. S.; Hess, K. M.; Lucero, D. M.; Randriamampandry, T. H.; Goedhart, S.; Passmoor, S. S.

    2013-09-15

    H I observations of the Magellanic-type spiral NGC 3109, obtained with the seven-dish Karoo Array Telescope (KAT-7), are used to analyze its mass distribution. Our results are compared to those obtained using Very Large Array (VLA) data. KAT-7 is a pathfinder of the Square Kilometer Array precursor MeerKAT, which is under construction. The short baselines and low system temperature of the telescope make it sensitive to large-scale, low surface brightness emission. The new observations with KAT-7 allow the measurement of the rotation curve (RC) of NGC 3109 out to 32', doubling the angular extent of existing measurements. A total H I mass of 4.6 × 10⁸ M☉ is derived, 40% more than what is detected by the VLA observations. The observationally motivated pseudo-isothermal dark matter (DM) halo model can reproduce the observed RC very well, but the cosmologically motivated Navarro-Frenk-White DM model gives a much poorer fit to the data. While having a more accurate gas distribution has reduced the discrepancy between the observed RC and the MOdified Newtonian Dynamics (MOND) models, this is done at the expense of having to use unrealistic mass-to-light ratios for the stellar disk and/or very large values for the MOND universal constant a₀. Different distances or H I contents cannot reconcile MOND with the observed kinematics, in view of the small errors on these two quantities. As with many slowly rotating gas-rich galaxies studied recently, the present result for NGC 3109 continues to pose a serious challenge to the MOND theory.

  14. Verification of Anderson superexchange in MnO via magnetic pair distribution function analysis and ab initio theory

    DOE PAGES Beta

    Benjamin A. Frandsen; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.

    2016-05-11

    Here, we present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ~1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  15. Verification of Anderson Superexchange in MnO via Magnetic Pair Distribution Function Analysis and ab initio Theory

    NASA Astrophysics Data System (ADS)

    Frandsen, Benjamin A.; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.

    2016-05-01

    We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ˜1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  16. Verification of Anderson Superexchange in MnO via Magnetic Pair Distribution Function Analysis and ab initio Theory.

    PubMed

    Frandsen, Benjamin A; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J; Staunton, Julie B; Billinge, Simon J L

    2016-05-13

    We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ∼1  nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory. PMID:27232042

  17. The prevalence and distribution of gastrointestinal parasites of stray and refuge dogs in four locations in India.

    PubMed

    Traub, Rebecca J; Pednekar, Riddhi P; Cuttell, Leigh; Porter, Ronald B; Abd Megat Rani, Puteri Azaziah; Gatne, Mukulesh L

    2014-09-15

    A gastrointestinal parasite survey of 411 stray and refuge dogs sampled from four geographically and climatically distinct locations in India revealed these animals to represent a significant source of environmental contamination for parasites that pose a zoonotic risk to the public. Hookworms were the most commonly identified parasite in dogs in Sikkim (71.3%), Mumbai (48.8%) and Delhi (39.1%). In Ladakh, which experiences harsh extremes in climate, a competitive advantage was observed for parasites such as Sarcocystis spp. (44.2%), Taenia hydatigena (30.3%) and Echinococcus granulosus (2.3%) that utilise intermediate hosts for the completion of their life cycle. PCR identified Ancylostoma ceylanicum and Ancylostoma caninum to occur sympatrically, either as single or mixed infections, in Sikkim (Northeast) and Mumbai (West). In Delhi, A. caninum was the only species identified in dogs, probably owing to its ability to evade unfavourable climatic conditions by undergoing arrested development in host tissue. The expansion of the known distribution of A. ceylanicum to the west, as far as Mumbai, justifies the renewed interest in this emerging zoonosis and advocates for its surveillance in future human parasite surveys. Of interest was the absence of Trichuris vulpis in dogs, in support of previous canine surveys in India. This study advocates the continuation of birth control programmes in stray dogs, which will undoubtedly have spill-over effects on reducing the levels of environmental contamination with parasite stages. In particular, owners of pet animals exposed to these environments must be extra vigilant in ensuring their animals are regularly dewormed and in maintaining strict standards of household and personal hygiene. PMID:25139393

  18. ETV - VERIFICATION TESTING (ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM)

    EPA Science Inventory

    Verification testing is a major component of the Environmental Technology Verification (ETV) program. The ETV Program was instituted to verify the performance of innovative technical solutions to problems that threaten human health or the environment and was created to substantia...

  19. Swarm Verification

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex

    2008-01-01

    Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial "search the needle in a haystack" type. Such problems can often be parallelized easily. Alas, none of the usual divide and conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.

  20. Development of Distributed System for Informational Location and Control on the Corporate Web Portal "Analytical Chemistry in Russia"

    NASA Astrophysics Data System (ADS)

    Shirokova, V. I.; Kolotov, V. P.; Alenina, M. V.

    A new Internet portal, "Analytical Chemistry in Russia" (http://www.geokhi.ru/~rusanalytchem, http://www.rusanalytchem.org), developed by the community of Russian analysts, was launched in 2001. The portal now contains a large amount of information, much of it stored in an SQL database (MS SQL), with retrieval performed through ASP pages containing VB Scripts. Experience with operating such a topical portal has revealed some weak points related to its centralized administration and updating: promptly handling all requests from different persons and organizations to place information on the portal's server takes considerable effort and time. Further development of the portal is therefore tied to a distributed system for information allocation and control, while preserving centralized administration to ensure security and stable operation of the portal. Analysis and testing of the available technologies led us to adopt MS SharePoint; MS SharePoint Team Services (SPTS) was selected as a technology supporting relatively small groups, with MS SQL used to store data and metadata. The latter feature was decisive, since it allows easy integration with the database of the whole portal. SPTS was launched as an independent Internet site accessible from the portal's home page. It serves as a root site leading to dozens of subsites serving different bodies of the Russian Scientific Council on Analytical Chemistry and external organizations located across Russia. The secure functioning of such a hierarchical system, which includes many remote information suppliers, is based on the use of roles to manage user rights

  1. Generic Verification Protocol for Verification of Online Turbidimeters

    EPA Science Inventory

    This protocol provides generic procedures for implementing a verification test for the performance of online turbidimeters. The verification tests described in this document will be conducted under the Environmental Technology Verification (ETV) Program. Verification tests will...

  2. VIABLE BACTERIAL AEROSOL PARTICLE SIZE DISTRIBUTIONS IN THE MIDSUMMER ATMOSPHERE AT AN ISOLATED LOCATION IN THE HIGH DESERT CHAPARRAL

    EPA Science Inventory

    The viable bacterial particle size distribution in the atmosphere at the Hanford Nuclear Reservation, Richland, WA during two 1-week periods in June 1992, was observed at three intervals during the day (morning, midday and evening) and at three heights (2, 4, and 8 m) above groun...

  3. Distribution of Foraminifera in the Core Samples of Kollidam and Marakanam Mangrove Locations, Tamil Nadu, Southeast Coast of India

    NASA Astrophysics Data System (ADS)

    Nowshath, M.

    2013-05-01

    In order to study the distribution of Foraminifera in the subsurface sediments of a mangrove environment, two core samples were collected, i) near the boating house at Pitchavaram, in the Kollidam estuary (C1), and ii) in the backwaters of Marakanam (C2), with the help of a PVC corer. The length of the cores varies; a total of 25 samples from both cores were obtained and subjected to standard micropaleontological and sedimentological analyses for the evaluation of different sediment characteristics. Core No. C1 (Pitchavaram) yielded only foraminifera, whereas for core No. C2 (Marakanam) only the down-core distribution of foraminifera is discussed. The widely utilized classification proposed by Loeblich and Tappan (1987) has been followed in the present study for foraminiferal taxonomy, and accordingly 23 foraminiferal species belonging to 18 genera, 10 families, 8 superfamilies and 4 suborders have been reported and illustrated. The foraminiferal species recorded are characteristic of shallow inner-shelf to marginal-marine, tropical settings. Sedimentological parameters such as CaCO3, organic matter and the sand-silt-clay ratio were estimated and their down-core distributions are discussed. An attempt has been made to evaluate the most favourable substrate for foraminiferal population abundance in the study area. From the overall distribution of foraminifera in the samples from the Kollidam estuary (Pitchavaram area) and the Marakanam estuary, it is observed that silty sand and sandy silt, respectively, are the most accommodative substrates for foraminiferal populations. The distribution of foraminifera in the core samples indicates that the sediments were deposited under normal, oxygenated environmental conditions.

  4. Verification of VENTSAR

    SciTech Connect

    Simpkins, A.A.

    1995-01-01

    The VENTSAR code is an upgraded and improved version of the VENTX code, which estimates concentrations on or near a building from a release at a nearby location. The code calculates the concentrations either for a given meteorological exceedance probability or for a given stability and wind speed combination. A single building can be modeled which lies in the path of the plume, or a penthouse can be added to the top of the building. Plume rise may also be considered. Release types can be either chemical or radioactive. Downwind concentrations are determined at user-specified incremental distances. This verification report was prepared to demonstrate that VENTSAR is properly executing all algorithms and transferring data. Hand calculations were also performed to ensure proper application of methodologies.

  5. Verification of VLSI designs

    NASA Technical Reports Server (NTRS)

    Windley, P. J.

    1991-01-01

    In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic, as opposed to low-level boolean equivalence verification such as that done using BDDs and model checking. Specification and verification, sometimes called formal methods, is one tool for increasing computer dependability in the face of an exponentially increasing testing effort.

  6. Measuring location, size, distribution, and loading of NiO crystallites in individual SBA-15 pores by electron tomography.

    PubMed

    Friedrich, Heiner; Sietsma, Jelle R A; de Jongh, Petra E; Verkleij, Arie J; de Jong, Krijn P

    2007-08-22

    By the combination of electron tomography with image segmentation, the properties of 299 NiO crystallites contained in 6 SBA-15 pores were studied. A statistical analysis of the particle size showed that crystallites between 2 and 6 nm were present with a distribution maximum at 3 and 4 nm, for the number-weighted and volume-weighted curves, respectively. Interparticle distances between nearest neighbors were 1-3 nm with very few isolated crystallites. In the examined pores, a local loading twice the applied average of 24 wt % NiO was found. This suggests that a very high local loading combined with a high dispersion is achievable. PMID:17655305
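
    The gap between the number-weighted and volume-weighted modes (3 vs 4 nm) comes purely from re-weighting each crystallite by its volume. A minimal sketch with synthetic sizes standing in for the 299 measured crystallites:

```python
import numpy as np

# Hedged sketch: number- vs volume-weighted size histograms, the two curves
# whose maxima the tomography study compares. The sizes below are synthetic.

rng = np.random.default_rng(0)
diameters = rng.normal(3.5, 0.8, 299).clip(2.0, 6.0)   # nm, 299 crystallites

bins = np.arange(2.0, 6.5, 0.5)
number_w, _ = np.histogram(diameters, bins=bins)
volume_w, _ = np.histogram(diameters, bins=bins,
                           weights=(np.pi / 6.0) * diameters ** 3)

centers = 0.5 * (bins[:-1] + bins[1:])
print("number-weighted mode :", centers[number_w.argmax()], "nm")
print("volume-weighted mode :", centers[volume_w.argmax()], "nm")
```

Because each particle's weight grows as d³, a handful of large crystallites shifts the volume-weighted mode to larger sizes even when small particles dominate by count.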

  7. Dependence of the continuum energy distribution of T Tauri stars on the location of the temperature minimum

    NASA Astrophysics Data System (ADS)

    Calvet, N.

    1981-12-01

    The influence of the position of the temperature minimum on the continuum flux produced by theoretical models of T Tauri stars is investigated. In particular, continuum fluxes are calculated for models with similar temperature profiles that differ in the position of the temperature minimum. The assumed temperature profiles are presented, and the transfer and equilibrium equations for a 5-level plus continuum representation of the hydrogen atom are solved using the complete linearization scheme of Auer and Mihalas (1969). This calculation gives the electron density and departure coefficients for the first five levels of the hydrogen atom, which are then used to calculate non-LTE source functions for the continuum produced by these levels. The resulting continuum fluxes are shown and discussed. It is concluded that the discrepancy between theoretical models and observations in the blue and UV regions of the spectrum found in Calvet (1981) cannot be diminished by changing the location of the temperature minimum.

  8. Requirements, Verification, and Compliance (RVC) Database Tool

    NASA Technical Reports Server (NTRS)

    Rainwater, Neil E., II; McDuffee, Patrick B.; Thomas, L. Dale

    2001-01-01

    This paper describes the development, design, and implementation of the Requirements, Verification, and Compliance (RVC) database used on the International Space Welding Experiment (ISWE) project managed at Marshall Space Flight Center. The RVC is a systems engineer's tool for automating and managing the following information: requirements; requirements traceability; verification requirements; verification planning; verification success criteria; and compliance status. This information normally contained within documents (e.g. specifications, plans) is contained in an electronic database that allows the project team members to access, query, and status the requirements, verification, and compliance information from their individual desktop computers. Using commercial-off-the-shelf (COTS) database software that contains networking capabilities, the RVC was developed not only with cost savings in mind but primarily for the purpose of providing a more efficient and effective automated method of maintaining and distributing the systems engineering information. In addition, the RVC approach provides the systems engineer the capability to develop and tailor various reports containing the requirements, verification, and compliance information that meets the needs of the project team members. The automated approach of the RVC for capturing and distributing the information improves the productivity of the systems engineer by allowing that person to concentrate more on the job of developing good requirements and verification programs and not on the effort of being a "document developer".

  9. Verification of Adaptive Systems

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui; Vassev, Emil; Hinchey, Mike; Rouff, Christopher; Buskens, Richard

    2012-01-01

    Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance for them. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach.

  10. Lunar Pickup Ions Observed by ARTEMIS: Spatial and Temporal Distribution and Constraints on Species and Source Locations

    NASA Technical Reports Server (NTRS)

    Halekas, Jasper S.; Poppe, A. R.; Delory, G. T.; Sarantos, M.; Farrell, W. M.; Angelopoulos, V.; McFadden, J. P.

    2012-01-01

    ARTEMIS observes pickup ions around the Moon, at distances of up to 20,000 km from the surface. The observed ions form a plume with a narrow spatial and angular extent, generally seen in a single energy/angle bin of the ESA instrument. Though ARTEMIS has no mass resolution capability, we can utilize the analytically describable characteristics of pickup ion trajectories to constrain the possible ion masses that can reach the spacecraft at the observation location in the correct energy/angle bin. We find that most of the observations are consistent with a mass range of approx. 20-45 amu, with a smaller fraction consistent with higher masses, and very few consistent with masses below 15 amu. With the assumption that the highest fluxes of pickup ions come from near the surface, the observations favor mass ranges of approx. 20-24 and approx. 36-40 amu. Although many of the observations have properties consistent with a surface or near-surface release of ions, some do not, suggesting that at least some of the observed ions have an exospheric source. Of all the proposed sources for ions and neutrals about the Moon, the pickup ion flux measured by ARTEMIS correlates best with the solar wind proton flux, indicating that sputtering plays a key role in either directly producing ions from the surface, or producing neutrals that subsequently become ionized.
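
    The mass constraint follows from pickup-ion kinematics: an ion born at rest rides a cycloid in the motional electric field and can gain at most E_max = 2·m·v_⊥² in the spacecraft frame, where v_⊥ is the solar-wind speed component perpendicular to B. A minimal sketch of the resulting lower bound on mass, with illustrative numbers (the solar-wind values are not from the paper):

```python
import numpy as np

AMU = 1.66054e-27   # kg
EV = 1.60218e-19    # J

def min_mass_amu(e_obs_ev, v_sw_kms, theta_vb_deg):
    """Smallest ion mass able to appear at the observed energy, from
    E_max = 2 * m * v_perp**2 for a cycloidal pickup-ion orbit."""
    v_perp = v_sw_kms * 1e3 * np.sin(np.radians(theta_vb_deg))
    return (e_obs_ev * EV) / (2.0 * v_perp ** 2) / AMU

# An ion seen at 40 keV in a 400 km/s solar wind at 60 deg to B must have
# m >= ~16 amu; lighter species cannot reach that energy on a cycloid orbit.
print(f"{min_mass_amu(4.0e4, 400.0, 60.0):.1f} amu")
```

Combining this bound with the geometry of which trajectories can actually connect a near-surface source to the spacecraft in the right energy/angle bin is what narrows the allowed ranges to roughly 20-24 and 36-40 amu.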

  11. Investigation of Reflectance Distribution and Trend for the Double Ray Located in the Northwest of Tycho Crater

    NASA Astrophysics Data System (ADS)

    Yi, Eung Seok; Kim, Kyeong Ja; Choi, Yi Re; Kim, Yong Ha; Lee, Sung Soon; Lee, Seung Ryeol

    2015-06-01

    Analysis of lunar samples returned by the US Apollo missions revealed that the lunar highlands consist of anorthosite, plagioclase, pyroxene, and olivine, while the lunar maria are composed of materials such as basalt and ilmenite. More recently, remote sensing has reduced the time required to investigate the entire lunar surface compared with the sample-return approach, and has made it possible to determine the existence of specific minerals and to examine wide areas. In this paper, the reflectance distribution and its trend were investigated and applied to the example of the double ray that stretches in parallel lines from Tycho crater to the third quadrant of Mare Nubium. Basic research and background information for the investigation of lunar surface characteristics are also presented. For this research, instruments aboard the SELenological and ENgineering Explorer (SELENE), a Japanese lunar probe, were used, including the Multiband Imager (MI) of the Lunar Imager/Spectrometer (LISM) suite. The instrument data were processed and analyzed with the image editing and analysis tool Exelis Visual Information Solutions (ENVI).

  12. A statistical study of the spatial distribution of Co-operative UK Twin Located Auroral Sounding System (CUTLASS) backscatter power during EISCAT heater beam-sweeping experiments

    NASA Astrophysics Data System (ADS)

    Shergill, H.; Robinson, T. R.; Dhillon, R. S.; Lester, M.; Milan, S. E.; Yeoman, T. K.

    2010-05-01

    High-power electromagnetic waves can excite a variety of plasma instabilities in Earth's ionosphere. These lead to the growth of plasma waves and plasma density irregularities within the heated volume, including patches of small-scale field-aligned electron density irregularities. This paper reports a statistical study of intensity distributions in patches of these irregularities excited by the European Incoherent Scatter (EISCAT) heater during beam-sweeping experiments. The irregularities were detected by the Co-operative UK Twin Located Auroral Sounding System (CUTLASS) coherent scatter radar located in Finland. During these experiments the heater beam direction is steadily changed from northward to southward pointing. Comparisons are made between statistical parameters of CUTLASS backscatter power distributions and modeled heater beam power distributions provided by the EZNEC version 4 software. In general, good agreement between the statistical parameters and the modeled beam is observed, clearly indicating the direct causal connection between the heater beam and the irregularities, despite the sometimes seemingly unpredictable nature of unaveraged results. The results also give compelling evidence in support of the upper hybrid theory of irregularity excitation.

  13. Study of scattering from a sphere with an eccentrically located spherical inclusion by generalized Lorenz-Mie theory: internal and external field distribution.

    PubMed

    Wang, J J; Gouesbet, G; Han, Y P; Gréhan, G

    2011-01-01

    Based on the recent results in the generalized Lorenz-Mie theory, solutions for scattering problems of a sphere with an eccentrically located spherical inclusion illuminated by an arbitrary shaped electromagnetic beam in an arbitrary orientation are obtained. Particular attention is paid to the description and application of an arbitrary shaped beam in an arbitrary orientation to the scattering problem under study. The theoretical formalism is implemented in a homemade computer program written in FORTRAN. Numerical results concerning spatial distributions of both internal and external fields are displayed in different formats in order to properly display exemplifying results. More specifically, as an example, we consider the case of a focused fundamental Gaussian beam (TEM(00) mode) illuminating a glass sphere (having a real refractive index equal to 1.50) with an eccentrically located spherical water inclusion (having a real refractive index equal to 1.33). Displayed results are for various parameters of the incident electromagnetic beam (incident orientation, beam waist radius, location of the beam waist center) and of the scatterer system (location of the inclusion inside the host sphere and relative diameter of the inclusion to the host sphere). PMID:21200408

  14. Light dose verification for pleural PDT

    NASA Astrophysics Data System (ADS)

    Sandell, Julia L.; Liang, Xing; Zhu, Timothy

    2012-02-01

    The ability to deliver a uniform light dose in photodynamic therapy (PDT) is critical to treatment efficacy. The current protocol in pleural photodynamic therapy uses 7 isotropic detectors placed at discrete locations within the pleural cavity to monitor light dose throughout treatment. While effort is made to place the detectors uniformly through the cavity, these discrete measurements do not capture the overall uniformity of the delivered dose. A real-time infrared (IR) tracking camera is in development to better deliver and monitor a more uniform light distribution during treatment. It has been shown previously that there is good agreement between fluence calculated using IR tracking data and isotropic detector measurements for direct-light phantom experiments. This study presents the results of an extensive phantom study that uses variable, patient-like geometries and optical properties (both absorption and scattering). Position data are collected from the IR navigation system while light distribution measurements are made concurrently using the aforementioned isotropic detectors. These measurements are compared to fluence calculations made using data from the IR navigation system to verify that our light distribution theory is correct and applicable in patient-like settings. The verification of this treatment-planning technique is an important step in bringing real-time fluence monitoring into the clinic for more effective treatment.
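
    For the direct-light term, a fluence calculation from tracked source positions reduces to accumulating inverse-square contributions along the source track. A minimal sketch under that direct-light-only assumption (names and numbers are mine; scattered light in tissue would add to this):

```python
import numpy as np

def cumulative_fluence(src_positions, detector, power_w, dt_s):
    """Direct-light fluence at a detector from a tracked moving source:
    phi += P * dt / (4 * pi * r^2) per tracked position.

    src_positions: (N, 3) array of tracked source points (cm);
    detector: (3,) point (cm). Returns fluence in J/cm^2.
    """
    r2 = np.sum((src_positions - detector) ** 2, axis=1)
    r2 = np.maximum(r2, 0.01)            # guard against source-on-detector
    return np.sum(power_w * dt_s / (4.0 * np.pi * r2))

# Example: a source swept along a line past a detector at (2, 0, 3) cm.
track = np.array([[0.0, 0.0, z] for z in np.linspace(1.0, 5.0, 200)])
print(cumulative_fluence(track, np.array([2.0, 0.0, 3.0]), power_w=1.0, dt_s=0.1))
```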

  15. Environmental Technology Verification Report - Electric Power and Heat Production Using Renewable Biogas at Patterson Farms

    EPA Science Inventory

    The U.S. EPA operates the Environmental Technology Verification program to facilitate the deployment of innovative technologies through performance verification and information dissemination. A technology area of interest is distributed electrical power generation, particularly w...

  16. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  17. Software verification and testing

    NASA Technical Reports Server (NTRS)

    1985-01-01

    General procedures for software verification and validation are provided as a guide for managers, programmers, and analysts involved in software development. The verification and validation procedures described are based primarily on testing techniques. Testing refers to the execution of all or part of a software system for the purpose of detecting errors. Planning, execution, and analysis of tests are outlined in this document. Code reading and static analysis techniques for software verification are also described.

  18. How do wetland type and location affect their hydrological services? - A distributed hydrological modelling study of the contribution of isolated and riparian wetlands

    NASA Astrophysics Data System (ADS)

    Fossey, Maxime; Rousseau, Alain N.; Savary, Stéphane; Royer, Alain

    2015-04-01

    Wetlands play a significant role in the hydrological cycle, reducing peak flows through water storage functions and sustaining low flows through the slow release of water. However, their impacts on water resource availability and flood control are mainly driven by wetland type and location within a watershed. So, despite the general agreement about these major hydrological functions, little is known about their spatial and typological influences. Consequently, assessing the quantitative impact of wetlands on hydrological regimes has become a relevant issue for both the scientific and decision-making communities. To investigate the hydrologic response at the watershed scale, mathematical modelling has been a well-accepted framework. Specific isolated and riparian wetland modules were implemented in the PHYSITEL/HYDROTEL distributed hydrological modelling platform to assess the impact of the spatial distribution of isolated and riparian wetlands on the stream flows of the Becancour River watershed, Quebec, Canada. More specifically, the focus was on assessing whether stream flow parameters, including peak flow and low flow, were related to: (i) the geographic location of wetlands, (ii) the typology of wetlands, and (iii) the season of the year. Preliminary results suggest that isolated and riparian wetlands have individual space- and time-dependent impacts on the hydrologic response of the study watershed and provide relevant information for the design of wetland protection and restoration programs.

  19. The discrimination filters to increase the reliability of EEW association on the location using geometric distribution of triggered stations with upgrading a travel time model.

    NASA Astrophysics Data System (ADS)

    Chi, H. C.; Park, J. H.; Lim, I. S.; Seong, Y. J.

    2015-12-01

    In the operation of an Earthquake Early Warning System (EEWS), the alerting criteria are among the most important parameters in tuning an acceptable warning system. During the early stage of testing EEW systems from 2011 to 2013, we adapted ElarmS by UC Berkeley BSL to the Korean seismic network and applied very simple alerting criteria combining the number of triggered stations and the magnitude. The real-time tests showed that all events of magnitude 3.0 or greater located within the seismic network were well detected. However, two events located at sea between the mainland and an island gave false results with magnitudes over 4.0, and one inland event gave false results with magnitude over 3.0, all related to teleseismic waves. These teleseismic-related false events were caused by spurious correlation during the association procedure, and the corresponding geometric distribution of associated stations is crescent-shaped. Seismic stations are not deployed uniformly, so the expected bias ratio varies with the evaluated epicentral location; this ratio is calculated in advance and stored in a database, called TrigDB, for discriminating teleseismic-origin false alarms. We developed a method, called 'TrigDB back filling', that updates the location by supplementarily associating stations that were not associated previously, comparing the trigger times of sandwiched stations against predefined criteria such as travel time. Because the EEW program assumes that all events are local, teleseismic events can yield more triggered stations under back filling of the unassociated stations than under the normal association. We also developed a travel-time curve (K-SEIS-1DTT2015) to reduce split events for the EEWS. After applying the K-SEIS-1DTT2015 model, these teleseismic-related false events are reduced. As a result of these methods we could get more
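
    A minimal sketch of the 'back filling' step described above, assuming a constant-velocity 1-D travel-time model in place of K-SEIS-1DTT2015 (which is not reproduced here); the station names, coordinates, pick times, and the 2 s tolerance are illustrative only.

```python
import math

# Constant-velocity stand-in for a 1-D travel-time model; the paper's
# K-SEIS-1DTT2015 curve is not public, so this is an assumption.
V_P = 6.0  # km/s, assumed crustal P-wave speed

def travel_time_s(epi_xy, sta_xy):
    """Predicted P travel time (s) for a flat-earth, constant-velocity model."""
    return math.dist(epi_xy, sta_xy) / V_P

def back_fill(epi_xy, origin_t, triggers, associated, tol_s=2.0):
    """Re-scan unassociated triggers and associate those whose pick times
    match the predicted travel time within tol_s seconds ('back filling')."""
    for name, (sta_xy, pick_t) in triggers.items():
        if name in associated:
            continue
        if abs((pick_t - origin_t) - travel_time_s(epi_xy, sta_xy)) <= tol_s:
            associated.add(name)
    return associated

# Toy usage: ST02 was missed by the normal association but fits the
# predicted travel time, so back filling adds it.
triggers = {"ST01": ((0.0, 30.0), 5.1), "ST02": ((80.0, 0.0), 13.2)}
print(back_fill((0.0, 0.0), 0.0, triggers, associated={"ST01"}))
```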

  20. Proton Therapy Verification with PET Imaging

    PubMed Central

    Zhu, Xuping; Fakhri, Georges El

    2013-01-01

    Proton therapy is very sensitive to uncertainties introduced during treatment planning and dose delivery. PET imaging of proton induced positron emitter distributions is the only practical approach for in vivo, in situ verification of proton therapy. This article reviews the current status of proton therapy verification with PET imaging. The different data detecting systems (in-beam, in-room and off-line PET), calculation methods for the prediction of proton induced PET activity distributions, and approaches for data evaluation are discussed. PMID:24312147

  1. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  2. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  3. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  4. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  5. Influence of pH, layer charge location and crystal thickness distribution on U(VI) sorption onto heterogeneous dioctahedral smectite.

    PubMed

    Guimarães, Vanessa; Rodríguez-Castellón, Enrique; Algarra, Manuel; Rocha, Fernando; Bobos, Iuliu

    2016-11-01

    The UO2(2+) adsorption on smectite (samples BA1, PS2 and PS3) with a heterogeneous structure was investigated at pH 4 (I=0.02 M) and pH 6 (I=0.2 M) in batch experiments, with the aim of evaluating the influence of pH, layer charge location and crystal thickness distribution. The mean crystal thickness of the smectite crystallites used in the sorption experiments ranges from 4.8 nm (sample PS2) to 5.1 nm (sample PS3) and 7.4 nm (sample BA1). Smaller crystallites have higher total surface area and sorption capacity, and octahedral charge location favors higher sorption capacity. The Freundlich, Langmuir and SIPS sorption isotherms were used to model the sorption experiments. The surface complexation and cation exchange reactions were modeled using the PHREEQC code to describe the UO2(2+) sorption on smectite. The amount of UO2(2+) adsorbed on smectite samples decreased significantly at pH 6 and higher ionic strength, where the sorption mechanism was restricted to the edge sites of smectite. Two binding energy components at 380.8±0.3 and 382.2±0.3 eV, assigned to hydrated UO2(2+) adsorbed by cation exchange and by inner-sphere complexation on the external sites at pH 4, were identified after deconvolution of the U4f7/2 peak by X-ray photoelectron spectroscopy. In addition, two new binding energy components at 380.3±0.3 and 381.8±0.3 eV, assigned to AlOUO2(+) and SiOUO2(+) surface species, were observed at pH 6. PMID:27285596
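
    For illustration, a short sketch of fitting the Langmuir and Freundlich isotherms named above to equilibrium sorption data with scipy; the data points and starting parameters below are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """q = q_max * k * c / (1 + k * c): monolayer sorption."""
    return q_max * k * c / (1.0 + k * c)

def freundlich(c, k_f, n):
    """q = k_f * c**(1/n): empirical heterogeneous-surface sorption."""
    return k_f * c ** (1.0 / n)

# Illustrative equilibrium data (NOT the paper's measurements).
c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium conc., mg/L
q_ads = np.array([2.1, 3.6, 5.5, 8.2, 9.7, 10.6])   # sorbed amount, mg/g

for model, p0 in ((langmuir, (12.0, 0.3)), (freundlich, (3.0, 2.0))):
    popt, _ = curve_fit(model, c_eq, q_ads, p0=p0)
    sse = float(np.sum((q_ads - model(c_eq, *popt)) ** 2))
    print(f"{model.__name__}: params={popt.round(3)}, SSE={sse:.3f}")
```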

  6. Estimation of locations and migration of debris flows on Izu-Oshima Island, Japan, on 16 October 2013 by the distribution of high frequency seismic amplitudes

    NASA Astrophysics Data System (ADS)

    Ogiso, Masashi; Yomogida, Kiyoshi

    2015-06-01

    In the early morning of 16 October 2013, large debris flows induced by heavy rainfall from the approaching Typhoon 1326 (Wipha) left more than 30 people dead on Izu-Oshima Island, Japan. We successfully estimated the locations and migration processes of five large debris-flow events using the spatial distribution of high-frequency seismic amplitudes recorded by a seismic network on the island. The flows occurred on the western flank of the island, almost at the same place where large traces of debris flows were identified after the disaster. During each debris-flow event, the estimated locations migrated downstream with time, from the caldera rim of Miharayama volcano in the center of the island to its western side, with a speed of up to 30 m/s. The estimated time series of source amplitudes differ from event to event, exhibiting a large variety of flow sequences even though the events seem to have repeated within a relatively narrow area over several tens of minutes. The present approach may be utilized for early detection and warning, for prevention and reduction of this type of disaster in the future.
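
    A minimal sketch of amplitude-based source location of the kind described above: grid-search the position whose predicted amplitude decay best fits the observed high-frequency amplitudes, repeated per time window to track migration. The decay law and the frequency, Q, and velocity constants are assumptions for illustration, not the paper's values.

```python
import numpy as np

# Assumed decay law A_i = A0 * exp(-B * r_i) / r_i (body waves with
# anelastic attenuation); f, Q and v below are illustrative.
B = np.pi * 5.0 / (50.0 * 1.5)  # pi*f/(Q*v) with f=5 Hz, Q=50, v=1.5 km/s

def locate(obs_amp, sta_xy, grid_xy):
    """Grid-search the source position whose predicted amplitude pattern
    best fits the observed high-frequency amplitudes (least squares)."""
    best = None
    for gx, gy in grid_xy:
        r = np.hypot(sta_xy[:, 0] - gx, sta_xy[:, 1] - gy) + 1e-3  # km
        g = np.exp(-B * r) / r                # unit-source amplitudes
        a0 = (obs_amp @ g) / (g @ g)          # best-fit source amplitude
        misfit = float(np.sum((obs_amp - a0 * g) ** 2))
        if best is None or misfit < best[0]:
            best = (misfit, (gx, gy), a0)
    return best

# Synthetic check: amplitudes generated from a source at (1, 1) km are
# recovered by the grid search; repeating per window traces migration.
sta = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
r_true = np.hypot(sta[:, 0] - 1.0, sta[:, 1] - 1.0)
obs = np.exp(-B * r_true) / r_true
grid = [(x, y) for x in np.linspace(0, 4, 41) for y in np.linspace(0, 4, 41)]
print(locate(obs, sta, grid)[1])  # -> (1.0, 1.0)
```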

  7. SU-E-J-58: Dosimetric Verification of Metal Artifact Effects: Comparison of Dose Distributions Affected by Patient Teeth and Implants

    SciTech Connect

    Lee, M; Kang, S; Lee, S; Suh, T; Lee, J; Park, J; Park, H; Lee, B

    2014-06-01

    Purpose: Implant-supported dentures seem particularly appropriate for patients who become edentulous, and cancer patients are no exception. As the number of people of all ages with dental implants increases, critical dosimetric verification of metal artifact effects is required for more accurate head and neck radiation therapy. The purpose of this study is to verify the theoretical analysis of metal (streak and dark) artifacts and to evaluate the dosimetric effects caused by dental implants in patient CT images, using a humanoid phantom with patient teeth and implants inserted. Methods: The phantom comprises a cylinder shaped to simulate the anatomical structures of a human head and neck. Various clinical cases were applied to make the phantom closely resemble a human, and the developed phantom supports two configurations: (i) closed mouth and (ii) opened mouth. RapidArc plans of 4 cases were created in the Eclipse planning system. A total dose of 2000 cGy in 10 fractions was prescribed to the whole planning target volume (PTV) using 6 MV photon beams. The Acuros XB (AXB) advanced dose calculation algorithm, the Analytical Anisotropic Algorithm (AAA) and the progressive resolution optimizer were used in dose optimization and calculation. Results: In both closed- and opened-mouth phantoms, dose variation was higher for dark artifacts, which formed extensively around the metal implants, than for streak artifacts. When the PTV was delineated on dark regions or large streak-artifact regions, a maximum dose error of 7.8% and an average difference of 3.2% were observed. The average minimum dose to the PTV predicted by AAA was about 5.6% higher, and OAR doses were also 5.2% higher, compared to AXB. Conclusion: The results of this study showed that AXB dose calculation involving high-density materials is more accurate than AAA calculation, and AXB was superior to AAA in dose predictions beyond the dark artifact/air cavity region when compared against the measurements.

  8. Fuel Retrieval System (FRS) Design Verification

    SciTech Connect

    YANOCHKO, R.M.

    2000-01-27

    This document was prepared as part of an independent review to explain design verification activities already completed, and to define the remaining design verification actions for the Fuel Retrieval System. The Fuel Retrieval Subproject was established as part of the Spent Nuclear Fuel Project (SNF Project) to retrieve and repackage the SNF located in the K Basins. The Fuel Retrieval System (FRS) construction work is complete in the KW Basin, and start-up testing is underway. Design modifications and construction planning are also underway for the KE Basin. An independent review of the design verification process as applied to the K Basin projects was initiated in support of preparation for the SNF Project operational readiness review (ORR).

  9. Voltage verification unit

    DOEpatents

    Martin, Edward J.

    2008-01-15

    A voltage verification unit and method for determining the absence of potentially dangerous potentials within a power supply enclosure without Mode 2 work is disclosed. With this device and method, a qualified worker follows a relatively simple protocol that involves a function test (hot, cold, hot) of the voltage verification unit before Lock Out/Tag Out and, once the Lock Out/Tag Out is completed, testing or "trying" by simply reading a display on the voltage verification unit, all without exposure of the operator to the interior of the voltage supply enclosure. According to a preferred embodiment, the voltage verification unit includes test leads to allow diagnostics with other meters, without the necessity of accessing potentially dangerous bus bars or the like.

  10. Verification of RADTRAN

    SciTech Connect

    Kanipe, F.L.; Neuhauser, K.S.

    1995-12-31

    This document presents details of the verification process of the RADTRAN computer code which was established for the calculation of risk estimates for radioactive materials transportation by highway, rail, air, and waterborne modes.

  11. ENVIRONMENTAL TECHNOLOGY VERIFICATION--FUELCELL ENERGY, INC.: DFC 300A MOLTEN CARBONATE FUEL CELL COMBINED HEAT AND POWER SYSTEM

    EPA Science Inventory

    The U.S. EPA operates the Environmental Technology Verification program to facilitate the deployment of innovative technologies through performance verification and information dissemination. A technology area of interest is distributed electrical power generation, particularly w...

  12. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples; Jerome Eyer

    2001-05-01

    The Earth Sciences and Resources Institute, University of South Carolina is conducting a 14-month proof-of-concept study to determine the location and distribution of subsurface Dense Nonaqueous Phase Liquid (DNAPL) carbon tetrachloride (CCl4) contamination at the 216-Z-9 crib, 200 West Area, Department of Energy (DOE) Hanford Site, Washington by use of two-dimensional high-resolution seismic reflection surveys and borehole geophysical data. The study makes use of recent advances in seismic reflection amplitude versus offset (AVO) technology to directly detect the presence of subsurface DNAPL. The techniques proposed are a noninvasive means of site characterization and direct free-phase DNAPL detection. This report covers the results of Task 3 and the change of scope of Tasks 4-6. Task 1 comprises site evaluation and seismic modeling studies. The site evaluation consists of identifying and collecting preexisting geological and geophysical information regarding subsurface structure and the presence and quantity of DNAPL. The seismic modeling studies were undertaken to determine the likelihood that an AVO response exists and its probable manifestation. Task 2 is the design and acquisition of 2-D seismic reflection data designed to image areas of probable high concentration of DNAPL. Task 3 is the processing and interpretation of the 2-D data. Tasks 4, 5, and 6 comprised the design, acquisition, processing, and interpretation of a three-dimensional (3-D) seismic survey at the Z-9 crib area, 200 West Area, Hanford.

  13. Pyroclastic Eruptions in a Mars Climate Model: The Effects of Grain Size, Plume Height, Density, Geographical Location, and Season on Ash Distribution

    NASA Astrophysics Data System (ADS)

    Kerber, L. A.; Head, J. W.; Madeleine, J.; Wilson, L.; Forget, F.

    2010-12-01

    Pyroclastic volcanism has played a major role in the geologic history of the planet Mars. In addition to several highland patera features interpreted to be composed of pyroclastic material, there are a number of vast, fine-grained, friable deposits which may have a volcanic origin. The physical processes involved in the explosive eruption of magma, including the nucleation of bubbles, the fragmentation of magma, the incorporation of atmospheric gases, the formation of a buoyant plume, and the fall-out of individual pyroclasts, have been modeled extensively for martian conditions [Wilson, L., J.W. Head (2007), Explosive volcanic eruptions on Mars: Tephra and accretionary lapilli formation, dispersal and recognition in the geologic record, J. Volcanol. Geotherm. Res. 163, 83-97]. We have further developed and expanded this original model in order to take into account differing temperature, pressure, and wind regimes found at different altitudes, at different geographic locations, and during different martian seasons. Using a well-established Mars global circulation model [LMD-GCM, Forget, F., F. Hourdin, R. Fournier, C. Hourdin, O. Talagrand (1999), Improved general circulation models of the martian atmosphere from the surface to above 80 km, J. Geophys. Res. 104, 24,155-24,176] we are able to link the volcanic eruption model of Wilson and Head (2007) to the spatially and temporally dynamic GCM temperature, pressure, and wind profiles to create three-dimensional maps of expected ash deposition on the surface. Here we present results exploring the effects of grain-size distribution, plume height, density of ash, latitude, season, and atmospheric pressure on the areal extent and shape of the resulting ash distribution. Our results show that grain-size distribution and plume height most strongly affect the distance traveled by the pyroclasts from the vent, while latitude and season can have a large effect on the direction in which the pyroclasts travel and the final shape

  14. Eldercare Locator

    MedlinePlus

    Welcome to the Eldercare Locator, a public service of the U.S. Administration on Aging connecting you to services for older ...

  15. Alu and L1 sequence distributions in Xq24-q28 and their comparative utility in YAC contig assembly and verification

    SciTech Connect

    Porta, G.; Zucchi, I.; Schlessinger, D.; Hillier, L.; Green, P.; Nowotny, V.; D`Urso, M.

    1993-05-01

    The contents of Alu- and L1-containing TaqI restriction fragments were assessed by Southern blot analyses across YAC contigs already assembled by other means and localized within Xq24-q28. Fingerprinting patterns of YACs in contigs were concordant. Using software based on that of M. V. Olson et al. to analyze digitized data on fragment sizes, fingerprinting itself could establish matches among about 40% of a test group of 435 YACs. At 100-kb resolution, both repetitive elements were found throughout the region, with no apparent enrichment of Alu or L1 in DNA of G bands compared to that found in R bands. However, consistent with a random overall distribution, delimited regions of up to 100 kb contained clusters of repetitive elements. The local concentrations may help to account for the reported differential hybridization of Alu and L1 probes to segments of metaphase chromosomes. 40 refs., 6 figs., 2 tabs.

  16. Verification of statistical method CORN for modeling of microfuel in the case of high grain concentration

    NASA Astrophysics Data System (ADS)

    Chukbar, B. K.

    2015-12-01

    Two methods of modeling a double-heterogeneity fuel are studied: the deterministic positioning and the statistical method CORN of the MCU software package. The effect of distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for the cases of the microfuel concentration up to 170 cm⁻³ in a pebble bed are presented. The admissibility of homogenization of the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.

  17. Verification of statistical method CORN for modeling of microfuel in the case of high grain concentration

    SciTech Connect

    Chukbar, B. K.

    2015-12-15

    Two methods of modeling a double-heterogeneity fuel are studied: the deterministic positioning and the statistical method CORN of the MCU software package. The effect of distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for the cases of the microfuel concentration up to 170 cm⁻³ in a pebble bed are presented. The admissibility of homogenization of the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.

  18. Assessment of total and organic vanadium levels and their bioaccumulation in edible sea cucumbers: tissues distribution, inter-species-specific, locational differences and seasonal variations.

    PubMed

    Liu, Yanjun; Zhou, Qingxin; Xu, Jie; Xue, Yong; Liu, Xiaofang; Wang, Jingfeng; Xue, Changhu

    2016-02-01

    The objective of this study is to investigate the levels of vanadium in sea cucumbers and their inter-species, locational and seasonal variations, and to further validate several potential factors controlling the distribution of metals in sea cucumbers. Vanadium levels were evaluated in samples of edible sea cucumbers and were shown to exhibit differences across seasons, species and sampling sites. High vanadium concentrations were measured in the sea cucumbers, and all of the vanadium detected was in an organic form. Mean vanadium concentrations were considerably higher in the blood (sea cucumber) than in the other studied tissues. The highest concentration of vanadium (2.56 μg g⁻¹), as well as a higher proportion of organic vanadium (85.5%), was observed in the Holothuria scabra samples compared with all other samples. Vanadium levels in Apostichopus japonicus from Bohai Bay and the Yellow Sea show marked seasonal variations. Average values of 1.09 μg g⁻¹ of total vanadium and 0.79 μg g⁻¹ of organic vanadium were obtained in various species of sea cucumbers. Significant positive correlations between vanadium in the seawater and organic vanadium in the sea cucumber (r = 81.67%, p = 0.00), as well as between vanadium in the sediment and organic vanadium in the sea cucumber (r = 77.98%, p = 0.00), were observed. Vanadium concentrations depend on the seasons (salinity, temperature), species, sampling sites and seawater environment (seawater, sediment). Given the adverse toxicological effects of inorganic vanadium and its positive roles in controlling the development of diabetes in humans, a regular monitoring programme of vanadium content in edible sea cucumbers can be recommended. PMID:25732906

  19. Secure optical verification using dual phase-only correlation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Zhang, Yan; Xie, Zhenwei; Liu, Zhengjun; Liu, Shutian

    2015-02-01

    We introduce a security-enhanced optical verification system using dual phase-only correlation based on a novel correlation algorithm. By employing a nonlinear encoding, the inherent locks of the verification system are obtained as real-valued random distributions, and the identity keys assigned to authorized users are designed as pure phases. The verification process is implemented as a two-step correlation, so only authorized identity keys can output the discriminating auto-correlation and cross-correlation signals that satisfy the preset threshold values. Compared with traditional phase-only-correlation-based verification systems, a higher security level against counterfeiting and collisions is obtained, which is demonstrated by cryptanalysis using known attacks, such as the known-plaintext attack and the chosen-plaintext attack. Optical experiments as well as necessary numerical simulations are carried out to support the proposed verification method.
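
    To illustrate the building block of such a scheme, a sketch of a single phase-only correlation step follows; the paper's nonlinear encoding, dual-correlation chaining, and threshold values are not reproduced, and the 64x64 random arrays are placeholders.

```python
import numpy as np

def phase_only_correlation(f, g):
    """Correlate two 2-D arrays using only the phase of their spectra."""
    cross = np.fft.fft2(f) * np.conj(np.fft.fft2(g))
    cross /= np.abs(cross) + 1e-12        # keep phase, discard amplitude
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(0)
lock = rng.random((64, 64))               # real-valued random "lock"
impostor = rng.random((64, 64))

peak_auth = phase_only_correlation(lock, lock).max()      # ~1.0: sharp peak
peak_fake = phase_only_correlation(lock, impostor).max()  # near noise level
print(peak_auth, peak_fake)   # verification = compare peaks to a threshold
```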

  20. Explaining Verification Conditions

    NASA Technical Reports Server (NTRS)

    Deney, Ewen; Fischer, Bernd

    2006-01-01

    The Hoare approach to program verification relies on the construction and discharge of verification conditions (VCs) but offers no support to trace, analyze, and understand the VCs themselves. We describe a systematic extension of the Hoare rules by labels so that the calculus itself can be used to build up explanations of the VCs. The labels are maintained through the different processing steps and rendered as natural language explanations. The explanations can easily be customized and can capture different aspects of the VCs; here, we focus on their structure and purpose. The approach is fully declarative and the generated explanations are based only on an analysis of the labels rather than directly on the logical meaning of the underlying VCs or their proofs. Keywords: program verification, Hoare calculus, traceability.

  1. Nuclear disarmament verification

    SciTech Connect

    DeVolpi, A.

    1993-12-31

    Arms control treaties, unilateral actions, and cooperative activities -- reflecting the defusing of East-West tensions -- are causing nuclear weapons to be disarmed and dismantled worldwide. In order to provide for future reductions and to build confidence in the permanency of this disarmament, verification procedures and technologies would play an important role. This paper outlines arms-control objectives, treaty organization, and actions that could be undertaken. For the purposes of this Workshop on Verification, nuclear disarmament has been divided into five topical subareas: Converting nuclear-weapons production complexes, Eliminating and monitoring nuclear-weapons delivery systems, Disabling and destroying nuclear warheads, Demilitarizing or non-military utilization of special nuclear materials, and Inhibiting nuclear arms in non-nuclear-weapons states. This paper concludes with an overview of potential methods for verification.

  2. Wind gust warning verification

    NASA Astrophysics Data System (ADS)

    Primo, Cristina

    2016-07-01

    Operational meteorological centres around the world increasingly include warnings as one of their regular forecast products. Warnings are issued to warn the public about extreme weather situations that might occur leading to damages and losses. In forecasting these extreme events, meteorological centres help their potential users in preventing the damage or losses they might suffer. However, verifying these warnings requires specific methods. This is due not only to the fact that they happen rarely, but also because a new temporal dimension is added when defining a warning, namely the time window of the forecasted event. This paper analyses the issues that might appear when dealing with warning verification. It also proposes some new verification approaches that can be applied to wind warnings. These new techniques are later applied to a real life example, the verification of wind gust warnings at the German Meteorological Centre ("Deutscher Wetterdienst"). Finally, the results obtained from the latter are discussed.

  3. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    steps in the process. Verification ensures that the CFD code is solving the equations as intended (no errors in the implementation). This is typically done either through the method of manufactured solutions (MMS) or through careful step-by-step comparisons with other verified codes. After the verification step is concluded, validation is performed to document the ability of the turbulence model to represent different types of flow physics. Validation can involve a large number of test case comparisons with experiments, theory, or DNS. Organized workshops have proved to be valuable resources for the turbulence modeling community in its pursuit of turbulence modeling verification and validation. Workshop contributors using different CFD codes run the same cases, often according to strict guidelines, and compare results. Through these comparisons, it is often possible to (1) identify codes that have likely implementation errors, and (2) gain insight into the capabilities and shortcomings of different turbulence models to predict the flow physics associated with particular types of flows. These are valuable lessons because they help bring consistency to CFD codes by encouraging the correction of faulty programming and facilitating the adoption of better models. They also sometimes point to specific areas needed for improvement in the models. In this paper, several recent workshops are summarized primarily from the point of view of turbulence modeling verification and validation. Furthermore, the NASA Langley Turbulence Modeling Resource website is described. The purpose of this site is to provide a central location where RANS turbulence models are documented, and test cases, grids, and data are provided. The goal of this paper is to provide an abbreviated survey of turbulence modeling verification and validation efforts, summarize some of the outcomes, and give some ideas for future endeavors in this area.
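
    As a small illustration of the MMS idea mentioned above, a sketch for a 1-D Poisson problem: choose a smooth solution, derive the forcing symbolically, and hand both to the code under test. The problem and the manufactured solution u_m here are arbitrary choices, not from the paper.

```python
import sympy as sp

# Method of manufactured solutions for -u''(x) = f(x) on [0, 1]:
# pick u_m, derive the forcing f that makes it an exact solution, then
# feed f and u_m's boundary values to the solver under verification.
x = sp.symbols("x")
u_m = sp.sin(sp.pi * x) + x**2            # arbitrary smooth choice
f = sp.simplify(-sp.diff(u_m, x, 2))      # forcing that makes u_m exact
print("f(x) =", f)                        # -> pi**2*sin(pi*x) - 2
```

    Comparing the discrete solution against u_m on successively refined grids then yields the observed order of accuracy (see the sketch under "Robust verification analysis" below).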

  4. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  5. Spectroscopic verification of zinc absorption and distribution in the desert plant Prosopis juliflora-velutina (velvet mesquite) treated with ZnO nanoparticles

    PubMed Central

    Hernandez-Viezcas, J.A.; Castillo-Michel, H.; Servin, A.D.; Peralta-Videa, J.R.; Gardea-Torresdey, J.L.

    2012-01-01

    The impact of metal nanoparticles (NPs) on biological systems, especially plants, is still not well understood. The aim of this research was to determine the effects of zinc oxide (ZnO) NPs in velvet mesquite (Prosopis juliflora-velutina). Mesquite seedlings were grown for 15 days in hydroponics with ZnO NPs (10 nm) at concentrations varying from 500 to 4000 mg L−1. Zinc concentrations in roots, stems and leaves were determined by inductively coupled plasma optical emission spectroscopy (ICP-OES). Plant stress was examined by the specific activity of catalase (CAT) and ascorbate peroxidase (APOX); while the biotransformation of ZnO NPs and Zn distribution in tissues was determined by X-ray absorption spectroscopy (XAS) and micro X-ray fluorescence (μXRF), respectively. ICP-OES results showed that Zn concentrations in tissues (2102 ± 87, 1135 ± 56, and 628 ± 130 mg kg−1 d wt in roots, stems, and leaves, respectively) were found at 2000 mg ZnO NPs L−1. Stress tests showed that ZnO NPs increased CAT in roots, stems, and leaves, while APOX increased only in stems and leaves. XANES spectra demonstrated that ZnO NPs were not present in mesquite tissues, while Zn was found as Zn(II), resembling the spectra of Zn(NO3)2. The μXRF analysis confirmed the presence of Zn in the vascular system of roots and leaves in ZnO NP treated plants. PMID:22820414

  6. TFE verification program

    NASA Astrophysics Data System (ADS)

    1994-01-01

    This is the final semiannual progress report for the Thermionic Fuel Element (TFE) Verification Program. A decision was made in August 1993 to begin a Close Out Program on October 1, 1993. Final reports summarizing the design analyses and test activities of the TFE Verification Program will be written as stand-alone documents for each task. The objective of the semiannual progress report is to summarize the technical results obtained during the latest reporting period. The information presented herein includes evaluated test data, design evaluations, the results of analyses and the significance of results.

  7. General Environmental Verification Specification

    NASA Technical Reports Server (NTRS)

    Milne, J. Scott, Jr.; Kaufman, Daniel S.

    2003-01-01

    The NASA Goddard Space Flight Center's General Environmental Verification Specification (GEVS) for STS and ELV Payloads, Subsystems, and Components is currently being revised based on lessons learned from GSFC engineering and flight assurance. The GEVS has been used by Goddard flight projects for the past 17 years as a baseline from which to tailor their environmental test programs. A summary of the requirements and updates is presented along with the rationale behind the changes. The major test areas covered by the GEVS include mechanical, thermal, and EMC, as well as more general requirements for planning and tracking of the verification programs.

  8. Requirement Assurance: A Verification Process

    NASA Technical Reports Server (NTRS)

    Alexander, Michael G.

    2011-01-01

    Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "did the product meet the stated specification, performance, or design documentation?". In order to ensure the system was built correctly, the practicing system engineer must verify each product requirement using verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts" which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the System Engineer in NPR 7123.1A NASA Systems Engineering Processes and Requirements with regards to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.

  9. Context Effects in Sentence Verification.

    ERIC Educational Resources Information Center

    Kiger, John I.; Glass, Arnold L.

    1981-01-01

    Three experiments examined what happens to the reaction time to verify easy items when they are mixed with difficult items in a verification task. Subjects' verification of simple arithmetic equations and sentences took longer when the items were placed in a difficult list. Difficult sentences also slowed the verification of easy arithmetic equations. (Author/RD)

  10. Computer Graphics Verification

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Video processing creates technical animation sequences using studio quality equipment to realistically represent fluid flow over space shuttle surfaces, helicopter rotors, and turbine blades. Computer systems Co-op, Tim Weatherford, performing computer graphics verification. Part of Co-op brochure.

  11. FPGA Verification Accelerator (FVAX)

    NASA Technical Reports Server (NTRS)

    Oh, Jane; Burke, Gary

    2008-01-01

    Is Verification Acceleration Possible? Increasing the visibility of the internal nodes of the FPGA results in much faster debug time; forcing internal signals directly allows a problem condition to be set up very quickly. Is this all? No, this is part of a comprehensive effort to improve the JPL FPGA design and V&V process.

  12. ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    This presentation will be given at the EPA Science Forum 2005 in Washington, DC. The Environmental Technology Verification Program (ETV) was initiated in 1995 to speed implementation of new and innovative commercial-ready environmental technologies by providing objective, 3rd pa...

  13. Telescope performance verification

    NASA Astrophysics Data System (ADS)

    Swart, Gerhard P.; Buckley, David A. H.

    2004-09-01

    While Systems Engineering appears to be widely applied on the very large telescopes, it is lacking in the development of many of the medium and small telescopes currently in progress. The latter projects rely heavily on the experience of the project team, verbal requirements and conjecture based on the successes and failures of other telescopes. Furthermore, it is considered an unaffordable luxury to "close-the-loop" by carefully analysing and documenting the requirements and then verifying the telescope's compliance with them. In this paper the authors contend that a Systems Engineering approach is a keystone in the development of any telescope and that verification of the telescope's performance is not only an important management tool but also forms the basis upon which successful telescope operation can be built. The development of the Southern African Large Telescope (SALT) has followed such an approach and is now in the verification phase of its development. Parts of the SALT verification process will be discussed in some detail to illustrate the suitability of this approach, including oversight by the telescope shareholders, recording of requirements and results, design verification and performance testing. Initial test results will be presented where appropriate.

  14. Multibody modeling and verification

    NASA Technical Reports Server (NTRS)

    Wiens, Gloria J.

    1989-01-01

    A summary of a ten week project on flexible multibody modeling, verification and control is presented. Emphasis was on the need for experimental verification. A literature survey was conducted for gathering information on the existence of experimental work related to flexible multibody systems. The first portion of the assigned task encompassed the modeling aspects of flexible multibodies that can undergo large angular displacements. Research in the area of modeling aspects were also surveyed, with special attention given to the component mode approach. Resulting from this is a research plan on various modeling aspects to be investigated over the next year. The relationship between the large angular displacements, boundary conditions, mode selection, and system modes is of particular interest. The other portion of the assigned task was the generation of a test plan for experimental verification of analytical and/or computer analysis techniques used for flexible multibody systems. Based on current and expected frequency ranges of flexible multibody systems to be used in space applications, an initial test article was selected and designed. A preliminary TREETOPS computer analysis was run to ensure frequency content in the low frequency range, 0.1 to 50 Hz. The initial specifications of experimental measurement and instrumentation components were also generated. Resulting from this effort is the initial multi-phase plan for a Ground Test Facility of Flexible Multibody Systems for Modeling Verification and Control. The plan focusses on the Multibody Modeling and Verification (MMV) Laboratory. General requirements of the Unobtrusive Sensor and Effector (USE) and the Robot Enhancement (RE) laboratories were considered during the laboratory development.

  15. Ada(R) Test and Verification System (ATVS)

    NASA Technical Reports Server (NTRS)

    Strelich, Tom

    1986-01-01

    The Ada Test and Verification System (ATVS) functional description and high level design are completed and summarized. The ATVS will provide a comprehensive set of test and verification capabilities specifically addressing the features of the Ada language, support for embedded system development, distributed environments, and advanced user interface capabilities. Its design emphasis was on effective software development environment integration and flexibility to ensure its long-term use in the Ada software development community.

  16. Effect of object identification algorithms on feature based verification scores

    NASA Astrophysics Data System (ADS)

    Weniger, Michael; Friederichs, Petra

    2015-04-01

    Many modern spatial verification techniques rely on feature identification algorithms. We study the importance of the choice of algorithm and its parameters for the resulting scores. SAL is used as an example to show that these choices have a statistically significant impact on the distributions of object dependent scores. Non-continuous operators used for feature identification are identified as the underlying reason for the observed stability issues, with implications for many feature based verification techniques.
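
    A toy sketch of why a thresholding feature-identification operator is non-continuous: a tiny perturbation of the field can change the number of identified objects, and with it any object-dependent score. The threshold fraction and test field below are illustrative, not SAL's actual parameters.

```python
import numpy as np
from scipy import ndimage

def n_objects(field, frac=0.5):
    """Feature identification by thresholding at frac * max and counting
    connected components; the threshold makes this operator non-continuous."""
    _, num = ndimage.label(field >= frac * field.max())
    return num

base = np.zeros((50, 50))
base[10:20, 10:20] = 1.0      # one clear feature
base[30:40, 30:38] = 0.49     # second feature just below the threshold

rng = np.random.default_rng(2)
print(n_objects(base))                                   # 1 object
print(n_objects(base + 0.03 * rng.random(base.shape)))   # can jump to 2+
```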

  17. 28 CFR 74.6 - Location of eligible persons.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Location of eligible persons. 74.6... PROVISION Verification of Eligibility § 74.6 Location of eligible persons. The Office shall compare the... information system to determine if such persons are living or deceased and, if living, the present location...

  18. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
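
    A schematic of the median-based flavor of this analysis, under strong simplifying assumptions: fit the error model e = C h^p through every pair of grid levels and summarize with the median, so one anomalous level cannot dominate. The paper's constrained-optimization machinery and expert-judgment bounds are not reproduced, and the error values are hypothetical.

```python
import math
from itertools import combinations
from statistics import median

# Grid spacings and error norms from a hypothetical mesh-refinement study.
h = [1.0, 0.5, 0.25, 0.125]
e = [8.1e-2, 2.3e-2, 5.4e-3, 1.6e-3]

# Fit e = C * h^p through every pair of grids, then take the median p:
# robust to one anomalous level, unlike a single least-squares fit.
orders = [math.log(e[i] / e[j]) / math.log(h[i] / h[j])
          for i, j in combinations(range(len(h)), 2)]
print("median order:", median(orders), " all fits:", sorted(orders))
```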

  19. RESRAD-BUILD verification.

    SciTech Connect

    Kamboj, S.; Yu, C.; Biwer, B. M.; Klett, T.

    2002-01-31

    The results generated by the RESRAD-BUILD code (version 3.0) were verified with hand or spreadsheet calculations using equations given in the RESRAD-BUILD manual for different pathways. For verification purposes, different radionuclides--H-3, C-14, Na-22, Al-26, Cl-36, Mn-54, Co-60, Au-195, Ra-226, Ra-228, Th-228, and U-238--were chosen to test all pathways and models. Tritium, Ra-226, and Th-228 were chosen because of the special tritium and radon models in the RESRAD-BUILD code. Other radionuclides were selected to represent a spectrum of radiation types and energies. Verification of the RESRAD-BUILD code was conducted with an initial check of all the input parameters for correctness against their original source documents. Verification of the calculations was performed external to the RESRAD-BUILD code with Microsoft{reg_sign} Excel to verify all the major portions of the code. In some cases, RESRAD-BUILD results were compared with those of external codes, such as MCNP (Monte Carlo N-particle) and RESRAD. The verification was conducted on a step-by-step basis and used different test cases as templates. The following types of calculations were investigated: (1) source injection rate, (2) air concentration in the room, (3) air particulate deposition, (4) radon pathway model, (5) tritium model for volume source, (6) external exposure model, (7) different pathway doses, and (8) time dependence of dose. Some minor errors were identified in version 3.0; these errors have been corrected in later versions of the code. Some possible improvements in the code were also identified.

  20. Visual inspection for CTBT verification

    SciTech Connect

    Hawkins, W.; Wohletz, K.

    1997-03-01

    On-site visual inspection will play an essential role in future Comprehensive Test Ban Treaty (CTBT) verification. Although seismic and remote sensing techniques are the best understood and most developed methods for detection of evasive testing of nuclear weapons, visual inspection can greatly augment the certainty and detail of understanding provided by these more traditional methods. Not only can visual inspection offer ``ground truth`` in cases of suspected nuclear testing, but it also can provide accurate source location and testing media properties necessary for detailed analysis of seismic records. For testing in violation of the CTBT, an offending party may attempt to conceal the test, which most likely will be achieved by underground burial. While such concealment may not prevent seismic detection, evidence of test deployment, location, and yield can be disguised. In this light, if a suspicious event is detected by seismic or other remote methods, visual inspection of the event area is necessary to document any evidence that might support a claim of nuclear testing and provide data needed to further interpret seismic records and guide further investigations. However, the methods for visual inspection are not widely known nor appreciated, and experience is presently limited. Visual inspection can be achieved by simple, non-intrusive means, primarily geological in nature, and it is the purpose of this report to describe the considerations, procedures, and equipment required to field such an inspection.

  1. TFE verification program

    NASA Astrophysics Data System (ADS)

    1990-03-01

    The information presented herein will include evaluated test data, design evaluations, the results of analyses and the significance of results. The program objective is to demonstrate the technology readiness of a Thermionic Fuel Element (TFE) suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range, and a full-power life of 7 years. The TF Verification Program builds directly on the technology and data base developed in the 1960s and 1970s in an AEC/NASA program, and in the SP-100 program conducted in 1983, 1984 and 1985. In the SP-100 program, the attractive features of thermionic power conversion technology were recognized but concern was expressed over the lack of fast reactor irradiation data. The TFE Verification Program addresses this concern. The general logic and strategy of the program to achieve its objectives is shown. Five prior programs form the basis for the TFE Verification Program: (1) AEC/NASA program of the 1960s and early 1970; (2) SP-100 concept development program; (3) SP-100 thermionic technology program; (4) Thermionic irradiations program in TRIGA in FY-88; and (5) Thermionic Program in 1986 and 1987.

  2. TFE Verification Program

    SciTech Connect

    Not Available

    1990-03-01

    The objective of the semiannual progress report is to summarize the technical results obtained during the latest reporting period. The information presented herein will include evaluated test data, design evaluations, the results of analyses and the significance of results. The program objective is to demonstrate the technology readiness of a TFE suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range, and a full-power life of 7 years. The TF Verification Program builds directly on the technology and data base developed in the 1960s and 1970s in an AEC/NASA program, and in the SP-100 program conducted in 1983, 1984 and 1985. In the SP-100 program, the attractive but concern was expressed over the lack of fast reactor irradiation data. The TFE Verification Program addresses this concern. The general logic and strategy of the program to achieve its objectives is shown on Fig. 1-1. Five prior programs form the basis for the TFE Verification Program: (1) AEC/NASA program of the 1960s and early 1970; (2) SP-100 concept development program;(3) SP-100 thermionic technology program; (4) Thermionic irradiations program in TRIGA in FY-86; (5) and Thermionic Technology Program in 1986 and 1987. 18 refs., 64 figs., 43 tabs.

  3. Calibration or verification? A balanced approach for science.

    USGS Publications Warehouse

    Myers, C.T.; Kennedy, D.M.

    1997-01-01

    The calibration of balances is routinely performed both in the laboratory and the field. This process is required to accurately determine the weight of an object or chemical. The frequency of calibration and verification of balances is mandated by their use and location. Tolerance limits for balances could not be located in any standard procedure manuals. A survey was conducted to address the issues of calibration and verification frequency and to discuss the significance of defining tolerance limits for balances. Finally, for the benefit of laboratories unfamiliar with such procedures, we provide a working model based on our laboratory, the Upper Mississippi Science Center (UMSC), in La Crosse, Wisconsin.

  4. Verification of ICESat-2/ATLAS Science Receiver Algorithm Onboard Databases

    NASA Astrophysics Data System (ADS)

    Carabajal, C. C.; Saba, J. L.; Leigh, H. W.; Magruder, L. A.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    NASA's ICESat-2 mission will fly the Advanced Topographic Laser Altimetry System (ATLAS) instrument on a 3-year mission scheduled to launch in 2016. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz and a 6-spot pattern on the Earth's surface. A set of onboard Receiver Algorithms will perform signal processing to reduce the data rate and data volume to acceptable levels. These algorithms distinguish surface echoes from the background noise, limit the daily data volume, and allow the instrument to telemeter only a small vertical region about the signal. For this purpose, three onboard databases are used: a Surface Reference Map (SRM), a Digital Elevation Model (DEM), and Digital Relief Maps (DRMs). The DEM provides minimum and maximum heights that limit the signal search region of the onboard algorithms, including a margin for errors in the source databases and onboard geolocation. Since the surface echoes will be correlated while noise will be randomly distributed, the signal location is found by histogramming the received event times and identifying the histogram bins with statistically significant counts. Once the signal location has been established, the DRMs are used to determine the vertical width of the telemetry band about the signal. The University of Texas Center for Space Research (UT-CSR) is developing the ICESat-2 onboard databases, which are currently being tested using preliminary versions and equivalent representations of elevation ranges and relief more recently developed at Goddard Space Flight Center (GSFC). Global and regional elevation models have been assessed in terms of their accuracy using ICESat geodetic control, and have been used to develop equivalent representations of the onboard databases for testing against the UT-CSR databases, with special emphasis on the ice sheet regions. A series of verification checks have been implemented, including
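
    A toy sketch of the histogramming step described above: bin simulated photon event heights and flag bins whose counts are statistically significant against a Poisson background estimate. The bin width, background rate, and 5-sigma threshold are illustrative assumptions, not ATLAS parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated photon heights (m): uniform background noise plus a surface
# return concentrated near 350 m (all values illustrative).
noise = rng.uniform(0.0, 1000.0, 5000)
signal = rng.normal(350.0, 2.0, 400)
events = np.concatenate([noise, signal])

counts, edges = np.histogram(events, bins=200)   # 5 m bins
lam = np.median(counts)                          # background rate estimate
thresh = lam + 5.0 * np.sqrt(lam)                # ~5-sigma Poisson test
hits = np.where(counts > thresh)[0]
print("signal bins near:", edges[hits])          # -> around 350 m
```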

  5. Continuous verification using multimodal biometrics.

    PubMed

    Sim, Terence; Zhang, Sheng; Janakiraman, Rajkumar; Kumar, Sandeep

    2007-04-01

    Conventional verification systems, such as those controlling access to a secure room, do not usually require the user to reauthenticate himself for continued access to the protected resource. This may not be sufficient for high-security environments in which the protected resource needs to be continuously monitored for unauthorized use. In such cases, continuous verification is needed. In this paper, we present the theory, architecture, implementation, and performance of a multimodal biometrics verification system that continuously verifies the presence of a logged-in user. Two modalities are currently used--face and fingerprint--but our theory can be readily extended to include more modalities. We show that continuous verification imposes additional requirements on multimodal fusion when compared to conventional verification systems. We also argue that the usual performance metrics of false accept and false reject rates are insufficient yardsticks for continuous verification and propose new metrics against which we benchmark our system. PMID:17299225
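
    As a schematic of the continuous-verification idea (not the paper's actual fusion model), a sketch in which time-stamped match scores from any modality are discounted by their age, so trust decays unless the user re-authenticates; the half-life and scores are illustrative.

```python
def decayed_trust(observations, now, half_life_s=60.0):
    """Fuse time-stamped (time, score) verification scores from any modality,
    discounting each by its age; trust falls unless the user re-verifies."""
    trust = 0.0
    for t, score in observations:
        weight = 0.5 ** ((now - t) / half_life_s)   # exponential age decay
        trust = max(trust, weight * score)
    return trust

obs = [(0.0, 0.95),    # face match at login
       (30.0, 0.88)]   # fingerprint match 30 s later
for now in (30.0, 90.0, 300.0):
    print(now, round(decayed_trust(obs, now), 3))   # trust decays over time
```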

  6. Quantum money with classical verification

    SciTech Connect

    Gavinsky, Dmitry

    2014-12-04

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries - this property is not directly related to the possibility of classical verification, nevertheless none of the earlier quantum money constructions is known to possess it.

  7. Quantum money with classical verification

    NASA Astrophysics Data System (ADS)

    Gavinsky, Dmitry

    2014-12-01

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries - this property is not directly related to the possibility of classical verification, nevertheless none of the earlier quantum money constructions is known to possess it.

  8. Surfactants in the sea-surface microlayer and sub-surface water at estuarine locations: Their concentration, distribution, enrichment, and relation to physicochemical characteristics.

    PubMed

    Huang, Yun-Jie; Brimblecombe, Peter; Lee, Chon-Lin; Latif, Mohd Talib

    2015-08-15

    Samples of sea-surface microlayer (SML) and sub-surface water (SSW) were collected from two areas, Kaohsiung City (Taiwan) and the southwest coast of Peninsular Malaysia, to study the influence of the SML on enrichment and distribution and to compare the SML with the SSW. Anionic surfactants (MBAS) predominated in this study and were significantly higher in Kaohsiung than in Malaysia. Industrial areas in Kaohsiung were enriched with high loads of anthropogenic sources, accounted for higher surfactant amounts, and posed greater environmental disadvantages than in Malaysia, where pollutants were associated with agricultural activities. The dissolved organic carbon (DOC), MBAS, and cationic surfactant (DBAS) concentrations in the SML correlated with those in the SSW, reflecting exchanges between the SML and SSW in Kaohsiung. The relationships between surfactants and the physicochemical parameters indicated that DOC and saltwater dilution might affect the distributions of MBAS and DBAS in Kaohsiung. In Malaysia, DOC might be the important factor controlling DBAS. PMID:26093815

  9. Distribution of polychlorinated biphenyls and organochlorine pesticides in human breast milk from various locations in Tunisia: levels of contamination, influencing factors, and infant risk assessment.

    PubMed

    Ennaceur, S; Gandoura, N; Driss, M R

    2008-09-01

    The concentrations of dichlorodiphenyltrichloroethane and its metabolites (DDTs), hexachlorobenzene (HCB), hexachlorocyclohexane isomers (HCHs), dieldrin, and 20 polychlorinated biphenyls (PCBs) were determined in 237 human breast milk samples collected from 12 locations in Tunisia. Gas chromatography with electron capture detector (GC-ECD) was used to identify and quantify residue levels on a lipid basis of organochlorine compounds (OCs). The predominant OCs in human breast milk were PCBs, p,p'-DDE, p,p'-DDT, HCHs, and HCB. Concentrations of DDTs in human breast milk from rural areas were significantly higher than those from urban locations (p<0.05). With regard to PCBs, we observed the predominance of mid-chlorinated congeners due to the presence of PCBs with high K(ow) such as PCB 153, 138, and 180. Positive correlations were found between concentrations of OCs in human breast milk and age of mothers and number of parities, suggesting the influence of such factors on OC burdens in lactating mothers. The comparison of daily intakes of PCBs, DDTs, HCHs, and HCB to infants through human breast milk with guidelines proposed by WHO and Health Canada shows that some individuals accumulated OCs in breast milk close to or higher than these guidelines. PMID:18614165

  10. Distribution of polychlorinated biphenyls and organochlorine pesticides in human breast milk from various locations in Tunisia: Levels of contamination, influencing factors, and infant risk assessment

    SciTech Connect

    Ennaceur, S.; Gandoura, N.; Driss, M.R.

    2008-09-15

    The concentrations of dichlorodiphenyltrichloroethane and its metabolites (DDTs), hexachlorobenzene (HCB), hexachlorocyclohexane isomers (HCHs), dieldrin, and 20 polychlorinated biphenyls (PCBs) were determined in 237 human breast milk samples collected from 12 locations in Tunisia. Gas chromatography with electron capture detection (GC-ECD) was used to identify and quantify residue levels, on a lipid basis, of organochlorine compounds (OCs). The predominant OCs in human breast milk were PCBs, p,p'-DDE, p,p'-DDT, HCHs, and HCB. Concentrations of DDTs in human breast milk from rural areas were significantly higher than those from urban locations (p<0.05). With regard to PCBs, we observed a predominance of mid-chlorinated congeners due to the presence of PCBs with high K(ow), such as PCB 153, 138, and 180. Positive correlations were found between OC concentrations in human breast milk and the age and parity of the mothers, suggesting the influence of these factors on OC burdens in lactating mothers. Comparison of the daily intakes of PCBs, DDTs, HCHs, and HCB by infants through human breast milk with the guidelines proposed by WHO and Health Canada shows that some individuals accumulated OCs in breast milk at levels close to or higher than these guidelines.

  11. Requirements of Operational Verification of the NWSRFS-ESP Forecasts

    NASA Astrophysics Data System (ADS)

    Imam, B.; Werner, K.; Hartmann, H.; Sorooshian, S.; Pritchard, E.

    2006-12-01

    National Weather Service River Forecast System (NWSRFS). We focus on short (1-15 day) ensemble forecasts and investigate the utility of both simple "single forecast" graphical approaches and analytical "distribution" based measures and their associated diagrams. The presentation also addresses the role of both observations and the historical simulations used to initialize hindcasts (retrospective forecasts) for diagnostic verification studies in operational procedures.

  12. Automating engineering verification in ALMA subsystems

    NASA Astrophysics Data System (ADS)

    Ortiz, José; Castillo, Jorge

    2014-08-01

    The Atacama Large Millimeter/submillimeter Array is an interferometer comprising 66 individual high-precision antennas located at over 5000 meters altitude in the north of Chile. Several complex electronic subsystems need to be meticulously tested at different stages of an antenna's commissioning, both independently and when integrated together. The first subsystem integration takes place at the Operations Support Facilities (OSF), at an altitude of 3000 meters. The second integration occurs at the high-altitude Array Operations Site (AOS), where combined performance with the Central Local Oscillator (CLO) and Correlator is also assessed. In addition, there are several other events requiring complete or partial verification of compliance with instrument specifications, such as parts replacement, calibration, relocation within the AOS, preventive maintenance, and troubleshooting due to poor performance in scientific observations. Restricted engineering time allocation and the constant pressure of minimizing downtime in a 24/7 astronomical observatory impose the need to complete (and report) the aforementioned verifications in the least possible time. Array-wide disturbances, such as global power interruptions and the subsequent recovery, create the added challenge of executing this checkout on multiple antenna elements at once. This paper presents the outcome of automating the setup, execution, notification, and reporting of engineering verification in ALMA, and how these efforts have resulted in a dramatic reduction of both the time and the operator training required. The Signal Path Connectivity (SPC) checkout is introduced as a notable case of such automation.

  13. TFE Verification Program

    SciTech Connect

    Not Available

    1993-05-01

    The objective of the semiannual progress report is to summarize the technical results obtained during the latest reporting period. The information presented herein will include evaluated test data, design evaluations, the results of analyses and the significance of results. The program objective is to demonstrate the technology readiness of a TFE (thermionic fuel element) suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range, and a full-power life of 7 years. The TFE Verification Program builds directly on the technology and data base developed in the 1960s and early 1970s in an AEC/NASA program, and in the SP-100 program conducted in 1983, 1984 and 1985. In the SP-100 program, the attractive features of thermionic power conversion technology were recognized but concern was expressed over the lack of fast reactor irradiation data. The TFE Verification Program addresses this concern.

  14. Software Verification and Validation Procedure

    SciTech Connect

    Olund, Thomas S.

    2008-09-15

    This Software Verification and Validation procedure provides the action steps for the Tank Waste Information Network System (TWINS) testing process. The primary objective of the testing process is to provide assurance that the software functions as intended, and meets the requirements specified by the client. Verification and validation establish the primary basis for TWINS software product acceptance.

  15. Deductive Verification of Cryptographic Software

    NASA Technical Reports Server (NTRS)

    Almeida, Jose Barcelar; Barbosa, Manuel; Pinto, Jorge Sousa; Vieira, Barbara

    2009-01-01

    We report on the application of an off-the-shelf verification platform to the RC4 stream cipher cryptographic software implementation (as available in the OpenSSL library), and introduce a deductive verification technique based on self-composition for proving the absence of error propagation.

  16. HDL to verification logic translator

    NASA Astrophysics Data System (ADS)

    Gambles, J. W.; Windley, P. J.

    The increasingly higher number of transistors possible in VLSI circuits compounds the difficulty in ensuring correct designs. As the number of test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work, and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  17. HDL to verification logic translator

    NASA Technical Reports Server (NTRS)

    Gambles, J. W.; Windley, P. J.

    1992-01-01

    The increasingly higher number of transistors possible in VLSI circuits compounds the difficulty in ensuring correct designs. As the number of test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work, and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  18. Method and system for determining depth distribution of radiation-emitting material located in a source medium and radiation detector system for use therein

    DOEpatents

    Benke, Roland R.; Kearfott, Kimberlee J.; McGregor, Douglas S.

    2003-03-04

    A method, a system, and a radiation detector system for use therein are provided for determining the depth distribution of radiation-emitting material distributed in a source medium, such as a contaminated field, without the need to take samples, such as extensive soil samples, to determine the depth distribution. The system includes a portable detector assembly with an x-ray or gamma-ray detector having a detector axis for detecting the emitted radiation. The radiation may be naturally emitted by the material, such as gamma-ray-emitting radionuclides, or emitted when the material is struck by other radiation. The assembly also includes a hollow collimator in which the detector is positioned. The collimator directs the emitted radiation toward the detector as rays parallel to the detector axis. The collimator may be a hollow cylinder positioned so that its central axis is perpendicular to the upper surface of the large-area source when positioned thereon. The collimator allows the detector to angularly sample the emitted radiation over many ranges of polar angles. This is done by forming the collimator as a single adjustable collimator or a set of collimator pieces having various possible configurations when connected together. In any one configuration, the collimator allows the detector to detect only the radiation emitted from a selected range of polar angles measured from the detector axis. Adjustment of the collimator or the detector therein enables the detector to detect radiation emitted from a different range of polar angles. The system further includes a signal processor for processing the signals from the detector, wherein signals obtained from different ranges of polar angles are processed together to obtain a reconstruction of the radiation-emitting material as a function of depth, assuming, but not limited to, a spatially uniform depth distribution of the material within each layer. The detector system includes detectors having

  19. Distribution and abundance of zooplankton at selected locations on the Savannah River and from tributaries of the Savannah River Plant: December 1984--August 1985

    SciTech Connect

    Chimney, M.J.; Cody, W.R.

    1986-11-01

    Spatial and temporal differences in the abundance and composition of the zooplankton community occurred at the Savannah River and SRP creek/swamp sampling locations. Stations are grouped into four categories based on differences in community structure: Savannah River; thermally influenced stations on Four Mile Creek and Pen Branch; closed-canopy stations in the Steel Creek system; and open-canopy Steel Creek stations together with non-thermally influenced stations on Pen Branch and Beaver Dam Creek. Differences among stations were only weakly related to water temperature, dissolved oxygen concentration, conductivity, or pH at the time of collection. None of these parameters appeared to be limiting. Rather, past thermal history and habitat structure seemed to be the important controlling factors. 66 refs.

  20. Towards an in-situ measurement of wave velocity in buried plastic water distribution pipes for the purposes of leak location

    NASA Astrophysics Data System (ADS)

    Almeida, Fabrício C. L.; Brennan, Michael J.; Joseph, Phillip F.; Dray, Simon; Whitfield, Stuart; Paschoalini, Amarildo T.

    2015-12-01

    Water companies are under constant pressure to ensure that water leakage is kept to a minimum. Leak noise correlators are often used to help find and locate leaks. These devices correlate acoustic or vibration signals from sensors placed on either side of the location of a suspected leak. The peak in the cross-correlation function of the measured signals gives the difference between the arrival times of the leak noise at the sensors. To convert the time delay into a distance, the speed at which the leak noise propagates along the pipe (the wave speed) needs to be known. Often this is estimated from historical wave-speed data measured on other pipes at various times and under various conditions, or from tables calculated using simple formulae. Usually, the wave speed is not measured directly at the time of the correlation measurement and is therefore potentially a source of significant error in the localisation of the leak. In this paper, a new method of measuring the wave speed in situ in the presence of a leak, which is robust and simple, is explored. Experiments were conducted on a bespoke large-scale buried pipe test rig, in which a leak was also induced in the pipe between the measurement positions to simulate a condition likely to occur in practice. It is shown that even in conditions where the signal-to-noise ratio is very poor, the wave-speed estimate calculated using the new method differs by less than 5% from the best estimate of 387 m/s.
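    The correlator principle described above reduces to a few lines of arithmetic: find the lag of the cross-correlation peak, convert it to seconds, and solve x = (d - c*dt)/2 for the leak position. The sketch below demonstrates this on synthetic signals; the sampling rate, pipe geometry, and noise model are assumptions for illustration, not the test-rig configuration.

```python
import numpy as np

# Minimal sketch of correlator-style leak localisation: estimate the time
# delay from the peak of the cross-correlation of two sensor signals, then
# convert it to a position using the pipe wave speed.

def leak_position(sig1, sig2, fs, sensor_spacing_m, wave_speed_ms):
    """Distance of the leak from sensor 1, in metres.

    With arrival times t1 = x/c and t2 = (d - x)/c, the measured delay is
    dt = t2 - t1 = (d - 2x)/c, so x = (d - c*dt) / 2.
    """
    xcorr = np.correlate(sig2, sig1, mode="full")
    lag = np.argmax(xcorr) - (len(sig1) - 1)   # samples, sig2 relative to sig1
    dt = lag / fs                               # seconds
    return (sensor_spacing_m - wave_speed_ms * dt) / 2.0

# Synthetic check: leak 30 m from sensor 1 on a 100 m run, c = 387 m/s
fs, c, d, x_true = 5000.0, 387.0, 100.0, 30.0
rng = np.random.default_rng(0)
noise = rng.standard_normal(20000)
sig1 = np.roll(noise, int(round(x_true / c * fs)))        # arrival at sensor 1
sig2 = np.roll(noise, int(round((d - x_true) / c * fs)))  # arrival at sensor 2
print(leak_position(sig1, sig2, fs, d, c))                # ~30 m
```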

  1. Cleanup Verification Package for the 118-F-6 Burial Ground

    SciTech Connect

    H. M. Sulloway

    2008-10-02

    This cleanup verification package documents completion of remedial action for the 118-F-6 Burial Ground located in the 100-FR-2 Operable Unit of the 100-F Area on the Hanford Site. The trenches received waste from the 100-F Experimental Animal Farm, including animal manure, animal carcasses, laboratory waste, plastic, cardboard, metal, and concrete debris as well as a railroad tank car.

  2. Cancelable face verification using optical encryption and authentication.

    PubMed

    Taheri, Motahareh; Mozaffari, Saeed; Keshavarzi, Parviz

    2015-10-01

    In a cancelable biometric system, each instance of enrollment is distorted by a transform function, and the output should not be retransformable to the original data. This paper presents a new cancelable face verification system in the encrypted domain. Encrypted facial images are generated by a double random phase encoding (DRPE) algorithm using two keys (RPM1 and RPM2). To make the system noninvertible, a photon counting (PC) method is utilized, which requires a photon distribution mask (PDM) for information reduction. Verification of sparse images that are not recognizable by direct visual inspection is performed by an unconstrained minimum average correlation energy filter. In the proposed method, the encryption keys (RPM1, RPM2, and PDM) are used on the sender side, and the receiver needs only the encrypted images and correlation filters. In this manner, the system preserves privacy even if the correlation filters are obtained by an adversary. Performance of the PC-DRPE verification system is evaluated under illumination variation, pose changes, and facial expression. Experimental results show that utilizing encrypted images not only increases security but also enhances verification performance. This improvement can be attributed to the fact that, in the proposed system, the face verification problem is converted to a key verification task. PMID:26479930
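    A minimal numerical sketch of the two ingredients named in the abstract, double random phase encoding and photon counting, assuming a 4f-style DRPE and Poisson photon statistics; the image, keys, and photon budget below are placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def drpe_encrypt(img, rpm1, rpm2):
    """4f-style DRPE: one random phase mask in the input plane, a second in
    the Fourier plane; the output is a white-noise-like complex field."""
    field = img * np.exp(2j * np.pi * rpm1)
    spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * rpm2)
    return np.fft.ifft2(spectrum)

def photon_count(intensity, n_photons):
    """Poisson photon-limited version of an intensity image: the
    information-reduction step that helps make the scheme noninvertible."""
    p = intensity / intensity.sum()
    return rng.poisson(n_photons * p)

face = rng.random((64, 64))        # stand-in for a facial image
rpm1 = rng.random(face.shape)      # key 1 (input-plane phase)
rpm2 = rng.random(face.shape)      # key 2 (Fourier-plane phase)
cipher = drpe_encrypt(face, rpm1, rpm2)
sparse = photon_count(np.abs(cipher) ** 2, n_photons=5000)
print(sparse.sum(), sparse.max())  # sparse, visually unrecognizable record
```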

  3. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples

    2001-12-01

    This annual technical progress report covers part of Task 4 (site evaluation), Task 5 (2D seismic design, acquisition, and processing), and Task 6 (2D seismic reflection, interpretation, and AVO analysis) on DOE contract number DE-AR26-98FT40369. The project had planned one additional deployment to a site other than the Savannah River Site (SRS) or the DOE Hanford Site. After the SUBCON midyear review in Albuquerque, NM, it was decided that two additional deployments would be performed. The first deployment is to test the feasibility of using non-invasive seismic reflection and AVO analysis as a monitoring tool to assist in determining the effectiveness of Dynamic Underground Stripping (DUS) in the removal of DNAPL. The second deployment is to the Department of Defense (DOD) Charleston Naval Weapons Station Solid Waste Management Unit 12 (SWMU-12), Charleston, SC, to further test the technique's ability to detect high concentrations of DNAPL. The Charleston Naval Weapons Station SWMU-12 site was selected in consultation with National Energy Technology Laboratory (NETL) and DOD Naval Facilities Engineering Command Southern Division (NAVFAC) personnel. Based upon the review of existing data and the shallow target depth, the project team collected three Vertical Seismic Profiles (VSPs) and an experimental P-wave seismic reflection line. After preliminary analysis of the VSP data and the experimental reflection line data, it was decided to proceed with Task 5 and Task 6. Three high-resolution P-wave reflection profiles were collected with two objectives: (1) design the reflection survey to image a target depth of 20 feet below land surface to assist in determining the geologic controls on the DNAPL plume geometry, and (2) apply AVO analysis to the seismic data to locate the zone of high DNAPL concentration. Based upon the results of the data processing and interpretation of the seismic data, the project team was able to map the channel that is controlling the DNAPL plume

  4. Trajectory Based Behavior Analysis for User Verification

    NASA Astrophysics Data System (ADS)

    Pao, Hsing-Kuo; Lin, Hong-Yi; Chen, Kuan-Ta; Fadlil, Junaidillah

    Many of our activities on a computer require a verification step for authorized access. The goal of verification is to tell apart the true account owner from intruders. We propose a general approach for user verification based on user trajectory inputs. The approach is labor-free for users and is likely to avoid possible copying or simulation by other non-authorized users or even automatic programs such as bots. Our study focuses on finding the hidden patterns embedded in the trajectories produced by account users. We employ a Markov chain model with Gaussian distributions in its transitions to describe the behavior in the trajectory. To distinguish between two trajectories, we propose a novel dissimilarity measure combined with a manifold-learnt tuning for capturing the pairwise relationship. Based on the pairwise relationship, we can plug in any effective classification or clustering method for the detection of unauthorized access. The method can also be applied to the task of recognition, predicting the trajectory type without a pre-defined identity. Given a trajectory input, the results show that the proposed method can accurately verify the user identity, or suggest who owns the trajectory if the input identity is not provided.
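    As an illustration of the modeling idea, and not the paper's exact dissimilarity measure or manifold tuning, the sketch below fits a Gaussian model to a user's trajectory increments (a crude first-order Markov approximation) and scores new trajectories by average log-likelihood; all trajectories are synthetic.

```python
import numpy as np

def fit_transition_model(traj):
    """traj: (T, 2) array of cursor/avatar positions. Fits a Gaussian to
    the step vectors, i.e. a state-independent transition model."""
    steps = np.diff(traj, axis=0)
    mu = steps.mean(axis=0)
    cov = np.cov(steps.T) + 1e-6 * np.eye(2)   # regularised covariance
    return mu, cov

def avg_loglik(traj, mu, cov):
    """Average Gaussian log-likelihood of a trajectory's steps; higher
    means the trajectory looks more like the enrolled user's behavior."""
    steps = np.diff(traj, axis=0)
    inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
    d = steps - mu
    quad = np.einsum("ti,ij,tj->t", d, inv, d)
    return np.mean(-0.5 * (quad + logdet + 2 * np.log(2 * np.pi)))

rng = np.random.default_rng(2)
owner = np.cumsum(rng.normal([0.5, 0.1], 0.3, size=(500, 2)), axis=0)
intruder = np.cumsum(rng.normal([-0.2, 0.4], 0.8, size=(500, 2)), axis=0)
mu, cov = fit_transition_model(owner)
print(avg_loglik(owner, mu, cov), avg_loglik(intruder, mu, cov))
# the owner's own trajectory scores markedly higher
```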

  5. 28 CFR 74.6 - Location of eligible persons.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Location of eligible persons. 74.6 Section 74.6 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL LIBERTIES ACT REDRESS PROVISION Verification of Eligibility § 74.6 Location of eligible persons. The Office shall compare...

  6. 28 CFR 74.6 - Location of eligible persons.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Location of eligible persons. 74.6 Section 74.6 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL LIBERTIES ACT REDRESS PROVISION Verification of Eligibility § 74.6 Location of eligible persons. The Office shall compare...

  7. Online fingerprint verification.

    PubMed

    Upendra, K; Singh, S; Kumar, V; Verma, H K

    2007-01-01

    As organizations search for more secure authentication methods for user access, e-commerce, and other security applications, biometrics is gaining increasing attention. With an increasing emphasis on emerging automatic personal identification applications, fingerprint-based identification is becoming more popular. The most widely used fingerprint representation is the minutiae-based representation. The main drawback of this representation is that it does not utilize a significant component of the rich discriminatory information available in fingerprints: local ridge structures cannot be completely characterized by minutiae. It is also difficult to quickly match two fingerprint images containing different numbers of unregistered minutiae points. In this study a filter-bank-based representation, which eliminates these weaknesses, is implemented and the overall performance of the developed system is tested. The results have shown that this system can be used effectively for secure online verification applications. PMID:17365425
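    A rough sketch of the filter-bank idea: convolve the fingerprint with a small bank of oriented Gabor filters and keep per-cell response energies as a fixed-length feature vector. The kernel parameters and grid tessellation below are assumptions; the actual FingerCode-style sectoring and matching stage are not reproduced.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=17):
    """Oriented Gabor kernel tuned to ridge frequency `freq` (assumed)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def feature_vector(img, n_orient=8, cell=16):
    """Mean absolute Gabor response per grid cell, per orientation."""
    feats = []
    for k in range(n_orient):
        resp = np.abs(convolve2d(img, gabor_kernel(k * np.pi / n_orient),
                                 mode="same"))
        h, w = resp.shape
        for i in range(0, h - cell + 1, cell):
            for j in range(0, w - cell + 1, cell):
                feats.append(resp[i:i + cell, j:j + cell].mean())
    return np.asarray(feats)

rng = np.random.default_rng(5)
print(feature_vector(rng.random((64, 64))).shape)  # (8 orientations * 16 cells,)
```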

  8. Using multi-scale distribution and movement effects along a montane highway to identify optimal crossing locations for a large-bodied mammal community

    PubMed Central

    Römer, Heinrich; Germain, Ryan R.

    2013-01-01

    Roads are a major cause of habitat fragmentation that can negatively affect many mammal populations. Mitigation measures such as crossing structures are a proposed method to reduce the negative effects of roads on wildlife, but the best methods for determining where such structures should be implemented, and how their effects might differ between species in mammal communities, are largely unknown. We investigated the effects of a major highway through south-eastern British Columbia, Canada on several mammal species to determine how the highway may act as a barrier to animal movement, and how species may differ in their crossing-area preferences. We collected track data of eight mammal species across two winters, along both the highway and pre-marked transects, and used a multi-scale modeling approach to determine the scale at which habitat characteristics best predicted preferred crossing sites for each species. We found evidence for a severe barrier effect on all investigated species. Freely available remotely sensed habitat landscape data were better than more costly, manually digitized microhabitat maps in supporting models that identified preferred crossing sites; however, models using both types of data were better still. Further, in 6 of 8 cases models which incorporated multiple spatial scales were better at predicting preferred crossing sites than models utilizing any single scale. While each species differed in terms of the landscape variables associated with preferred/avoided crossing sites, we used a multi-model inference approach to identify locations along the highway where crossing structures may benefit all of the species considered. By specifically incorporating both highway and off-highway data and predictions we were able to show that landscape context plays an important role in maximizing the efficiency of mitigation measures. Our results further highlight the need for mitigation measures along major highways to improve connectivity between mammal

  9. Experiments for locating damaged truss members in a truss structure

    NASA Technical Reports Server (NTRS)

    Mcgowan, Paul E.; Smith, Suzanne W.; Javeed, Mehzad

    1991-01-01

    Locating damaged truss members in large space structures will involve a combination of sensing and diagnostic techniques. Methods developed for damage location require experimental verification prior to on-orbit applications. To this end, a series of experiments for locating damaged members using a generic, ten bay truss structure were conducted. A 'damaged' member is a member which has been removed entirely. Previously developed identification methods are used in conjunction with the experimental data to locate damage. Preliminary results to date are included, and indicate that mode selection and sensor location are important issues for location performance. A number of experimental data sets representing various damage configurations were compiled using the ten bay truss. The experimental data and the corresponding finite element analysis models are available to researchers for verification of various methods of structure identification and damage location.

  10. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples

    2001-05-01

    This semi-annual technical progress report covers Task 4 (site evaluation), Task 5 (seismic reflection design and acquisition), and Task 6 (seismic reflection processing and interpretation) on DOE contract number DE-AR26-98FT40369. The project had planned one additional deployment to a site other than the Savannah River Site (SRS) or DOE Hanford. During this reporting period the project had an ASME peer review. The findings and recommendations of the review panel, as well as the project team's responses to comments, are in Appendix A. After the SUBCON midyear review in Albuquerque, NM, and the peer review, it was decided that two additional deployments would be performed. The first deployment is to test the feasibility of using non-invasive seismic reflection and AVO analysis as a monitoring tool to assist in determining the effectiveness of Dynamic Underground Stripping (DUS) in the removal of DNAPL. Under the rescope of the project, Task 4 would be performed at the Charleston Navy Weapons Station, Charleston, SC, and not at the Dynamic Underground Stripping (DUS) project at SRS. The project team had already completed Task 4 at the M-area seepage basin, only a few hundred yards away from the DUS site. Because the geology is the same, Task 4 was not necessary; however, a Vertical Seismic Profile (VSP) was conducted in one well to calibrate the geology to the seismic data. The first deployment to the DUS site (Tasks 5 and 6) has been completed. Once the steam has been turned off, these tasks will be performed again to compare the results to the pre-steam data. The results from the first deployment to the DUS site indicated a seismic amplitude anomaly at the location and depths of the known high concentrations of DNAPL. The deployment to another site with different geologic conditions was supposed to occur during this reporting period. The first site selected was DOE Paducah, Kentucky. After almost eight months of negotiation, site access was denied, requiring the selection of another site

  11. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  12. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  13. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  14. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  15. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  16. Distributed computing

    SciTech Connect

    Chambers, F.B.; Duce, D.A.; Jones, G.P.

    1984-01-01

    CONTENTS: The Dataflow Approach: Fundamentals of dataflow. Architecture and performance. Assembler level programming. High level dataflow programming. Declarative systems: Functional programming. Logic programming and Prolog. The 'language first' approach. Towards a successor to von Neumann. Loosely-coupled systems: Architectures. Communications. Distributed filestores. Mechanisms for distributed control. Distributed operating systems. Programming languages. Closely-coupled systems: Architecture. Programming languages. Run-time support. Development aids. Cyba-M. Polyproc. Modeling and verification: Using algebra for concurrency. Reasoning about concurrent systems. Each chapter includes references. Index.

  17. 7 CFR 272.13 - Prisoner verification system (PVS).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 4 2014-01-01 2014-01-01 false Prisoner verification system (PVS). 272.13 Section 272.13 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM REQUIREMENTS FOR PARTICIPATING...

  18. 7 CFR 272.13 - Prisoner verification system (PVS).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 4 2013-01-01 2013-01-01 false Prisoner verification system (PVS). 272.13 Section 272.13 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM REQUIREMENTS FOR PARTICIPATING...

  19. Biometric verification with correlation filters.

    PubMed

    Vijaya Kumar, B V K; Savvides, Marios; Xie, Chunyan; Venkataramani, Krithika; Thornton, Jason; Mahalanobis, Abhijit

    2004-01-10

    Using biometrics for subject verification can significantly improve security over that of approaches based on passwords and personal identification numbers, both of which people tend to lose or forget. In biometric verification the system tries to match an input biometric (such as a fingerprint, face image, or iris image) to a stored biometric template. Thus correlation filter techniques are attractive candidates for the matching precision needed in biometric verification. In particular, advanced correlation filters, such as synthetic discriminant function filters, can offer very good matching performance in the presence of variability in these biometric images (e.g., facial expressions, illumination changes, etc.). We investigate the performance of advanced correlation filters for face, fingerprint, and iris biometric verification. PMID:14735958
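    A minimal sketch of the generic mechanism behind correlation-filter biometrics: correlate the input image against a stored template in the frequency domain and decide on the peak-to-sidelobe ratio (PSR) of the correlation plane. A real system would use a trained synthetic discriminant function or MACE filter; the single-template matched filter and the random images below are simplifying assumptions.

```python
import numpy as np

def correlate_2d(image, template):
    """Circular cross-correlation via FFT (filter = conjugate spectrum)."""
    F = np.fft.fft2(image)
    H = np.conj(np.fft.fft2(template, s=image.shape))
    return np.real(np.fft.ifft2(F * H))

def psr(corr, exclude=5):
    """Peak-to-sidelobe ratio: correlation peak versus the mean and std of
    the plane away from a small region around the peak."""
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    mask = np.ones(corr.shape, dtype=bool)
    mask[max(0, r - exclude):r + exclude + 1,
         max(0, c - exclude):c + exclude + 1] = False
    side = corr[mask]
    return (corr[r, c] - side.mean()) / side.std()

rng = np.random.default_rng(3)
enrolled = rng.random((64, 64))                                  # template
genuine = np.roll(enrolled, (3, -2), axis=(0, 1)) + 0.1 * rng.random((64, 64))
impostor = rng.random((64, 64))
# genuine attempt yields a sharp peak (high PSR); impostor does not
print(psr(correlate_2d(genuine, enrolled)), psr(correlate_2d(impostor, enrolled)))
```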

  20. Biometric verification with correlation filters

    NASA Astrophysics Data System (ADS)

    Vijaya Kumar, B. V. K.; Savvides, Marios; Xie, Chunyan; Venkataramani, Krithika; Thornton, Jason; Mahalanobis, Abhijit

    2004-01-01

    Using biometrics for subject verification can significantly improve security over that of approaches based on passwords and personal identification numbers, both of which people tend to lose or forget. In biometric verification the system tries to match an input biometric (such as a fingerprint, face image, or iris image) to a stored biometric template. Thus correlation filter techniques are attractive candidates for the matching precision needed in biometric verification. In particular, advanced correlation filters, such as synthetic discriminant function filters, can offer very good matching performance in the presence of variability in these biometric images (e.g., facial expressions, illumination changes, etc.). We investigate the performance of advanced correlation filters for face, fingerprint, and iris biometric verification.

  1. Verification and Validation for Flight-Critical Systems (VVFCS)

    NASA Technical Reports Server (NTRS)

    Graves, Sharon S.; Jacobsen, Robert A.

    2010-01-01

    On March 31, 2009, a Request for Information (RFI) was issued by NASA's Aviation Safety Program to gather input on the subject of Verification and Validation (V&V) of Flight-Critical Systems. The responses were provided to NASA on or before April 24, 2009. The RFI asked for comments in three topic areas: Modeling and Validation of New Concepts for Vehicles and Operations; Verification of Complex Integrated and Distributed Systems; and Software Safety Assurance. There were a total of 34 responses to the RFI, representing a cross-section of academia (26%), small and large industry (47%), and government agencies (27%).

  2. Evaluation of 3D pre-treatment verification for volumetric modulated arc therapy plan in head region

    NASA Astrophysics Data System (ADS)

    Ruangchan, S.; Oonsiri, S.; Suriyapee, S.

    2016-03-01

    The development of pre-treatment QA tools contributes to three-dimensional (3D) dose verification by combining calculation software with a measured planar dose distribution. This research aims to evaluate the Sun Nuclear 3DVH software against thermoluminescent dosimeter (TLD) measurements. Two VMAT patient plans (2.5 arcs) using 6 MV photons, with different PTV locations, were transferred to Rando phantom images. The PTV of the first plan was located in a homogeneous area, whereas that of the second plan was not. For the treatment planning process, the Rando phantom images were employed in optimization and calculation, with contouring of the PTV, brain stem, lens, and TLD positions. Verification plans were created, transferred to the ArcCHECK for measurement, and the 3D dose was calculated using the 3DVH software. The percent dose differences in both the PTV and organs at risk (OAR) between TLD and the 3DVH software ranged from -2.09 to 3.87% for the first plan and from -1.39 to 6.88% for the second plan. The mean percent dose differences for the PTV were 1.62% and 3.93% for the first and second plans, respectively. In conclusion, the 3DVH software results show good agreement with TLD when the tumor is located in a homogeneous area.

  3. Numerical modelling and verification of Polish ventricular assist device.

    PubMed

    Milenin, Andrzej; Kopernik, Magdalena; Jurkojć, Dorota; Gawlikowski, Maciej; Rusin, Tomasz; Darłak, Maciej; Kustosz, Roman

    2012-01-01

    A multiscale model of the blood chamber of the POLVAD (Polish ventricular assist device) is introduced. A tension test of the polymer and digital image correlation (DIC) were performed to verify the strains and displacements obtained in the numerical model of the POLVAD_EXT. Numerical simulations were carried out under the conditions of the experiment to allow comparison with the results obtained on the external surfaces of the POLVAD_EXT blood chamber. The polymer applied in the POLVADs is sensitive to changes in temperature, and this observation is accounted for in all of the numerical models. The comparison of experimental and numerical results shows acceptable agreement, although the experimental strain distributions are somewhat heterogeneous relative to the computed parameters. A numerical comparison of the two versions of the blood chamber (POLVAD and POLVAD_EXT) shows that the POLVAD_EXT construction is better with respect to the analysis of strain and stress. The maximum values of the computed parameters are located in the regions between connectors on the internal surfaces of the blood chambers of the POLVAD. PMID:23140381

  4. Expose : procedure and results of the joint experiment verification tests

    NASA Astrophysics Data System (ADS)

    Panitz, C.; Rettberg, P.; Horneck, G.; Rabbow, E.; Baglioni, P.

    The International Space Station will carry the EXPOSE facility, accommodated at the universal workplace URM-D located outside the Russian Service Module. The launch will take place in 2005, and the facility is planned to stay in space for 1.5 years. The tray-like structure will accommodate 2 chemical and 6 biological PI experiments or experiment systems of the ROSE (Response of Organisms to Space Environment) consortium. EXPOSE will support long-term in situ studies of microbes in artificial meteorites, as well as of microbial communities from special ecological niches, such as endolithic and evaporitic ecosystems. The experiment pockets, either vented or sealed, will be covered by an optical filter system to control the intensity and spectral range of solar UV irradiation. Control of sun exposure will be achieved by the use of individual shutters. To test the compatibility of the different biological systems and their adaptation to the opportunities and constraints of space conditions, a profound ground support program has been developed. The procedure and first results of these joint Experiment Verification Tests (EVTs) will be presented. The results will be essential for the success of the EXPOSE mission and were obtained in parallel with the development and construction of the final hardware design of the facility. The results of the mission will contribute to the understanding of organic chemistry processes in space, biological adaptation strategies to extreme conditions, e.g., on the early Earth and Mars, and the distribution of life beyond its planet of origin.

  5. Thoughts on Verification of Nuclear Disarmament

    SciTech Connect

    Dunlop, W H

    2007-09-26

    perfect and it was expected that occasionally there might be a verification measurement that was slightly above 150 kt, but the accuracy was much improved over the earlier seismic measurements. In fact, some of this improvement came about because, as part of this verification protocol, the US and Soviet Union provided the yields of several past tests to improve seismic calibrations. This helped provide a much-needed calibration for the seismic measurements. It was also accepted that, since nuclear tests were largely R&D related, occasionally there might be a test slightly above 150 kt, as the yield could not always be predicted with high accuracy in advance of the test. While one could hypothesize that the Soviets could conduct a test at some location other than their test sites, if it were even a small fraction of 150 kt it would clearly be observed and would be a violation of the treaty. So the issue of clandestine tests of significance was easily covered for this particular treaty.

  6. Organics Verification Study for Sinclair and Dyes Inlets, Washington

    SciTech Connect

    Kohn, Nancy P.; Brandenberger, Jill M.; Niewolny, Laurie A.; Johnston, Robert K.

    2006-09-28

    Sinclair and Dyes Inlets near Bremerton, Washington, are on the State of Washington 1998 303(d) list of impaired waters because of fecal coliform contamination in marine water, metals in sediment and fish tissue, and organics in sediment and fish tissue. Because significant cleanup and source control activities have been conducted in the inlets since the data supporting the 1998 303(d) listings were collected, two verification studies were performed to address the 303(d) segments that were listed for metal and organic contaminants in marine sediment. The Metals Verification Study (MVS) was conducted in 2003; the final report, Metals Verification Study for Sinclair and Dyes Inlets, Washington, was published in March 2004 (Kohn et al. 2004). This report describes the Organics Verification Study that was conducted in 2005. The study approach was similar to the MVS in that many surface sediment samples were screened for the major classes of organic contaminants, and then the screening results and other available data were used to select a subset of samples for quantitative chemical analysis. Because the MVS was designed to obtain representative data on concentrations of contaminants in surface sediment throughout Sinclair Inlet, Dyes Inlet, Port Orchard Passage, and Rich Passage, aliquots of the 160 MVS sediment samples were used in the analysis for the Organics Verification Study. However, unlike metals screening methods, organics screening methods are not specific to individual organic compounds, and are not available for some target organics. Therefore, only the quantitative analytical results were used in the organics verification evaluation. The results of the Organics Verification Study showed that sediment quality outside of Sinclair Inlet is unlikely to be impaired because of organic contaminants. Similar to the results for metals, in Sinclair Inlet, the distribution of residual organic contaminants is generally limited to nearshore areas already within the

  7. LOCATING MONITORING STATIONS IN WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Water undergoes changes in quality between the time it leaves the treatment plant and the time it reaches the customer's tap, making it important to select monitoring stations that will adequately monitor these changes. But because there is no uniform schedule or framework for ...

  8. Transfer function verification and block diagram simplification of a very high-order distributed pole closed-loop servo by means of non-linear time-response simulation

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1975-01-01

    Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.

  9. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (SNOW17) and rainfall-runoff model (SAC-SMA) in snow-dominated regions of the US. Errors arise at various steps of the forecasting system, from input data, model structure, model parameters, and initial states. The goal of the current study is to undertake verification of potential improvements in the SNOW17/SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (RFC; DREAM, Differential Evolution Adaptive Metropolis) as well as a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB), located on the western side of the Sierra Nevada mountains in northern California. Hindcasts are verified using both deterministic (Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes a comparison of the performance of the different optimized parameters and the DA framework, as well as an assessment of the impact associated with the initial conditions used for streamflow forecasts for the NFARB.
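    For concreteness, here is a small sketch of two of the deterministic statistics named above, Nash-Sutcliffe efficiency and root mean square error, applied to a hindcast/observation pair; the flow values are synthetic placeholders, not NFARB data.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / variance about the observed mean; 1 is perfect,
    values below 0 are worse than forecasting climatology."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error, in the units of the flows."""
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

obs = np.array([120., 150., 210., 180., 160.])   # observed flows (invented)
sim = np.array([110., 160., 190., 185., 150.])   # hindcast flows (invented)
print(nash_sutcliffe(obs, sim), rmse(obs, sim))
```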

  10. Hard and Soft Safety Verifications

    NASA Technical Reports Server (NTRS)

    Wetherholt, Jon; Anderson, Brenda

    2012-01-01

    The purpose of this paper is to examine the differences between, and the effects of, hard and soft safety verifications. Initially, the terminology should be defined and clarified. A hard safety verification is a datum which demonstrates how a safety control is enacted. An example of this is relief valve testing. A soft safety verification is something usually described as nice to have, but not necessary to prove safe operation. An example of a soft verification is the loss of the Solid Rocket Booster (SRB) casings from Shuttle flight STS-4. When the main parachutes failed, the casings impacted the water and sank. In the nose cap of the SRBs, video cameras recorded the release of the parachutes to determine safe operation and to provide information for potential anomaly resolution. Generally, examination of the casings and nozzles contributed to understanding of the newly developed boosters and their operation. Safety verification of SRB operation was demonstrated by examination of the casings and nozzles for erosion or wear. Loss of the SRBs and the associated data did not delay the launch of the next Shuttle flight.

  11. CTBT integrated verification system evaluation model supplement

    SciTech Connect

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, 'top-level' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.
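    A hedged sketch of the kind of top-level integration such a model performs: if each technology subsystem independently detects an event with probability p_i, the integrated system misses only when every subsystem misses. The subsystem probabilities below are invented, not IVSEM outputs, and real subsystems are not fully independent.

```python
# Toy integration of per-technology detection probabilities under an
# independence assumption: P(detect) = 1 - prod(1 - p_i).

def integrated_p_detect(subsystem_probs):
    miss = 1.0
    for p in subsystem_probs:
        miss *= (1.0 - p)   # event missed only if every subsystem misses
    return 1.0 - miss

# Invented per-technology detection probabilities for one scenario
p = {"seismic": 0.80, "infrasound": 0.40, "radionuclide": 0.55, "hydroacoustic": 0.10}
print(integrated_p_detect(p.values()))  # -> ~0.95
```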

  12. Structural verification for GAS experiments

    NASA Technical Reports Server (NTRS)

    Peden, Mark Daniel

    1992-01-01

    The purpose of this paper is to assist the Get Away Special (GAS) experimenter in conducting a thorough structural verification of its experiment structural configuration, thus expediting the structural review/approval process and the safety process in general. Material selection for structural subsystems will be covered with an emphasis on fasteners (GSFC fastener integrity requirements) and primary support structures (Stress Corrosion Cracking requirements and National Space Transportation System (NSTS) requirements). Different approaches to structural verifications (tests and analyses) will be outlined especially those stemming from lessons learned on load and fundamental frequency verification. In addition, fracture control will be covered for those payloads that utilize a door assembly or modify the containment provided by the standard GAS Experiment Mounting Plate (EMP). Structural hazard assessment and the preparation of structural hazard reports will be reviewed to form a summation of structural safety issues for inclusion in the safety data package.

  13. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL, use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations, and they must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations by small increments in time and/or space. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready for operational use, and outline the steps needed to implement any operationally ready techniques in the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenologically based mesoscale model verification techniques and found 10 different techniques in various stages of development. Six of the techniques were developed to verify precipitation forecasts, one

  14. Space Telescope performance and verification

    NASA Technical Reports Server (NTRS)

    Wright, W. F.

    1980-01-01

    The verification philosophy for the Space Telescope (ST) has evolved from years of experience with multispacecraft programs modified by the new factors introduced by the Space Transportation System. At the systems level of test, the ST will undergo joint qualification/acceptance tests with environment simulation using Lockheed's large spacecraft test facilities. These tests continue the process of detecting workmanship defects and module interface incompatibilities. The test program culminates in an 'all up' ST environmental test verification program resulting in a 'ready to launch' ST.

  15. Formal verification of mathematical software

    NASA Technical Reports Server (NTRS)

    Sutherland, D.

    1984-01-01

    Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.

  16. Retrospective analysis of 2D patient-specific IMRT verifications

    SciTech Connect

    Childress, Nathan L.; White, R. Allen; Bloch, Charles; Salehpour, Mohammad; Dong, Lei; Rosen, Isaac I.

    2005-04-01

    We performed 858 two-dimensional (2D) patient-specific intensity modulated radiotherapy verifications over a period of 18 months. Multifield, composite treatment plans were measured in phantom using calibrated Kodak EDR2 film and compared with the calculated dose extracted from two treatment planning systems. This research summarizes our findings using the normalized agreement test (NAT) index and the percent of pixels failing the gamma index as metrics to represent the agreement between measured and computed dose distributions. An in-house dose comparison software package was used to register and compare all verifications. We found it was important to use an automatic positioning algorithm to achieve maximum registration accuracy, and that our automatic algorithm agreed well with anticipated results from known phantom geometries. We also measured absolute dose for each case using an ion chamber. Because the computed distributions agreed with ion chamber measurements better than the EDR2 film doses, we normalized EDR2 data to the computed distributions. The distributions of both the NAT indices and the percentage of pixels failing the gamma index were found to be exponential distributions. We continue to use both the NAT index and percent of pixels failing gamma with 5%/3 mm criteria to evaluate future verifications, as these two metrics were found to be complementary. Our data showed that using 2%/2 mm or 3%/3 mm criteria produces results similar to those using 5%/3 mm criteria. Normalized comparisons that have a NAT index greater than 45 and/or more than 20% of the pixels failing gamma for 5%/3 mm criteria represent outliers from our clinical data set and require further analysis. Because our QA verification results were exponentially distributed, rather than a tight grouping of similar results, we continue to perform patient-specific QA in order to identify and correct outliers in our verifications. The data from this work could be useful as a reference for
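    For readers unfamiliar with the gamma test referenced above, here is a minimal one-dimensional sketch under the 5%/3 mm criteria; real film-versus-plan comparisons search a two-dimensional neighbourhood, and the dose profiles and grid spacing below are illustrative assumptions, not the study's data.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, dx_mm, dose_crit=0.05, dist_crit_mm=3.0):
    """Per-point gamma index: min over evaluated positions of
    sqrt((dr/dist_crit)^2 + (dD/dose_crit)^2); a point passes if gamma <= 1."""
    x = np.arange(len(dose_ref)) * dx_mm
    d_norm = dose_crit * dose_ref.max()          # global (5% of max) normalisation
    gam = np.empty(len(dose_ref))
    for i, (xi, Di) in enumerate(zip(x, dose_ref)):
        dr = (x - xi) / dist_crit_mm             # distance-to-agreement term
        dD = (dose_eval - Di) / d_norm           # dose-difference term
        gam[i] = np.sqrt(dr ** 2 + dD ** 2).min()
    return gam

# Synthetic profiles: calculated dose scaled 3% and shifted 2 mm vs measured
grid = np.arange(100)
ref = np.exp(-0.5 * ((grid - 50) / 12.0) ** 2)
ev = 1.03 * np.exp(-0.5 * ((grid - 52) / 12.0) ** 2)
g = gamma_1d(ref, ev, dx_mm=1.0)
print(f"{np.mean(g <= 1.0):.1%} of points pass 5%/3 mm")
```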

  17. A verification system of RMAP protocol controller

    NASA Astrophysics Data System (ADS)

    Khanov, V. Kh; Shakhmatov, A. V.; Chekmarev, S. A.

    2015-01-01

    The functional verification problem for IP blocks of an RMAP protocol controller is considered. The application of a verification method using fully-functional models of the processor and the internal bus of a system-on-chip is justified. Principles for the construction of a verification system based on the given approach are proposed. Practical results from creating a verification system for an RMAP protocol controller IP block are presented.

  18. Toward Regional Fossil Fuel CO2 Emissions Verification Using WRF-CHEM

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Kosović, B.; Cameron-Smith, P.; Bergmann, D.; Grant, K.; Guilderson, T.

    2008-12-01

    As efforts to reduce emissions of greenhouse gases take shape, it is becoming obvious that an essential component of a viable solution will involve emission verification. While detailed inventories of greenhouse gas sources will represent an important component of the solution, additional verification methodologies will be necessary to reduce uncertainties in emission estimates, especially for distributed sources and CO2 offsets. We developed tools for solving the inverse dispersion problem for distributed emissions of greenhouse gases. For that purpose we combine a probabilistic inverse methodology based on Bayesian inversion with stochastic sampling and the weather forecasting and air quality model WRF-CHEM. We demonstrate estimation of CO2 emissions associated with fossil fuel burning in California over two one-week periods in 2006. We use WRF-CHEM in tracer simulation mode to solve the forward dispersion problem for emissions over eleven air basins. We first use a direct inversion approach to determine optimal locations for a limited number of CO2-C14 isotope sensors. We then use Bayesian inference with stochastic sampling to determine probability distributions for emissions from California air basins. Moreover, we vary the number of sensors and the frequency of measurements to study their effect on the accuracy and uncertainty level of the emission estimates. Finally, to take into account uncertainties associated with forward modeling, we combine Bayesian inference and stochastic sampling with ensemble modeling. The ensemble is created by running WRF-CHEM with different initial and boundary conditions as well as different boundary layer and surface model options. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344 (LLNL-ABS-406901-DRAFT). The project 07-ERD-064 was funded by the Laboratory Directed Research and Development Program at LLNL.
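    A toy version of the Bayesian inversion with stochastic sampling described above can be sketched as follows; the source-receptor matrix, noise level, and priors are stand-ins, not the WRF-CHEM/LLNL configuration.

      # Toy Bayesian inversion for basin emissions: y = H @ e + noise, where H
      # is a source-receptor matrix (from forward tracer runs) and e the basin
      # emissions. All numbers are illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      n_basins, n_obs = 11, 40
      H = rng.uniform(0.0, 1.0, (n_obs, n_basins))   # stand-in footprints
      e_true = rng.uniform(1.0, 5.0, n_basins)
      y = H @ e_true + rng.normal(0.0, 0.1, n_obs)   # synthetic sensor data

      def log_post(e, sigma=0.1):
          if np.any(e < 0):                          # nonnegative-emission prior
              return -np.inf
          resid = y - H @ e
          return -0.5 * np.sum(resid**2) / sigma**2

      e = np.ones(n_basins)
      samples = []
      for step in range(20000):                      # random-walk Metropolis
          prop = e + rng.normal(0.0, 0.05, n_basins)
          if np.log(rng.random()) < log_post(prop) - log_post(e):
              e = prop
          if step > 5000:                            # discard burn-in
              samples.append(e.copy())
      post = np.array(samples)
      print(post.mean(axis=0))                       # posterior mean emissions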

  19. 40 CFR 141.605 - Subpart V compliance monitoring location recommendations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Initial Distribution... locations. You should distribute locations throughout the distribution system to the extent possible. Source water type Population size category Monitoring frequency 1 Distribution system monitoring location...

  20. 40 CFR 141.605 - Subpart V compliance monitoring location recommendations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Initial Distribution... locations. You should distribute locations throughout the distribution system to the extent possible. Source water type Population size category Monitoring frequency 1 Distribution system monitoring location...

  1. Working Memory Mechanism in Proportional Quantifier Verification

    ERIC Educational Resources Information Center

    Zajenkowski, Marcin; Szymanik, Jakub; Garraffa, Maria

    2014-01-01

    The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g. "More than half of the dots are blue"). The first study shows that the verification of proportional sentences is more demanding than the verification of sentences such as: "There are seven blue and eight yellow…

  2. 78 FR 58492 - Generator Verification Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-24

    ... Energy Regulatory Commission 18 CFR Part 40 Generator Verification Reliability Standards AGENCY: Federal... Organization: MOD-025-2 (Verification and Data Reporting of Generator Real and Reactive Power Capability and Synchronous Condenser Reactive Power Capability), MOD-026-1 (Verification of Models and Data for...

  3. [Rare location of arachnoid cysts. Extratemporal cysts].

    PubMed

    Martinez-Perez, Rafael; Hinojosa, José; Pascual, Beatriz; Panaderos, Teresa; Welter, Diego; Muñoz, María J

    2016-01-01

    The therapeutic management of arachnoid cysts depends largely on their location. Almost 50% of arachnoid cysts are located in the temporal fossa-Sylvian fissure, whereas the other half are distributed across different, sometimes exceptional, locations. Under the name of infrequent-location arachnoid cysts, a description is presented of those that are composed of 2 sheets of arachnoid membrane, are not located in the temporal fossa, and are primary or congenital. PMID:26725189

  4. Chip connectivity verification program

    NASA Technical Reports Server (NTRS)

    Riley, Josh (Inventor); Patterson, George (Inventor)

    1999-01-01

    A method for testing electrical connectivity between conductive structures on a chip that is preferably layered with conductive and nonconductive layers. The method includes determining the layer on which each structure is located and defining the perimeter of each structure. Conductive layer connections between each of the layers are determined, and, for each structure, the points of intersection between the perimeter of that structure and the perimeter of each other structure on the chip are also determined. Finally, electrical connections between the structures are determined using the points of intersection and the conductive layer connections.
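    One way to realize the final step, assuming a hypothetical data model in which perimeter intersections and inter-layer connections are lists of structure-id pairs, is a union-find pass; this is an illustrative sketch, not the patented method's implementation.

      # Hypothetical data model: structures are node ids; 'intersections' pairs
      # structures whose perimeters touch on the same layer; 'via_connections'
      # pairs structures joined through conductive inter-layer connections.
      parent = {}

      def find(a):
          parent.setdefault(a, a)
          while parent[a] != a:
              parent[a] = parent[parent[a]]   # path halving
              a = parent[a]
          return a

      def union(a, b):
          parent[find(a)] = find(b)

      intersections = [("net1_m1", "net1_via"), ("net2_m1", "net2_m2")]
      via_connections = [("net1_via", "net1_m2")]
      for a, b in intersections + via_connections:
          union(a, b)

      def connected(a, b):
          return find(a) == find(b)

      print(connected("net1_m1", "net1_m2"))  # True
      print(connected("net1_m1", "net2_m2"))  # False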

  5. METHOD OF LOCATING GROUNDS

    DOEpatents

    Macleish, K.G.

    1958-02-11

    This patent presents a method for locating a ground in a d-c circuit having a number of parallel branches connected across a d-c source or generator. The complete method comprises the steps of locating the ground with reference to the midpoint of the parallel branches by connecting a potentiometer across the terminals of the circuit and connecting the slider of the potentiometer to ground through a current indicating instrument, adjusting the slider to right or left of the midpoint so as to cause the instrument to indicate zero, connecting the terminal of the network which is farthest from the ground as thus indicated by the potentiometer to ground through a condenser, impressing a ripple voltage on the circuit, and then measuring the ripple voltage at the midpoint of each parallel branch to find the branch with the lowest value of ripple voltage, and then measuring the distribution of the ripple voltage along this branch to determine the point at which the ripple voltage drops off to zero or substantially zero due to the existence of a ground. The invention has particular application where a circuit ground is present which will disappear if the normal circuit voltage is removed.

  6. Verification Challenges at Low Numbers

    SciTech Connect

    Benz, Jacob M.; Booker, Paul M.; McDonald, Benjamin S.

    2013-07-16

    This paper will explore the difficulties of deep reductions by examining the technical verification challenges. At each step on the road to low numbers, the verification required to ensure compliance of all parties will increase significantly. Looking past New START, the next step will likely include warhead limits in the neighborhood of 1000 (Pifer 2010). Further reductions will include stepping stones at 100's of warheads, and then 10's of warheads, before final elimination of the last few remaining warheads and weapons could be considered. This paper will focus on these three threshold reduction levels: 1000, 100's, and 10's. For each, the issues and challenges will be discussed, potential solutions will be identified, and the verification technologies and chain of custody measures that address these solutions will be surveyed. It is important to note that many of the issues that need to be addressed have no current solution. In these cases, the paper will explore new or novel technologies that could be applied. These technologies will draw from the research and development that is ongoing throughout the national lab complex, and will look at technologies utilized in other areas of industry for their application to arms control verification.

  7. Visual Attention During Sentence Verification.

    ERIC Educational Resources Information Center

    Lucas, Peter A.

    Eye movement data were collected for 28 college students reading 32 sentences with sentence verification questions. The factors observed were target sentence voice (active/passive), probe voice, and correct response (true/false). Pairs of subjects received the same set of stimuli, but with agents and objects in the sentences reversed. As expected,…

  8. Improved method for coliform verification.

    PubMed

    Diehl, J D

    1991-02-01

    Modification of a method for coliform verification presented in Standard Methods for the Examination of Water and Wastewater is described. Modification of the method, which is based on beta-galactosidase production, involves incorporation of a lactose operon inducer in medium upon which presumptive coliform isolates are cultured prior to beta-galactosidase assay. PMID:1901712

  9. Improved method for coliform verification.

    PubMed Central

    Diehl, J D

    1991-01-01

    Modification of a method for coliform verification presented in Standard Methods for the Examination of Water and Wastewater is described. Modification of the method, which is based on beta-galactosidase production, involves incorporation of a lactose operon inducer in medium upon which presumptive coliform isolates are cultured prior to beta-galactosidase assay. PMID:1901712

  10. A scheme for symmetrization verification

    NASA Astrophysics Data System (ADS)

    Sancho, Pedro

    2011-08-01

    We propose a scheme for symmetrization verification in two-particle systems, based on one-particle detection and state determination. In contrast to previous proposals, it does not follow a Hong-Ou-Mandel-type approach. Moreover, the technique can be used to generate superposition states of single particles.

  11. VERIFICATION OF WATER QUALITY MODELS

    EPA Science Inventory

    The basic concepts of water quality models are reviewed and the need to recognize calibration and verification of models with observed data is stressed. Post auditing of models after environmental control procedures are implemented is necessary to determine true model prediction ...

  12. Verification and Uncertainty Reduction of Amchitka Underground Nuclear Testing Models

    SciTech Connect

    Ahmed Hassan; Jenny Chapman

    2006-02-01

    The models of the Amchitka underground nuclear tests developed in 2002 are verified, and uncertainty in model input parameters, as well as predictions, has been reduced using newly collected data obtained by the summer 2004 field expedition of CRESP. Newly collected data that pertain to the groundwater model include magnetotelluric (MT) surveys conducted on the island to determine the salinity and porosity structure of the subsurface, and bathymetric surveys to determine the bathymetric maps of the areas offshore from the Long Shot and Cannikin Sites. Analysis and interpretation of the MT data yielded information on the location of the transition zone, and porosity profiles showing porosity values decaying with depth. These new data sets are used to verify the original model in terms of model parameters, model structure, and model output. In addition, by using the new data along with the existing data (chemistry and head data), the uncertainty in model input and output is decreased by conditioning on all the available data. A Markov Chain Monte Carlo (MCMC) approach is adapted for developing new input parameter distributions conditioned on prior knowledge and new data. The MCMC approach is a form of Bayesian conditioning that is constructed in such a way that it produces samples of the model parameters that eventually converge to a stationary posterior distribution. The Bayesian MCMC approach enhances probabilistic assessment. Instead of simply propagating uncertainty forward from input parameters into model predictions (i.e., the traditional Monte Carlo approach), MCMC propagates uncertainty backward from data onto parameters, and then forward from parameters into predictions. Comparisons between the new data and the original model, and conditioning on all available data using the MCMC method, yield the following results and conclusions: (1) Model structure is verified at Long Shot and Cannikin where the high-resolution bathymetric data collected by CRESP

  13. Verification of Loop Diagnostics

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Lionello, R.; Mok, Y.; Linker, J.; Mikic, Z.

    2014-01-01

    Many different techniques have been used to characterize the plasma in the solar corona: density-sensitive spectral line ratios are used to infer the density, the evolution of coronal structures in different passbands is used to infer the temperature evolution, and the simultaneous intensities measured in multiple passbands are used to determine the emission measure. All these analysis techniques assume that the intensity of the structures can be isolated through background subtraction. In this paper, we use simulated observations from a 3D hydrodynamic simulation of a coronal active region to verify these diagnostics. The density and temperature from the simulation are used to generate images in several passbands and spectral lines. We identify loop structures in the simulated images and calculate the loop background. We then determine the density, temperature and emission measure distribution as a function of time from the observations and compare with the true temperature and density of the loop. We find that the overall characteristics of the temperature, density, and emission measure are recovered by the analysis methods, but the details of the true temperature and density are not. For instance, the emission measure curves calculated from the simulated observations are much broader than the true emission measure distribution, though the average temperature evolution is similar. These differences are due, in part, to inadequate background subtraction, but also indicate a limitation of the analysis methods.

  14. Local ties verification based on analysis of long-term SLR and GPS solutions

    NASA Astrophysics Data System (ADS)

    Szafranek, Karolina; Schillak, Stanislaw; Araszkiewicz, Andrzej; Figurski, Mariusz; Lehmann, Marek; Lejba, Pawel

    2013-04-01

    The ITRF is determined on the basis of long-term observations by the following four techniques: GNSS, SLR, DORIS and VLBI. Analysis of the data delivered by the different techniques provides a stable reference frame. Improvement of further ITRS realizations requires advancement in the geometry of the co-location network and an increase in agreement between the local ties at co-location sites, while many significant disagreements between techniques are still being noticed. Ground local tie measurements are usually made once over a period of time, whereas different factors can change their real value (earthquakes, changes of GNSS antennas, or different methods of their calibration, which result in different coordinate values in ITRF); this leads to the idea of monitoring co-location vectors for the purpose of their verification. The authors analyzed solutions from several globally distributed SLR-GNSS sites. The data gathered between 1996-2011 by these two techniques were processed using coherent strategies (the same models and parameters were used). Results of coordinate and velocity determinations, with exemplary time series, have been presented previously. The goal of this presentation is to show the results of an analysis of NEU time series of SLR and GPS solutions, reduced to the SLR marker positions using local ties, in order to verify their values. Agreement of both types of solutions proves the good quality and timeliness of the ground measurements, while high discrepancies point out that there is a need for their repetition to improve the next ITRS realizations (e.g. ITRF2013).

  15. Verification of the karst flow model under laboratory controlled conditions

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Andric, Ivo; Malenica, Luka; Srzic, Veljko

    2016-04-01

    Karst aquifers are very important groundwater resources around the world, as well as in the coastal part of Croatia. They consist of an extremely complex structure defined by a slow, laminar porous medium with small fissures and usually fast, turbulent conduits/karst channels. Apart from simple lumped hydrological models that ignore the high karst heterogeneity, full hydraulic (distributed) models have been developed exclusively with conventional finite element and finite volume methods, considering the complete karst heterogeneity structure, which improves our understanding of the complex processes in karst. Groundwater flow modeling in complex karst aquifers is faced with many difficulties, such as a lack of heterogeneity knowledge (especially for conduits), resolution of different spatial/temporal scales, connectivity between matrix and conduits, setting of appropriate boundary conditions, and many others. A particular problem of karst flow modeling is the verification of distributed models under real aquifer conditions, due to the lack of the above-mentioned information. Therefore, we show here the possibility of verifying karst flow models under laboratory controlled conditions. The special 3-D karst flow model (5.6*2.6*2 m) consists of a concrete construction, a rainfall platform, 74 piezometers, 2 reservoirs and other supply equipment. The model is filled with fine sand (3-D porous matrix) and drainage plastic pipes (1-D conduits). This model provides knowledge of the full heterogeneity structure, including the position of the different sand layers as well as conduit locations and geometry. Moreover, we know the geometry of the conduit perforations, which enables analysis of the interaction between matrix and conduits. In addition, the pressure and precipitation distributions and the discharge flow rates from both phases can be measured very accurately. These possibilities are not available at real sites, which makes this model much more useful for karst flow modeling. Many experiments were performed under different controlled conditions such as different

  16. Verification of a numerical simulation technique for natural convection

    SciTech Connect

    Gadgil, A.; Bauman, F.; Altmayer, E.; Kammerud, R.C.

    1983-03-01

    The present paper describes a verification of CONVEC2 for single-zone geometries by comparison with the results of two natural convection experiments performed in small-scale rectangular enclosures. These experiments were selected because of the high Rayleigh numbers obtained and the small heat loss through the insulated surfaces. Comparisons are presented for (1) heat transfer rates, (2) fluid temperature profiles, and (3) surface heat flux distributions.

  17. Location, Location, Location: Development of Spatiotemporal Sequence Learning in Infancy

    ERIC Educational Resources Information Center

    Kirkham, Natasha Z.; Slemmer, Jonathan A.; Richardson, Daniel C.; Johnson, Scott P.

    2007-01-01

    We investigated infants' sensitivity to spatiotemporal structure. In Experiment 1, circles appeared in a statistically defined spatial pattern. At test 11-month-olds, but not 8-month-olds, looked longer at a novel spatial sequence. Experiment 2 presented different color/shape stimuli, but only the location sequence was violated during test;…

  18. Bayesian Estimation of Combined Accuracy for Tests with Verification Bias

    PubMed Central

    Broemeling, Lyle D.

    2011-01-01

    This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects are not verified by the gold standard. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. The accuracy of two combined binary tests is estimated employing either a “believe the positive” or “believe the negative” rule; the true and false positive fractions for each rule are then computed for the two tests. In order to perform the analysis, the missing at random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of combined test accuracy for ordinal tests. PMID:26859487
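    For the binary case, the "believe the positive" and "believe the negative" combination rules reduce to simple algebra if one additionally assumes the two tests are conditionally independent given disease status; that assumption is made here only for illustration, and the Bayesian machinery and verification-bias correction are omitted.

      # 'Believe the positive' (BP): combined test is positive if either test
      # is positive. 'Believe the negative' (BN): positive only if both are.
      # Assumes conditional independence of the tests given disease status.
      def combine(tpf1, fpf1, tpf2, fpf2):
          bp_tpf = 1 - (1 - tpf1) * (1 - tpf2)   # sensitivity under BP
          bp_fpf = 1 - (1 - fpf1) * (1 - fpf2)   # false positive fraction under BP
          bn_tpf = tpf1 * tpf2                   # sensitivity under BN
          bn_fpf = fpf1 * fpf2                   # false positive fraction under BN
          return {"BP": (bp_tpf, bp_fpf), "BN": (bn_tpf, bn_fpf)}

      # illustrative accuracies only, not the CT/MRI estimates from the study
      print(combine(0.85, 0.10, 0.80, 0.08))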

  19. Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd L.

    1995-01-01

    This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP refutes this. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently, which has allowed the verification to maintain a high fidelity among the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives into the product development.

  20. Point Source Location Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Cox, J. Allen

    1986-11-01

    This paper presents the results of an analysis of point source location accuracy and sensitivity as a function of focal plane geometry, optical blur spot, and location algorithm. Five specific blur spots are treated: gaussian, diffraction-limited circular aperture with and without central obscuration (obscured and clear bessinc, respectively), diffraction-limited rectangular aperture, and a pill box distribution. For each blur spot, location accuracies are calculated for square, rectangular, and hexagonal detector shapes of equal area. The rectangular detectors are arranged on a hexagonal lattice. The two location algorithms consist of standard and generalized centroid techniques. Hexagonal detector arrays are shown to give the best performance under a wide range of conditions.
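    The standard centroid technique mentioned above amounts to an intensity-weighted mean over detector outputs; a minimal sketch on a square-pixel grid follows (the generalized centroid weighting and the hexagonal geometries studied in the paper are not reproduced).

      # Standard centroid estimate of a point-source location from detector
      # counts on a square grid.
      import numpy as np

      def centroid(counts, pitch=1.0):
          """counts: 2D array of detector signals; pitch: detector spacing."""
          total = counts.sum()
          ys, xs = np.indices(counts.shape)
          x0 = (xs * counts).sum() / total * pitch
          y0 = (ys * counts).sum() / total * pitch
          return x0, y0

      # Gaussian blur spot sampled on a 5x5 array of square detectors
      ys, xs = np.indices((5, 5))
      counts = np.exp(-((xs - 2.3)**2 + (ys - 1.8)**2) / (2 * 0.8**2))
      print(centroid(counts))   # close to (2.3, 1.8), up to truncation bias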

  1. Location-assured, multifactor authentication on smartphones via LTE communication

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham

    2013-05-01

    With the added security provided by LTE, geographical location has become an important factor for authentication to enhance the security of remote client authentication during mCommerce applications using Smartphones. A tight combination of geographical location with classic authentication factors like PINs/biometrics, in a real-time, remote verification scheme over the LTE layer connection, assures the authenticator about the client itself (via PIN/biometric) as well as the client's current location, and thus establishes the important aspects of "who", "when", and "where" of the authentication attempt without eavesdropping or man-in-the-middle attacks. To securely integrate location as an authentication factor into the remote authentication scheme, the client's location must be verified independently, i.e. the authenticator should not solely rely on the location determined on and reported by the client's Smartphone. The latest wireless data communication technology for mobile phones (4G LTE, Long-Term Evolution), recently being rolled out in various networks, can be employed to meet this location-factor requirement of independent location verification. LTE's Control Plane LBS provisions, when integrated with user-based authentication and an independent source of localisation factors, ensure secure, efficient, continuous location tracking of the Smartphone. This feature can be performed during normal operation of the LTE-based communication between client and network operator, resulting in the authenticator being able to verify the client's claimed location more securely and accurately. Trials and experiments show that such an algorithm implementation is viable for today's Smartphone-based banking via LTE communication.
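    A schematic of the server-side check combining a classic factor with an independently derived location might look as follows; the haversine comparison, tolerance, and PBKDF2 PIN hashing are illustrative assumptions, and the actual LTE Control Plane LBS interfaces are not modeled.

      # Schematic server-side check: the claimed factor (PIN) is verified with
      # a constant-time comparison, and the client-reported position is
      # cross-checked against a network-derived (e.g. LTE LBS) position.
      # Thresholds and parameters are illustrative only.
      import hashlib, hmac, math

      def haversine_km(lat1, lon1, lat2, lon2):
          p = math.pi / 180
          a = (math.sin((lat2 - lat1) * p / 2) ** 2 +
               math.cos(lat1 * p) * math.cos(lat2 * p) *
               math.sin((lon2 - lon1) * p / 2) ** 2)
          return 2 * 6371 * math.asin(math.sqrt(a))

      def authenticate(pin, stored_hash, salt, reported_pos, network_pos,
                       tol_km=1.0):
          pin_ok = hmac.compare_digest(
              hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000),
              stored_hash)
          loc_ok = haversine_km(*reported_pos, *network_pos) <= tol_km
          return pin_ok and loc_ok

      salt = b"demo-salt"
      stored = hashlib.pbkdf2_hmac("sha256", b"1234", salt, 100_000)
      print(authenticate("1234", stored, salt, (52.52, 13.40), (52.521, 13.401)))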

  2. Realistic weather simulations and forecast verification with COSMO-EULAG

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Piotrowski, Zbigniew; Rosa, Bogdan; Ziemiański, Michał

    2015-04-01

    Research conducted at the Polish Institute of Meteorology and Water Management, National Research Institute, in collaboration with the Consortium for Small Scale Modeling (COSMO) resulted in the development of a new prototype model, COSMO-EULAG. The dynamical core of the new model is based on the anelastic set of equations and numerics adopted from the EULAG model. The core is coupled, to 1st-order accuracy, to the COSMO physical parameterizations involving turbulence, friction, radiation, moist processes and surface fluxes. The tool is capable of computing weather forecasts in mountainous areas for horizontal resolutions ranging from 2.2 km to 0.1 km and with slopes reaching 82 degrees of inclination. The use of EULAG allows the model to profit from EULAG's desirable conservative properties and numerical robustness, confirmed in a number of benchmark tests and widely documented in the scientific literature. In this study we show a realistic case study of Alpine summer convection simulated by COSMO-EULAG. It compares the convection-permitting realization of the flow using a 2.2 km horizontal grid size, typical for contemporary very-high-resolution regional NWP forecasts, with a realization of LES type using a grid size of 100 m. The study presents a comparison of flow, cloud and precipitation structure, together with the reference results of a standard compressible COSMO Runge-Kutta model forecast at 2.2 km horizontal resolution. The case study results are supplemented by COSMO-EULAG forecast verification results for the Alpine domain at 2.2 km horizontal resolution. Wind, temperature, cloud, humidity and precipitation scores are presented. The verification period covers one summer month (June 2013) and one autumn month (November 2013). Verification is based on data collected by a network of approximately 200 stations (surface data verification) and 6 stations (upper-air verification) located in the Alps and their vicinity.

  3. Regional location in western China

    SciTech Connect

    Cogbill, A.H.; Steck, L.K.

    1996-10-01

    Accurately locating seismic events in western China using only regional seismic stations is a challenge. Not only is the number of seismic stations available for locating events small, but most stations available to researchers are often over 10° distant. Here the authors describe the relocation, using regional stations, of both nuclear and earthquake sources near the Lop Nor test site in western China. For such relocations, they used the Earthquake Data Reports provided by the US Geological Survey (USGS) for the reported travel times. Such reports provide a listing of all phases reported to the USGS from stations throughout the world, including many stations in the People's Republic of China. LocSAT was used as the location code. The authors systematically relocated each event in this study several times, using fewer and fewer stations at each relocation, with the farther stations being eliminated at each step. They found that location accuracy, judged by comparing solutions from few stations to the solution provided using all available stations, typically remained good until fewer than seven stations remained. With a good station distribution, location accuracy remained surprisingly good (within 7 km) using as few as 3 stations. Because these relocations were computed without good station corrections and without source-specific station corrections (that is, path corrections), they believe that such regional locations can be substantially improved, largely using static station corrections and source-specific station corrections, at least in the Lop Nor area, where sources have known locations. Elsewhere in China, one must rely upon known locations of regionally-recorded explosions. Locating such sources is clearly one of the major problems to be overcome before one can provide event locations with any assurance from regional stations.
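    The essence of such a relocation can be illustrated with a toy grid search that minimizes travel-time residuals under a uniform-velocity assumption; real location codes such as LocSAT use layered travel-time tables and the station corrections discussed above.

      # Toy grid-search location: pick the grid point minimizing RMS misfit
      # between observed arrivals and predicted times under a uniform velocity
      # model. Station geometry and velocity are illustrative assumptions.
      import numpy as np

      stations = np.array([[0.0, 100.0], [80.0, -30.0], [-60.0, 40.0]])  # km
      v = 8.0                                      # km/s, assumed uniform
      src_true, t0_true = np.array([20.0, 10.0]), 5.0
      t_obs = t0_true + np.linalg.norm(stations - src_true, axis=1) / v

      best, best_rms = None, np.inf
      for x in np.arange(-100, 101, 1.0):
          for y in np.arange(-100, 101, 1.0):
              d = np.linalg.norm(stations - np.array([x, y]), axis=1)
              resid = t_obs - d / v
              resid -= resid.mean()                # removes unknown origin time
              rms = np.sqrt(np.mean(resid**2))
              if rms < best_rms:
                  best, best_rms = (x, y), rms
      print(best)                                  # close to (20, 10)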

  4. Subsurface barrier verification technologies, informal report

    SciTech Connect

    Heiser, J.H.

    1994-06-01

    One of the more promising remediation options available to the DOE waste management community is subsurface barriers. Some of the uses of subsurface barriers include surrounding and/or containing buried waste, as secondary confinement of underground storage tanks, to direct or contain subsurface contaminant plumes, and to restrict remediation methods, such as vacuum extraction, to a limited area. To be most effective, the barriers should be continuous and, depending on use, have few or no breaches. A breach may be formed through numerous pathways, including discontinuous grout application, joints between panels, and cracking due to grout curing or wet-dry cycling. The ability to verify barrier integrity is valuable to the DOE, EPA, and commercial sector and will be required to gain full public acceptance of subsurface barriers as either primary or secondary confinement at waste sites. It is recognized that no suitable method exists for the verification of an emplaced barrier's integrity. The large size and deep placement of subsurface barriers make detection of leaks challenging. This becomes magnified if the permissible leakage from the site is low. Detection of small cracks (fractions of an inch) at depths of 100 feet or more has not been possible using existing surface geophysical techniques. Compounding the problem of locating flaws in a barrier is the fact that no placement technology can guarantee the completeness or integrity of the emplaced barrier. This report summarizes several commonly used or promising technologies that have been or may be applied to in-situ barrier continuity verification.

  5. COS Internal NUV Wavelength Verification

    NASA Astrophysics Data System (ADS)

    Keyes, Charles

    2009-07-01

    This program will be executed after the uplink of the OSM2 position updates derived from the determination of the wavelength-scale zero points and desired spectral ranges for each grating in activity COS14 {program 11474 - COS NUV Internal/External Wavelength Scales}. This program will verify that the operational spectral ranges for each grating, central wavelength, and FP-POS are those desired. Subsequent to a successful verification, COS NUV ERO observations and NUV science can be enabled. An internal wavelength calibration spectrum using the default PtNe lamp {lamp 1} with each NUV grating at each central wavelength setting and each FP-POS position will be obtained for the verification. Additional exposures and waits between certain exposures will be required to avoid - and to evaluate - mechanism drifts.

  6. COS Internal FUV Wavelength Verification

    NASA Astrophysics Data System (ADS)

    Keyes, Charles

    2009-07-01

    This program will be executed after the uplink of the OSM1 position updates derived from the determination of the wavelength-scale zero points and desired spectral ranges for each grating in activity COS29 {program 11487 - COS FUV Internal/External Wavelength Scales}. This program will verify that the operational spectral ranges for each grating, central wavelength, and FP-POS are those desired. Subsequent to a successful verification, COS FUV ERO observations that require accurate wavelength scales {if any} and FUV science can be enabled. An internal wavelength calibration spectrum using the default PtNe lamp {lamp 1} with each FUV grating at each central wavelength setting and each FP-POS position will be obtained for the verification. Additional exposures and waits between certain exposures will be required to avoid - and to evaluate - mechanism drifts.

  7. NEXT Thruster Component Verification Testing

    NASA Technical Reports Server (NTRS)

    Pinero, Luis R.; Sovey, James S.

    2007-01-01

    Component testing is a critical part of thruster life validation activities under NASA's Evolutionary Xenon Thruster (NEXT) project. The high voltage propellant isolators were selected for design verification testing. Even though they are based on a heritage design, design changes were made because the isolators will be operated under different environmental conditions, including temperature, voltage, and pressure. The life test of two NEXT isolators was therefore initiated and has accumulated more than 10,000 hr of operation. Measurements to date indicate only a negligibly small increase in leakage current. The cathode heaters were also selected for verification testing. The technology to fabricate these heaters, developed for the International Space Station plasma contactor hollow cathode assembly, was transferred to Aerojet for the fabrication of the NEXT prototype model ion thrusters. Testing the contractor-fabricated heaters is necessary to validate the fabrication processes for high-reliability heaters. This paper documents the status of the propellant isolator and cathode heater tests.

  8. Formal verification of AI software

    NASA Technical Reports Server (NTRS)

    Rushby, John; Whitehurst, R. Alan

    1989-01-01

    The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.

  9. Verification and transparency in future arms control

    SciTech Connect

    Pilat, J.F.

    1996-09-01

    Verification's importance has changed dramatically over time, although it has always been at the forefront of arms control. The goals and measures of verification and the criteria for success have changed with the times as well, reflecting such factors as the centrality of the prospective agreement to East-West relations during the Cold War, the state of relations between the United States and the Soviet Union, and the technologies available for monitoring. Verification's role may be declining in the post-Cold War period. The prospects for such a development will depend, first and foremost, on the high costs of traditional arms control, especially those associated with requirements for verification. Moreover, the growing interest in informal, or non-negotiated, arms control does not allow for verification provisions by the very nature of these arrangements. Multilateral agreements are also becoming more prominent and argue against highly effective verification measures, in part because of fears of promoting proliferation by opening sensitive facilities to inspectors from potential proliferant states. As a result, it is likely that transparency and confidence-building measures will achieve greater prominence, both as supplements to and substitutes for traditional verification. Such measures are not panaceas and do not offer all that we came to expect from verification during the Cold War. But they may be the best possible means to deal with current problems of arms reductions and restraints at acceptable levels of expenditure.

  10. Verification Challenges at Low Numbers

    SciTech Connect

    Benz, Jacob M.; Booker, Paul M.; McDonald, Benjamin S.

    2013-06-01

    Many papers have dealt with the political difficulties and ramifications of deep nuclear arms reductions, and the issues of "Going to Zero". Political issues include extended deterrence, conventional weapons, ballistic missile defense, and regional and geo-political security issues. At each step on the road to low numbers, the verification required to ensure compliance of all parties will increase significantly. Looking past New START, the next step will likely include warhead limits in the neighborhood of 1000. Further reductions will include stepping stones at 1000 warheads, 100's of warheads, and then 10's of warheads, before final elimination of the last few remaining warheads and weapons could be considered. This paper will focus on these three threshold reduction levels: 1000, 100's, and 10's. For each, the issues and challenges will be discussed, potential solutions will be identified, and the verification technologies and chain of custody measures that address these solutions will be surveyed. It is important to note that many of the issues that need to be addressed have no current solution. In these cases, the paper will explore new or novel technologies that could be applied. These technologies will draw from the research and development that is ongoing throughout the national laboratory complex, and will look at technologies utilized in other areas of industry for their application to arms control verification.

  11. Nuclear Data Verification and Standardization

    SciTech Connect

    Karam, Lisa R.; Arif, Muhammad; Thompson, Alan K.

    2011-10-01

    The objective of this interagency program is to provide accurate neutron interaction verification and standardization data for the U.S. Department of Energy Division of Nuclear Physics programs, which include astrophysics, radioactive beam studies, and heavy-ion reactions. The measurements made in this program are also useful to other programs that indirectly use the unique properties of the neutron for diagnostic and analytical purposes. These include homeland security, personnel health and safety, nuclear waste disposal, treaty verification, national defense, and nuclear based energy production. The work includes the verification of reference standard cross sections and related neutron data employing the unique facilities and capabilities at NIST and other laboratories as required; leadership and participation in international intercomparisons and collaborations; and the preservation of standard reference deposits. An essential element of the program is critical evaluation of neutron interaction data standards, including international coordination. Data testing of critical data for important applications is included. The program is jointly supported by the Department of Energy and the National Institute of Standards and Technology.

  12. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program

  13. Indoor location estimation using radio beacons

    NASA Astrophysics Data System (ADS)

    Ahmad, Uzair; Lee, Young-Koo; Lee, Sungyoug; Park, Chongkug

    2007-12-01

    We present a simple location estimation method for developing radio beacon based location systems in indoor environments. It employs an online learning approach for building large-scale location systems collaboratively in a short time. The salient features of our method are low memory requirements and simple computations, which make it suitable both for distributed location-aware applications based on a client-server model and for privacy-sensitive applications residing on stand-alone devices.
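    A minimal sketch of such a beacon-based estimator, assuming a fingerprint table of (location, RSSI-signature) pairs built during the online learning phase, is a nearest-signature lookup; beacon ids and dBm values below are illustrative.

      # Minimal RSSI-fingerprint locator: store (location, beacon-signature)
      # pairs during the learning phase, then answer queries with the nearest
      # stored signature. Ids and dBm values are illustrative only.
      def distance(sig_a, sig_b, missing=-100.0):
          keys = set(sig_a) | set(sig_b)
          return sum((sig_a.get(k, missing) - sig_b.get(k, missing)) ** 2
                     for k in keys)

      fingerprints = [
          ("room_101", {"b1": -40, "b2": -70}),
          ("room_102", {"b1": -75, "b2": -45}),
      ]

      def locate(signature):
          return min(fingerprints, key=lambda fp: distance(fp[1], signature))[0]

      print(locate({"b1": -42, "b2": -68}))   # -> room_101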

  14. Gender verification in competitive sports.

    PubMed

    Simpson, J L; Ljungqvist, A; de la Chapelle, A; Ferguson-Smith, M A; Genel, M; Carlson, A S; Ehrhardt, A A; Ferris, E

    1993-11-01

    The possibility that men might masquerade as women and be unfair competitors in women's sports is accepted as outrageous by athletes and the public alike. Since the 1930s, media reports have fuelled claims that individuals who once competed as female athletes subsequently appeared to be men. In most of these cases there was probably ambiguity of the external genitalia, possibly as a result of male pseudohermaphroditism. Nonetheless, beginning at the Rome Olympic Games in 1960, the International Amateur Athletics Federation (IAAF) began establishing rules of eligibility for women athletes. Initially, physical examination was used as a method for gender verification, but this plan was widely resented. Thus, sex chromatin testing (buccal smear) was introduced at the Mexico City Olympic Games in 1968. The principle was that genetic females (46,XX) show a single X-chromatic mass, whereas males (46,XY) do not. Unfortunately, sex chromatin analysis fell out of common diagnostic use by geneticists shortly after the International Olympic Committee (IOC) began its implementation for gender verification. The lack of laboratories routinely performing the test aggravated the problem of errors in interpretation by inexperienced workers, yielding false-positive and false-negative results. However, an even greater problem is that there exist phenotypic females with male sex chromatin patterns (e.g. androgen insensitivity, XY gonadal dysgenesis). These individuals have no athletic advantage as a result of their congenital abnormality and reasonably should not be excluded from competition. That is, only the chromosomal (genetic) sex is analysed by sex chromatin testing, not the anatomical or psychosocial status. For all the above reasons sex chromatin testing unfairly excludes many athletes. Although the IOC offered follow-up physical examinations that could have restored eligibility for those 'failing' sex chromatin tests, most affected athletes seemed to prefer to 'retire'. All

  15. ENVIRONMENTAL TECHNOLOGY VERIFICATION: GENERIC VERIFICATION PROTOCOL FOR BIOLOGICAL AND AEROSOL TESTING OF GENERAL VENTILATION AIR CLEANERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency established the Environmental Technology Verification Program to accelerate the development and commercialization of improved environmental technology through third party verification and reporting of product performance. Research Triangl...

  16. PERFORMANCE VERIFICATION OF STORMWATER TREATMENT DEVICES UNDER EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The Environmental Technology Verification (ETV) Program was created to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The program's goal is to further environmental protection by a...

  17. PERFORMANCE VERIFICATION OF ANIMAL WATER TREATMENT TECHNOLOGIES THROUGH EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The U.S. Environmental Protection Agency created the Environmental Technology Verification Program (ETV) to further environmental protection by accelerating the commercialization of new and innovative technology through independent performance verification and dissemination of in...

  18. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    An inexpensive system automatically indicates the location of a short-circuited section of power cable. The monitor does not require that the cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along the cable run. When a fault occurs, the sensors transmit information about the fault location to a control center. A repair crew can be sent to the location and the cable can be returned to service with a minimum of downtime.

  19. Magnetic cleanliness verification approach on tethered satellite

    NASA Technical Reports Server (NTRS)

    Messidoro, Piero; Braghin, Massimo; Grande, Maurizio

    1990-01-01

    Magnetic cleanliness testing was performed on the Tethered Satellite as the last step of an articulated verification campaign aimed at demonstrating the capability of the satellite to support its TEMAG (TEthered MAgnetometer) experiment. Tests at unit level and analytical predictions/correlations using a dedicated mathematical model (GANEW program) are also part of the verification activities. Details of the tests are presented, and the results of the verification are described together with recommendations for later programs.

  20. ENVIRONMENTAL TECHNOLOGY VERIFICATION: Reduction of Nitrogen in Domestic Wastewater from Individual Residential Homes. BioConcepts, Inc. ReCip® RTS-500 System

    EPA Science Inventory

    Verification testing of the ReCip® RTS-500 System was conducted over a 12-month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located on Otis Air National Guard Base in Bourne, Massachusetts. A nine-week startup period preceded the verification test t...

  1. Location, Location, Location: Where Do Location-Based Services Fit into Your Institution's Social Media Mix?

    ERIC Educational Resources Information Center

    Nekritz, Tim

    2011-01-01

    Foursquare is a location-based social networking service that allows users to share their location with friends. Some college administrators have been thinking about whether and how to take the leap into location-based services, which are also known as geosocial networking services. These platforms, which often incorporate gaming elements like…

  2. Verification of regional climates of GISS GCM. Part 1: Winter

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Rind, David

    1988-01-01

    Verification is made of the synoptic fields, sea level pressure, precipitation rate, 200 mb zonal wind and the surface resultant wind, generated by two versions of the GISS climate model. The models differ regarding the horizontal resolution of the computational grids and the specification of the sea surface temperatures. Maps of the regional distributions of seasonal variations of the model fields are shown alongside maps showing the observed distributions. Comparisons of the model results with observations are discussed, and also summarized in tables according to geographic regions.

  3. Verification of regional climates of GISS GCM. Part 2: Summer

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Rind, David

    1989-01-01

    Verification is made of the synoptic fields, sea-level pressure, precipitation rate, 200mb zonal wind and the surface resultant wind generated by two versions of the Goddard Institute for Space Studies (GISS) climate model. The models differ regarding the horizontal resolution of the computation grids and the specification of the sea-surface temperatures. Maps of the regional distributions of seasonal means of the model fields are shown alongside maps that show the observed distributions. Comparisons of the model results with observations are discussed and also summarized in tables according to geographic region.

  4. Automatic verification methods for finite state systems

    SciTech Connect

    Sifakis, J. )

    1990-01-01

    This volume contains the proceedings of a workshop devoted to the verification of finite state systems. The workshop focused on the development and use of methods, tools and theories for automatic verification of finite state systems. The goal at the workshop was to compare verification methods and tools to assist the applications designer. The papers review verification techniques for finite state systems and evaluate their relative advantages. The techniques considered cover various specification formalisms such as process algebras, automata and logics. Most of the papers focus on exploitation of existing results in three application areas: hardware design, communication protocols and real-time systems.

  5. The SeaHorn Verification Framework

    NASA Technical Reports Server (NTRS)

    Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.

    2015-01-01

    In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
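    The Horn-clause encoding of verification conditions can be illustrated with z3py's Fixedpoint (SPACER) engine, on which SeaHorn builds; the miniature below is hand-written for a toy loop, not SeaHorn's actual output.

      # Hand-written miniature of Horn-clause verification conditions, in the
      # spirit of SeaHorn's encoding, using z3py's Fixedpoint (SPACER) engine.
      # Program: x = 0; while (x < 5) x++; assert x == 5;
      from z3 import Fixedpoint, Function, Ints, IntSort, BoolSort

      fp = Fixedpoint()
      fp.set(engine='spacer')
      x, xp = Ints('x xp')
      inv = Function('inv', IntSort(), BoolSort())   # loop invariant predicate
      err = Function('err', BoolSort())              # error (assertion failure)
      fp.register_relation(inv)
      fp.register_relation(err)
      fp.declare_var(x, xp)

      fp.rule(inv(x), x == 0)                        # initialization
      fp.rule(inv(xp), [inv(x), x < 5, xp == x + 1]) # loop body
      fp.rule(err(), [inv(x), x >= 5, x != 5])       # assertion at loop exit

      print(fp.query(err()))   # 'unsat' means the error state is unreachable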

  6. Guide to good practices for independent verification

    SciTech Connect

    1998-12-01

    This Guide to Good Practices is written to enhance understanding of, and provide direction for, Independent Verification, Chapter X of Department of Energy (DOE) Order 5480.19, Conduct of Operations Requirements for DOE Facilities. The practices in this guide should be considered when planning or reviewing independent verification activities. Contractors are advised to adopt procedures that meet the intent of DOE Order 5480.19. Independent Verification is an element of an effective Conduct of Operations program. The complexity and array of activities performed in DOE facilities dictate the necessity for coordinated independent verification activities to promote safe and efficient operations.

  7. Input apparatus for dynamic signature verification systems

    DOEpatents

    EerNisse, Errol P.; Land, Cecil E.; Snelling, Jay B.

    1978-01-01

    The disclosure relates to signature verification input apparatus comprising a writing instrument and platen containing piezoelectric transducers which generate signals in response to writing pressures.

  8. Muscle glycogen and cell function--Location, location, location.

    PubMed

    Ørtenblad, N; Nielsen, J

    2015-12-01

    The importance of glycogen, as a fuel during exercise, is a fundamental concept in exercise physiology. The use of electron microscopy has revealed that glycogen is not evenly distributed in skeletal muscle fibers, but rather localized in distinct pools. In this review, we present the available evidence regarding the subcellular localization of glycogen in skeletal muscle and discuss this from the perspective of skeletal muscle fiber function. The distribution of glycogen in the defined pools within the skeletal muscle varies depending on exercise intensity, fiber phenotype, training status, and immobilization. Furthermore, these defined pools may serve specific functions in the cell. Specifically, reduced levels of these pools of glycogen are associated with reduced SR Ca(2+) release, muscle relaxation rate, and membrane excitability. Collectively, the available literature strongly demonstrates that the subcellular localization of glycogen has to be considered to fully understand the role of glycogen metabolism and signaling in skeletal muscle function. Here, we propose that the effect of low muscle glycogen on excitation-contraction coupling may serve as a built-in mechanism, which links the energetic state of the muscle fiber to energy utilization. PMID:26589115

  9. Conformance Verification of Privacy Policies

    NASA Astrophysics Data System (ADS)

    Fu, Xiang

    Web applications are both the consumers and providers of information. To increase customer confidence, many websites choose to publish their privacy protection policies. However, policy conformance is often neglected. We propose a logic based framework for formally specifying and reasoning about the implementation of privacy protection by a web application. A first order extension of computation tree logic is used to specify a policy. A verification paradigm, built upon a static control/data flow analysis, is presented to verify if a policy is satisfied.
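    An illustrative policy property in a first-order CTL style (the symbols are ours, not the paper's notation) might read:

      % Whenever a user's email is collected, every execution path must
      % eventually delete it or record the user's consent to retention.
      \[
        \forall u.\; AG\bigl(\mathit{collect}(\mathit{email}, u) \rightarrow
          AF\,(\mathit{delete}(\mathit{email}, u) \lor \mathit{consent}(u))\bigr)
      \]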

  10. Why do verification and validation?

    DOE PAGESBeta

    Hu, Kenneth T.; Paez, Thomas L.

    2016-02-19

    In this discussion paper, we explore different ways to assess the value of verification and validation (V&V) of engineering models. We first present a literature review on the value of V&V and then use value chains and decision trees to show how value can be assessed from a decision maker's perspective. In this context, the value is what the decision maker is willing to pay for V&V analysis, with the understanding that the V&V results are uncertain. Finally, the 2014 Sandia V&V Challenge Workshop is used to illustrate these ideas.
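    A toy decision-tree calculation of this willingness to pay might look as follows; all probabilities and costs are hypothetical, chosen only to show the mechanics.

      # Toy decision-tree view of the value of V&V: compare the expected cost
      # of deploying a model with and without paying for a V&V study.
      # All probabilities and costs are hypothetical, for illustration only.
      P_MODEL_BAD = 0.2          # prior probability the model is inadequate
      COST_FAILURE = 1_000_000   # cost if an inadequate model is deployed
      COST_VV = 50_000           # cost of the V&V study
      P_VV_CATCHES = 0.9         # chance V&V detects an inadequate model
      COST_FIX = 100_000         # cost of fixing the model once caught

      no_vv = P_MODEL_BAD * COST_FAILURE
      with_vv = (COST_VV
                 + P_MODEL_BAD * P_VV_CATCHES * COST_FIX
                 + P_MODEL_BAD * (1 - P_VV_CATCHES) * COST_FAILURE)
      print(f"expected cost without V&V: {no_vv:,.0f}")   # 200,000
      print(f"expected cost with V&V:    {with_vv:,.0f}") # 88,000: V&V pays off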

  11. Science verification results from PMAS

    NASA Astrophysics Data System (ADS)

    Roth, M. M.; Becker, T.; Böhm, P.; Kelz, A.

    2004-02-01

    PMAS, the Potsdam Multi-Aperture Spectrophotometer, is a new integral field instrument which was commissioned at the Calar Alto 3.5m Telescope in May 2001. We report on results obtained from a science verification run in October 2001. We present observations of the low-metallicity blue compact dwarf galaxy SBS0335-052, the ultra-luminous X-ray source X-1 in the Holmberg II galaxy, the quadruple gravitational lens system Q2237+0305 (the "Einstein Cross"), the Galactic planetary nebula NGC7027, and extragalactic planetary nebulae in M31. PMAS is now available as a common user instrument at Calar Alto Observatory.

  12. Structural dynamics verification facility study

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.; Hirchbein, M. S.; Mcaleese, J. M.; Fleming, D. P.

    1981-01-01

    The need for a structural dynamics verification facility to support structures programs was studied. Most of the industry operated facilities are used for highly focused research, component development, and problem solving, and are not used for the generic understanding of the coupled dynamic response of major engine subsystems. Capabilities for the proposed facility include: the ability to both excite and measure coupled structural dynamic response of elastic blades on elastic shafting, the mechanical simulation of various dynamical loadings representative of those seen in operating engines, and the measurement of engine dynamic deflections and interface forces caused by alternative engine mounting configurations and compliances.

  13. Mine locations: Kazakhstan

    SciTech Connect

    Perry, Bradley A

    2008-01-01

    Upon accepting this internship at Los Alamos National Laboratory, I was excited but a bit nervous, because I was placed into a field I knew nothing about and one that did not incorporate my mechanical engineering background. However, I stayed positive and realized that experience and education can come in many forms and that this would be a once-in-a-lifetime opportunity. The EES-II Division (Earth and Environmental Sciences, Geophysics) concentrates on several topics, including nuclear treaty verification seismology. This work is extremely important for monitoring countries that have nuclear capability and making sure they follow the rules of the international Comprehensive Nuclear-Test-Ban Treaty. Seismology is only one aspect of this monitoring, and EES-II works diligently with many other groups at Los Alamos and across the world.

  14. Hailstorms over Switzerland: Verification of Crowd-sourced Data

    NASA Astrophysics Data System (ADS)

    Noti, Pascal-Andreas; Martynov, Andrey; Hering, Alessandro; Martius, Olivia

    2016-04-01

    Reports from smartphone users witnessing hailstorms can be used as a source of independent, ground-based observations of ground-reaching hailstorms with high temporal and spatial resolution. The presented work focuses on the verification of crowd-sourced data collected over Switzerland with the help of a smartphone application recently developed by MeteoSwiss. The crowd-sourced data include the precise location, the time of hail precipitation, and the hailstone size, and are assessed on the basis of MeteoSwiss weather radar data. Two radar-based hail detection algorithms in use at MeteoSwiss, POH (Probability of Hail) and MESHS (Maximum Expected Severe Hail Size), are compared with the crowd-sourced data. The available data cover the investigation period from June to August 2015. Filter criteria have been applied in order to remove false reports from the crowd-sourced data. Neighborhood methods have been introduced to reduce the uncertainties which result from spatial and temporal biases. The crowd-sourced and radar data are converted into binary sequences according to previously set thresholds, allowing a categorical verification. Verification scores (e.g. hit rate) are then calculated from a 2x2 contingency table. The hail reporting activity and the patterns corresponding to "hail" and "no hail" reports, sent from smartphones, have been analyzed. The relationship between the reported hailstone sizes and both radar-based hail detection algorithms has also been investigated.
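    A minimal sketch of the categorical verification step described above, assuming toy binary sequences for the crowd reports (taken as truth) and the radar detections:

      import numpy as np

      reports = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # crowd: hail reported yes/no
      radar   = np.array([1, 0, 0, 0, 1, 1, 1, 0])  # radar: POH/MESHS above threshold

      hits         = int(np.sum((reports == 1) & (radar == 1)))
      misses       = int(np.sum((reports == 1) & (radar == 0)))
      false_alarms = int(np.sum((reports == 0) & (radar == 1)))

      pod = hits / (hits + misses)                 # probability of detection (hit rate)
      far = false_alarms / (hits + false_alarms)   # false alarm ratio
      csi = hits / (hits + misses + false_alarms)  # critical success index
      print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")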

  15. Enrichment Assay Methods Development for the Integrated Cylinder Verification System

    SciTech Connect

    Smith, Leon E.; Misner, Alex C.; Hatchell, Brian K.; Curtis, Michael M.

    2009-10-22

    International Atomic Energy Agency (IAEA) inspectors currently perform periodic inspections at uranium enrichment plants to verify UF6 cylinder enrichment declarations. Measurements are typically performed with handheld high-resolution sensors on a sampling of cylinders taken to be representative of the facility's entire product-cylinder inventory. Pacific Northwest National Laboratory (PNNL) is developing a concept to automate the verification of enrichment plant cylinders to enable 100 percent product-cylinder verification and potentially, mass-balance calculations on the facility as a whole (by also measuring feed and tails cylinders). The Integrated Cylinder Verification System (ICVS) could be located at key measurement points to positively identify each cylinder, measure its mass and enrichment, store the collected data in a secure database, and maintain continuity of knowledge on measured cylinders until IAEA inspector arrival. The three main objectives of this FY09 project are summarized here and described in more detail in the report: (1) Develop a preliminary design for a prototype NDA system, (2) Refine PNNL's MCNP models of the NDA system, and (3) Procure and test key pulse-processing components. Progress against these tasks to date, and next steps, are discussed.

  16. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) system is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving, and averaging calculated locations from multiple satellite passes are also described, along with results pertaining to single-pass and multiple-pass location estimate accuracy.
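    The abstract does not give the averaging details; as one plausible sketch, position fixes from individual passes can be combined by inverse-variance weighting (all coordinates and uncertainties below are fabricated):

      import math

      # (lat_deg, lon_deg, sigma_km) from individual passes; fabricated values
      fixes = [(45.02, -70.10, 5.0), (44.97, -70.04, 3.0), (45.05, -70.12, 8.0)]

      weights = [1.0 / s ** 2 for _, _, s in fixes]          # inverse-variance weights
      lat = sum(w * f[0] for w, f in zip(weights, fixes)) / sum(weights)
      lon = sum(w * f[1] for w, f in zip(weights, fixes)) / sum(weights)
      sigma = math.sqrt(1.0 / sum(weights))                  # combined 1-sigma radius
      print(f"combined fix: ({lat:.3f}, {lon:.3f}) +/- {sigma:.1f} km")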

  17. GENERIC VERIFICATION PROTOCOL FOR THE VERIFICATION OF PESTICIDE SPRAY DRIFT REDUCTION TECHNOLOGIES FOR ROW AND FIELD CROPS

    EPA Science Inventory

    This ETV program generic verification protocol was prepared and reviewed for the Verification of Pesticide Drift Reduction Technologies project. The protocol provides a detailed methodology for conducting and reporting results from a verification test of pesticide drift reductio...

  18. Reversible micromachining locator

    DOEpatents

    Salzer, Leander J.; Foreman, Larry R.

    1999-01-01

    This invention provides a device which includes a locator, a kinematic mount positioned on a conventional tooling machine, a part carrier disposed on the locator and a retainer ring. The locator has disposed therein a plurality of steel balls, placed in an equidistant position circumferentially around the locator. The kinematic mount includes a plurality of magnets which are in registry with the steel balls on the locator. In operation, a blank part to be machined is placed between a surface of a locator and the retainer ring (fitting within the part carrier). When the locator (with a blank part to be machined) is coupled to the kinematic mount, the part is thus exposed for the desired machining process. Because the locator is removably attachable to the kinematic mount, it can easily be removed from the mount, reversed, and reinserted onto the mount for additional machining. Further, the locator can likewise be removed from the mount and placed onto another tooling machine having a properly aligned kinematic mount. Because of the unique design and use of magnetic forces of the present invention, positioning errors of less than 0.25 micrometer for each machining process can be achieved.

  19. Reversible micromachining locator

    DOEpatents

    Salzer, L.J.; Foreman, L.R.

    1999-08-31

    This invention provides a device which includes a locator, a kinematic mount positioned on a conventional tooling machine, a part carrier disposed on the locator and a retainer ring. The locator has disposed therein a plurality of steel balls, placed in an equidistant position circumferentially around the locator. The kinematic mount includes a plurality of magnets which are in registry with the steel balls on the locator. In operation, a blank part to be machined is placed between a surface of a locator and the retainer ring (fitting within the part carrier). When the locator (with a blank part to be machined) is coupled to the kinematic mount, the part is thus exposed for the desired machining process. Because the locator is removably attachable to the kinematic mount, it can easily be removed from the mount, reversed, and reinserted onto the mount for additional machining. Further, the locator can likewise be removed from the mount and placed onto another tooling machine having a properly aligned kinematic mount. Because of the unique design and use of magnetic forces of the present invention, positioning errors of less than 0.25 micrometer for each machining process can be achieved. 7 figs.

  20. Runtime Verification with State Estimation

    NASA Technical Reports Server (NTRS)

    Stoller, Scott D.; Bartocci, Ezio; Seyster, Justin; Grosu, Radu; Havelund, Klaus; Smolka, Scott A.; Zadok, Erez

    2011-01-01

    We introduce the concept of Runtime Verification with State Estimation and show how this concept can be applied to estimate the probability that a temporal property is satisfied by a run of a program when monitoring overhead is reduced by sampling. In such situations, there may be gaps in the observed program executions, making accurate estimation challenging. To deal with the effects of sampling on runtime verification, we view event sequences as observation sequences of a Hidden Markov Model (HMM), use an HMM model of the monitored program to "fill in" sampling-induced gaps in observation sequences, and extend the classic forward algorithm for HMM state estimation (which determines the probability of a state sequence, given an observation sequence) to compute the probability that the property is satisfied by an execution of the program. To validate our approach, we present a case study based on the mission software for a Mars rover. The results of our case study demonstrate high prediction accuracy for the probabilities computed by our algorithm. They also show that our technique is much more accurate than simply evaluating the temporal property on the given observation sequences, ignoring the gaps.
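    A minimal sketch of the gap-filling idea (not the authors' code): a forward pass over a toy two-state HMM in which unmonitored steps (obs = None) apply only the transition matrix, marginalizing over the missing observation.

      import numpy as np

      A = np.array([[0.9, 0.1], [0.2, 0.8]])   # state transition matrix (toy)
      B = np.array([[0.7, 0.3], [0.1, 0.9]])   # P(observation | state), observation in {0, 1}
      pi = np.array([0.5, 0.5])                # initial state distribution

      def forward(observations):
          """Forward pass; None marks a sampling gap (no observation recorded)."""
          alpha = pi.copy()
          for o in observations:
              alpha = alpha @ A                # advance the hidden state
              if o is not None:                # observed step: weight by likelihood
                  alpha = alpha * B[:, o]
          return alpha / alpha.sum()           # posterior over the final hidden state

      # A sampled run in which every other event was not monitored.
      print(forward([0, None, 1, None, None, 1]))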

  1. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  2. Cognitive Bias in Systems Verification

    NASA Technical Reports Server (NTRS)

    Larson, Steve

    2012-01-01

    Working definition of cognitive bias: patterns by which information is sought and interpreted that can lead to systematic errors in decisions. Cognitive bias is used in diverse fields: economics, politics, intelligence, and marketing, to name a few. Attempts to ground cognitive science in physical characteristics of the cognitive apparatus exceed our knowledge. Studies are based on correlations; strict cause and effect is difficult to pinpoint. Effects cited in the paper and discussed here have been replicated many times over, and appear sound. Many biases have been described, but it is still unclear whether they are all distinct. There may only be a handful of fundamental biases, which manifest in various ways. Bias can affect system verification in many ways: overconfidence -> questionable decisions to deploy; availability -> inability to conceive critical tests; representativeness -> overinterpretation of results; positive test strategies -> confirmation bias. Debiasing at the individual level is very difficult. The potential effect of bias on the verification process can be managed, but not eliminated, and is worth considering at key points in the process.

  3. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and alignment, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both the dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single-impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple-impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283
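    A minimal sketch of how the quoted equal error rate can be computed from genuine and impostor match scores (all scores below are toy values):

      import numpy as np

      genuine  = np.array([0.91, 0.83, 0.78, 0.95, 0.88])  # same-finger match scores (toy)
      impostor = np.array([0.35, 0.52, 0.44, 0.61, 0.29])  # different-finger scores (toy)

      # Sweep candidate thresholds; a comparison is accepted when score >= threshold.
      thresholds = np.sort(np.concatenate([genuine, impostor]))
      t = min(thresholds,
              key=lambda t: abs(np.mean(genuine < t) - np.mean(impostor >= t)))
      frr, far = float(np.mean(genuine < t)), float(np.mean(impostor >= t))
      print(f"threshold={t:.2f}  FRR={frr:.2f}  FAR={far:.2f}  EER~{(frr + far) / 2:.2f}")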

  4. The Ontogeny of the Verification System.

    ERIC Educational Resources Information Center

    Akiyama, M. Michael; Guillory, Andrea W.

    1983-01-01

    Young children found it difficult to verify negative statements, but found affirmative statements, affirmative questions, and negative questions equally easy to deal with. It is proposed that children acquire the answering system earlier than the verification system, and use answering to verify statements before acquiring the verification system.…

  5. The monitoring and verification of nuclear weapons

    SciTech Connect

    Garwin, Richard L.

    2014-05-09

    This paper partially reviews and updates the potential for monitoring and verification of nuclear weapons, including verification of their destruction. Cooperative monitoring with templates of the gamma-ray spectrum is an important tool, dependent on the use of information barriers.

  6. ENVIRONMENTAL TECHNOLOGY VERIFICATION AND INDOOR AIR

    EPA Science Inventory

    The paper discusses environmental technology verification and indoor air. RTI has responsibility for a pilot program for indoor air products as part of the U.S. EPA's Environmental Technology Verification (ETV) program. The program objective is to further the development of sel...

  7. 25 CFR 61.8 - Verification forms.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR TRIBAL GOVERNMENT PREPARATION OF ROLLS OF INDIANS § 61.8... enrollment, a verification form, to be completed and returned, shall be mailed to each previous enrollee using the last address of record. The verification form will be used to ascertain the previous...

  8. ENVIRONMENTAL TECHNOLOGY VERIFICATION FOR INDOOR AIR PRODUCTS

    EPA Science Inventory

    The paper discusses environmental technology verification (ETV) for indoor air products. RTI is developing the framework for a verification testing program for indoor air products, as part of EPA's ETV program. RTI is establishing test protocols for products that fit into three...

  9. Gender Verification of Female Olympic Athletes.

    ERIC Educational Resources Information Center

    Dickinson, Barry D.; Genel, Myron; Robinowitz, Carolyn B.; Turner, Patricia L.; Woods, Gary L.

    2002-01-01

    Gender verification of female athletes has long been criticized by geneticists, endocrinologists, and others in the medical community. Recently, the International Olympic Committee's Athletic Commission called for discontinuation of mandatory laboratory-based gender verification of female athletes. This article discusses normal sexual…

  10. The monitoring and verification of nuclear weapons

    NASA Astrophysics Data System (ADS)

    Garwin, Richard L.

    2014-05-01

    This paper partially reviews and updates the potential for monitoring and verification of nuclear weapons, including verification of their destruction. Cooperative monitoring with templates of the gamma-ray spectrum is an important tool, dependent on the use of information barriers.

  11. 40 CFR 1066.220 - Linearity verification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Linearity verification. 1066.220 Section 1066.220 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.220 Linearity verification. (a) Scope and frequency. Perform linearity...

  12. IMPROVING AIR QUALITY THROUGH ENVIRONMENTAL TECHNOLOGY VERIFICATIONS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) began the Environmental Technology Verification (ETV) Program in 1995 as a means of working with the private sector to establish a market-based verification process available to all environmental technologies. Under EPA's Office of R...

  13. HTGR analytical methods and design verification

    SciTech Connect

    Neylan, A.J.; Northup, T.E.

    1982-05-01

    Analytical methods for the high-temperature gas-cooled reactor (HTGR) include development, update, verification, documentation, and maintenance of all computer codes for HTGR design and analysis. This paper presents selected nuclear, structural mechanics, seismic, and systems analytical methods related to the HTGR core. This paper also reviews design verification tests in the reactor core, reactor internals, steam generator, and thermal barrier.

  14. Students' Verification Strategies for Combinatorial Problems

    ERIC Educational Resources Information Center

    Mashiach Eizenberg, Michal; Zaslavsky, Orit

    2004-01-01

    We focus on a major difficulty in solving combinatorial problems, namely, on the verification of a solution. Our study aimed at identifying undergraduate students' tendencies to verify their solutions, and the verification strategies that they employ when solving these problems. In addition, an attempt was made to evaluate the level of efficiency…

  15. Verification testing of advanced environmental monitoring systems

    SciTech Connect

    Kelly, T.J.; Riggs, K.B.; Fuerst, R.G.

    1999-03-01

    This paper describes the Advanced Monitoring Systems (AMS) pilot project, one of 12 pilots comprising the US EPA's Environmental Technology Verification (ETV) program. The aim of ETV is to promote the acceptance of environmental technologies in the marketplace, through objective third-party verification of technology performance.

  16. 47 CFR 74.737 - Antenna location.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.737 Antenna location. (a) An applicant for a new low power TV, TV translator, or...

  17. 47 CFR 74.737 - Antenna location.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.737 Antenna location. (a) An applicant for a new low power TV, TV translator, or...

  18. 47 CFR 74.737 - Antenna location.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.737 Antenna location. (a) An applicant for a new low power TV, TV translator, or...

  19. 47 CFR 74.737 - Antenna location.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.737 Antenna location. (a) An applicant for a new low power TV, TV translator, or...

  20. 47 CFR 74.737 - Antenna location.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.737 Antenna location. (a) An applicant for a new low power TV, TV translator, or...

  1. CTBT Integrated Verification System Evaluation Model

    SciTech Connect

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level, modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
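    One top-level calculation such a model plausibly performs (details assumed, probabilities purely illustrative) is combining independent per-technology detection probabilities into an integrated probability of detection:

      # Illustrative per-technology detection probabilities for one event.
      p_detect = {"seismic": 0.80, "infrasound": 0.35,
                  "radionuclide": 0.55, "hydroacoustic": 0.05}

      p_miss_all = 1.0
      for technology, p in p_detect.items():
          p_miss_all *= 1.0 - p        # event missed by every technology

      print(f"integrated P(detect) = {1.0 - p_miss_all:.3f}")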

  2. Stennis Space Center Verification & Validation Capabilities

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; O'Neal, Duane; Knowlton, Kelly; Ross, Kenton; Blonski, Slawomir

    2007-01-01

    Scientists within NASA's Applied Research & Technology Project Office (formerly the Applied Sciences Directorate) have developed a well-characterized remote sensing Verification & Validation (V&V) site at the John C. Stennis Space Center (SSC). This site enables the in-flight characterization of satellite and airborne high spatial resolution remote sensing systems and their products. The smaller scale of the newer high resolution remote sensing systems allows scientists to characterize geometric, spatial, and radiometric data properties using a single V&V site. The targets and techniques used to characterize data from these newer systems can differ significantly from the techniques used to characterize data from the earlier, coarser spatial resolution systems. Scientists have used the SSC V&V site to characterize thermal infrared systems. Enhancements are being considered to characterize active lidar systems. SSC employs geodetic targets, edge targets, radiometric tarps, atmospheric monitoring equipment, and thermal calibration ponds to characterize remote sensing data products. Similar techniques are used to characterize moderate spatial resolution sensing systems at selected nearby locations. The SSC Instrument Validation Lab is a key component of the V&V capability and is used to calibrate field instrumentation and to provide National Institute of Standards and Technology traceability. This poster presents a description of the SSC characterization capabilities and examples of calibration data.

  3. Cleanup Verification Package for the 118-F-1 Burial Ground

    SciTech Connect

    E. J. Farris and H. M. Sulloway

    2008-01-10

    This cleanup verification package documents completion of remedial action for the 118-F-1 Burial Ground on the Hanford Site. This burial ground is a combination of two locations formerly called Minor Construction Burial Ground No. 2 and Solid Waste Burial Ground No. 2. This waste site received radioactive equipment and other miscellaneous waste from 105-F Reactor operations, including dummy elements and irradiated process tubing; gun barrel tips, steel sleeves, and metal chips removed from the reactor; filter boxes containing reactor graphite chips; and miscellaneous construction solid waste.

  4. Hierarchical Design and Verification for VLSI

    NASA Technical Reports Server (NTRS)

    Shostak, R. E.; Elliott, W. D.; Levitt, K. N.

    1983-01-01

    The specification and verification work is described in detail, and some of the problems and issues to be resolved in their application to Very Large Scale Integration (VLSI) systems are examined. The hierarchical design methodologies enable a system architect or design team to decompose a complex design into a formal hierarchy of levels of abstraction. The first step in program verification is tree formation. The next step after tree formation is the generation, from the trees, of the verification conditions themselves. The approach taken here is similar in spirit to the corresponding step in program verification but requires modeling of the semantics of circuit elements rather than program statements. The last step is that of proving the verification conditions using a mechanical theorem-prover.

  5. New method of verifying optical flat flatness

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Li, Xueyuan; Han, Sen; Zhu, Jianrong; Guo, Zhenglai; Fu, Yuegang

    2014-11-01

    Optical flats are commonly used in optical testing instruments, and flatness is their most important form-error parameter. As a measurement standard, the optical flat flatness (OFF) index must be determined with good precision. Current measurement practice in China depends heavily on manual visual interpretation, characterizing flatness through discrete points. The efficiency and accuracy of this method cannot meet the demands of industrial development. In order to improve testing efficiency and measurement accuracy, it is necessary to develop an optical flat verification system that can obtain full-surface information rapidly and efficiently while complying with current national metrological verification procedures. This paper reviews the current optical flat verification method and solves the problems of previous tests by using a new method and its supporting software. Final results show that the new system improves verification efficiency and accuracy compared with the method of the JJG 28-2000 metrological verification procedure.

  6. Space transportation system payload interface verification

    NASA Technical Reports Server (NTRS)

    Everline, R. T.

    1977-01-01

    The paper considers STS payload-interface verification requirements and the capability provided by STS to support verification. The intent is to standardize as many interfaces as possible, not only through the design, development, test and evaluation (DDT and E) phase of the major payload carriers but also into the operational phase. The verification process is discussed in terms of its various elements, such as the Space Shuttle DDT and E (including the orbital flight test program) and the major payload carriers DDT and E (including the first flights). Five tools derived from the Space Shuttle DDT and E are available to support the verification process: mathematical (structural and thermal) models, the Shuttle Avionics Integration Laboratory, the Shuttle Manipulator Development Facility, and interface-verification equipment (cargo-integration test equipment).

  7. Test/QA Plan for Verification of Semi-Continuous Ambient Air Monitoring Systems - Second Round

    EPA Science Inventory

    Test/QA Plan for Verification of Semi-Continuous Ambient Air Monitoring Systems - Second Round. Changes reflect performance of second round of testing at new location and with various changes to personnel. Additional changes reflect general improvements to the Version 1 test/QA...

  8. Environmental Technology Verification Report for Applikon MARGA Semi-Continuous Ambient Air Monitoring System

    EPA Science Inventory

    The verification test was conducted over a period of 30 days (October 1 to October 31, 2008) and involved the continuous operation of duplicate semi-continuous monitoring technologies at the Burdens Creek Air Monitoring Site, an existing ambient-air monitoring station located near

  9. Criteria for monitoring a chemical arms treaty: Implications for the verification regime

    SciTech Connect

    Mullen, M.F.; Apt, K.E.; Stanbro, W.D.

    1991-12-01

    The multinational Chemical Weapons Convention (CWC) being negotiated at the Conference on Disarmament in Geneva is viewed by many as an effective way to rid the world of the threat of chemical weapons. Parties could, however, legitimately engage in certain CW-related activities in industry, agriculture, research, medicine, and law enforcement. Treaty verification requirements related to declared activities include: confirming destruction of declared CW stockpiles and production facilities; monitoring legitimate, treaty-allowed activities, such as production of certain industrial chemicals; and, detecting proscribed activities within the declared locations of treaty signatories, e.g., the illegal production of CW agents at a declared industrial facility or the diversion or substitution of declared CW stockpile items. Verification requirements related to undeclared activities or locations include investigating possible clandestine CW stocks and production capability not originally declared by signatories; detecting clandestine, proscribed activities at facilities or sites that are not declared and hence not subject to routine inspection; and, investigating allegations of belligerent use of CW. We discuss here a possible set of criteria for assessing the effectiveness of CWC verification (and certain aspects of the bilateral CW reduction agreement). Although the criteria are applicable to the full range of verification requirements, the discussion emphasizes verification of declared activities and sites.

  10. Criteria for monitoring a chemical arms treaty: Implications for the verification regime. Report No. 13

    SciTech Connect

    Mullen, M.F.; Apt, K.E.; Stanbro, W.D.

    1991-12-01

    The multinational Chemical Weapons Convention (CWC) being negotiated at the Conference on Disarmament in Geneva is viewed by many as an effective way to rid the world of the threat of chemical weapons. Parties could, however, legitimately engage in certain CW-related activities in industry, agriculture, research, medicine, and law enforcement. Treaty verification requirements related to declared activities include: confirming destruction of declared CW stockpiles and production facilities; monitoring legitimate, treaty-allowed activities, such as production of certain industrial chemicals; and, detecting proscribed activities within the declared locations of treaty signatories, e.g., the illegal production of CW agents at a declared industrial facility or the diversion or substitution of declared CW stockpile items. Verification requirements related to undeclared activities or locations include investigating possible clandestine CW stocks and production capability not originally declared by signatories; detecting clandestine, proscribed activities at facilities or sites that are not declared and hence not subject to routine inspection; and, investigating allegations of belligerent use of CW. We discuss here a possible set of criteria for assessing the effectiveness of CWC verification (and certain aspects of the bilateral CW reduction agreement). Although the criteria are applicable to the full range of verification requirements, the discussion emphasizes verification of declared activities and sites.

  11. Object locating system

    DOEpatents

    Novak, James L.; Petterson, Ben

    1998-06-09

    A sensing system locates an object by sensing the object's effect on electric fields. The object's effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions.

  12. Reversible micromachining locator

    SciTech Connect

    Salzer, Leander J.; Foreman, Larry R.

    2002-01-01

    A locator with a part support is used to hold a part onto the kinematic mount of a tooling machine so that the part can be held in, or replaced in, exactly the same position relative to the cutting tool for machining different surfaces of the part or for performing different machining operations on the same or different surfaces of the part. The locator has disposed therein a plurality of steel balls placed at equidistant positions around the planar surface of the locator, and the kinematic mount has a plurality of magnets which alternate with grooves which accommodate the portions of the steel balls projecting from the locator. The part support holds the part to be machined securely in place in the locator. The locator can be easily detached from the kinematic mount, turned over, and replaced onto the same kinematic mount, or onto another kinematic mount on another tooling machine, without removing the part to be machined from the locator. There is thus no need to touch or reposition the part within the locator, assuring exact replication of the position of the part in relation to the cutting tool for each machining operation on the part.

  13. Acoustic emission source location

    NASA Astrophysics Data System (ADS)

    Promboon, Yajai

    The objective of the research program was the development of reliable source location techniques. The study comprised two phases. First, the research focused on the development of source location methods for homogeneous plates. The specimens used in the program were steel railroad tank cars. Source location methods were developed and demonstrated for empty and water-filled tanks. The second phase of the research was an exploratory study of source location methods for fiber reinforced composites. Theoretical analysis and experimental measurement of wave propagation were carried out. These data provided the basis for development of a method using the intersection of the group velocity curves for the first three wave propagation modes. Simplex optimization was used to calculate the location of the source. Additional source location methods have been investigated and critically examined. Emphasis has been placed on evaluating different methods for determining the time of arrival of a wave. The behavior of waves in a water-filled tank was studied, and source location methods suitable for use in this situation have been examined through experiment and theory. Particular attention is paid to the problem caused by leaky Lamb waves. A preliminary study into the use of neural networks for source location in fiber reinforced composites was included in the research program. A preliminary neural network model and the results from training and testing data are reported.
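    A minimal sketch of time-of-arrival source location with a simplex (Nelder-Mead) search, as the abstract describes; the sensor geometry, wave speed, and arrival times below are fabricated for illustration.

      import numpy as np
      from scipy.optimize import minimize

      sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m
      c = 3000.0                                   # assumed group velocity, m/s
      true_source = np.array([0.3, 0.7])           # used only to synthesize arrivals
      arrivals = np.linalg.norm(sensors - true_source, axis=1) / c

      def residual(params):
          """Sum of squared arrival-time residuals for a trial (x, y, t0)."""
          x, y, t0 = params
          predicted = t0 + np.linalg.norm(sensors - np.array([x, y]), axis=1) / c
          return np.sum((predicted - arrivals) ** 2)

      fit = minimize(residual, x0=[0.5, 0.5, 0.0], method="Nelder-Mead")
      print("estimated source:", np.round(fit.x[:2], 3))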

  14. 46 CFR 76.60-10 - Location.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 3 2011-10-01 2011-10-01 false Location. 76.60-10 Section 76.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 76.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to passengers and crew so...

  15. 46 CFR 95.60-10 - Location.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Location. 95.60-10 Section 95.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 95.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to...

  16. 46 CFR 76.60-10 - Location.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 3 2014-10-01 2014-10-01 false Location. 76.60-10 Section 76.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 76.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to passengers and crew so...

  17. 46 CFR 95.60-10 - Location.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Location. 95.60-10 Section 95.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 95.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to...

  18. 46 CFR 193.60-10 - Location.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Location. 193.60-10 Section 193.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OCEANOGRAPHIC RESEARCH VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 193.60-10 Location. (a) Fire axes shall be distributed throughout the spaces...

  19. 46 CFR 95.60-10 - Location.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Location. 95.60-10 Section 95.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 95.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to...

  20. 46 CFR 193.60-10 - Location.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Location. 193.60-10 Section 193.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OCEANOGRAPHIC RESEARCH VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 193.60-10 Location. (a) Fire axes shall be distributed throughout the spaces...

  1. 46 CFR 193.60-10 - Location.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Location. 193.60-10 Section 193.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OCEANOGRAPHIC RESEARCH VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 193.60-10 Location. (a) Fire axes shall be distributed throughout the spaces...

  2. 46 CFR 193.60-10 - Location.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 7 2012-10-01 2012-10-01 false Location. 193.60-10 Section 193.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OCEANOGRAPHIC RESEARCH VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 193.60-10 Location. (a) Fire axes shall be distributed throughout the spaces...

  3. 46 CFR 76.60-10 - Location.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 3 2013-10-01 2013-10-01 false Location. 76.60-10 Section 76.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 76.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to passengers and crew so...

  4. 46 CFR 193.60-10 - Location.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 7 2014-10-01 2014-10-01 false Location. 193.60-10 Section 193.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OCEANOGRAPHIC RESEARCH VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 193.60-10 Location. (a) Fire axes shall be distributed throughout the spaces...

  5. 46 CFR 95.60-10 - Location.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Location. 95.60-10 Section 95.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 95.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to...

  6. 46 CFR 76.60-10 - Location.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 3 2010-10-01 2010-10-01 false Location. 76.60-10 Section 76.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 76.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to passengers and crew so...

  7. 46 CFR 76.60-10 - Location.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 3 2012-10-01 2012-10-01 false Location. 76.60-10 Section 76.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PASSENGER VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 76.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to passengers and crew so...

  8. 46 CFR 95.60-10 - Location.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Location. 95.60-10 Section 95.60-10 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CARGO AND MISCELLANEOUS VESSELS FIRE PROTECTION EQUIPMENT Fire Axes § 95.60-10 Location. (a) Fire axes shall be distributed throughout the spaces available to...

  9. Integrating Fingerprint Verification into the Smart Card-Based Healthcare Information System

    NASA Astrophysics Data System (ADS)

    Moon, Daesung; Chung, Yongwha; Pan, Sung Bum; Park, Jin-Won

    2009-12-01

    As VLSI technology has improved, smart cards employing 32-bit processors have been released, and more personal information, such as medical and financial data, can be stored in the card. Thus, it becomes important to protect the personal information stored in the card. Verification of the card holder's identity using a fingerprint has advantages over the present practices of Personal Identification Numbers (PINs) and passwords. However, the computational workload of fingerprint verification is much heavier than that of the typical PIN-based solution. In this paper, we consider three strategies to implement fingerprint verification in a smart card environment and how to distribute the modules of fingerprint verification between the smart card and the card reader. We first evaluate the number of instructions of each step of a typical fingerprint verification algorithm, and estimate the execution time of several cryptographic algorithms that guarantee the security/privacy of the fingerprint data transmitted in the smart-card-based client-server environment. Based on the evaluation results, we analyze each scenario with respect to the security level and the real-time execution requirements in order to implement fingerprint verification in the smart card with the client-server environment.

  10. Estimation of the ROC Curve under Verification Bias

    PubMed Central

    FLUSS, RONEN; REISER, BENJAMIN; FARAGGI, DAVID; ROTNITZKY, ANDREA

    2009-01-01

    Summary The ROC (Receiver Operating Characteristic) curve is the most commonly used statistical tool for describing the discriminatory accuracy of a diagnostic test. Classical estimation of the ROC curve relies on data from a simple random sample from the target population. In practice, estimation is often complicated because not all subjects undergo a definitive assessment of disease status (verification). Estimation of the ROC curve based on data only from subjects with verified disease status may be badly biased. In this work we investigate the properties of the doubly robust (DR) method for estimating the ROC curve under verification bias, originally developed by Rotnitzky et al. (2006) for estimating the area under the ROC curve. The DR method can be applied to continuous-scaled tests and allows for a nonignorable process of selection to verification. We develop the estimator's asymptotic distribution and examine its finite sample properties via a simulation study. We exemplify the DR procedure for estimation of ROC curves with data collected on patients undergoing electron beam computer tomography, a diagnostic test for calcification of the arteries. PMID:19588455
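    A minimal sketch of the simplest correction for verification bias, inverse probability weighting, which is one building block of the doubly robust estimator (the DR method additionally incorporates an outcome model); all data below are simulated.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      score = rng.normal(size=n)                          # diagnostic test result
      disease = rng.random(n) < 1 / (1 + np.exp(-score))  # true status (simulated)
      p_verify = 1 / (1 + np.exp(-(score - 0.5)))         # higher scores verified more often
      verified = rng.random(n) < p_verify

      w = verified / p_verify                   # inverse-probability weights (0 if unverified)
      d = np.where(verified, disease, False)    # disease status known only when verified

      def roc_point(threshold):
          positive = score >= threshold
          tpr = np.sum(w * positive * d) / np.sum(w * d)
          fpr = np.sum(w * positive * ~d) / np.sum(w * ~d)
          return round(float(tpr), 2), round(float(fpr), 2)

      print([roc_point(t) for t in (-1.0, 0.0, 1.0)])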

  11. Shipper/receiver difference verification of spent fuel by use of PDET

    SciTech Connect

    Ham, Y. S.; Sitaraman, S.

    2011-07-01

    Spent fuel storage pools in most countries are rapidly approaching their design limits with the discharge of over 10,000 metric tons of heavy metal from global reactors. Countries such as the UK, France, and Japan have adopted a closed fuel cycle, reprocessing spent fuel and recycling MOX fuel, while many other countries have opted for above-ground interim dry storage as their spent fuel management strategy. Some countries, like Finland and Sweden, are already well on the way to setting up a conditioning plant and a deep geological repository for spent fuel. For all these situations, shipments of spent fuel are needed, and the number of these shipments is expected to increase significantly. Although shipper/receiver difference (SRD) verification measurements are needed by the IAEA when a recipient facility receives spent fuel, these are not being practiced to the level the IAEA has desired, due to the lack of a credible measurement methodology and instrument that can reliably perform these measurements to verify non-diversion of spent fuel during shipment and confirm facility operator declarations on the spent fuel. In this paper, we describe a new safeguards method and an associated instrument, the Partial Defect Tester (PDET), which can detect pin diversion from Pressurized Water Reactor (PWR) spent fuel assemblies in situ. The PDET uses multiple tiny neutron and gamma detectors in the form of a cluster and a simple, yet highly precise, gravity-driven system to obtain underwater radiation measurements inside a PWR spent fuel assembly. The method takes advantage of the PWR fuel design, which contains multiple guide tubes that can be accessed from the top. The data obtained in such a manner can provide the spatial distribution of the neutron and gamma flux within a spent fuel assembly. Our simulation studies as well as validation measurements indicate that the ratio of the gamma signal to the thermal neutron signal at each detector location normalized to

  12. In vivo proton range verification: a review

    NASA Astrophysics Data System (ADS)

    Knopf, Antje-Christin; Lomax, Antony

    2013-08-01

    Protons are an interesting modality for radiotherapy because of their well-defined range and favourable depth-dose characteristics. On the other hand, these same characteristics lead to added uncertainties in their delivery. This is particularly the case at the distal end of proton dose distributions, where the dose gradient can be extremely steep. In practice, however, this gradient is rarely used to spare critical normal tissues, due to such worries about its exact position in the patient. Reasons for this uncertainty are inaccuracies and non-uniqueness of the calibration from CT Hounsfield units to proton stopping powers, imaging artefacts (e.g. due to metal implants), and anatomical changes of the patient during treatment. In order to improve the precision of proton therapy, therefore, it would be extremely desirable to verify proton range in vivo, either prior to, during, or after therapy. In this review, we describe and compare the state-of-the-art in vivo proton range verification methods currently being proposed, developed or clinically implemented.

  13. Verification of the Resistive DCON Code

    NASA Astrophysics Data System (ADS)

    Glasser, A. H.; Wang, Z. R.; Park, J.-K.

    2014-10-01

    The ideal MHD axisymmetric toroidal stability code DCON has been extended to treat resistive instabilities, with resonant surfaces at rational safety factor values q = m/n. DCON solves the ideal MHD equations using a singular Galerkin method to obtain matching data for the ideal outer region. Robust convergence is achieved by a careful choice of basis functions: C1 Hermite cubics to resolve nonresonant solutions; a high-order power series in the neighborhood of each singular surface to resolve the large and small resonant solutions; and a grid-packing algorithm that provides high resolution near the singular surfaces and an adequate grid in the nonresonant region. The degenerate case β = 0 has been derived and coded for verification, in addition to the nondegenerate case β > 0. The DELTAR code computes the corresponding inner-region matching data for the resistive MHD equations of Glasser, Greene, and Johnson. The MATCH code matches the inner- and outer-region data to obtain global eigenvalues and eigenfunctions. The VACUUM code provides data for a vacuum region outside the plasma. The MARS-F code, which solves the same equations by a straight-through method, is used to verify the accuracy of the DCON solution. Results will be presented.

  14. Sensors Locate Radio Interference

    NASA Technical Reports Server (NTRS)

    2009-01-01

    After receiving a NASA Small Business Innovation Research (SBIR) contract from Kennedy Space Center, Soneticom Inc., based in West Melbourne, Florida, created algorithms for time difference of arrival and radio interferometry, which it used in its Lynx Location System (LLS) to locate electromagnetic interference that can disrupt radio communications. Soneticom is collaborating with the Federal Aviation Administration (FAA) to install and test the LLS at its field test center in New Jersey in preparation for deploying the LLS at commercial airports. The software collects data from each sensor in order to compute the location of the interfering emitter.

  15. Verification of NASA Emergent Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy K. C. S.; Truszkowski, Walt; Rash, James; Hinchey, Mike

    2004-01-01

    NASA is studying advanced technologies for a future robotic exploration mission to the asteroid belt. This mission, the prospective ANTS (Autonomous Nano Technology Swarm) mission, will comprise 1,000 autonomous robotic agents designed to cooperate in asteroid exploration. The emergent properties of swarm-type missions make them powerful, but at the same time more difficult to design and to assure that the proper behaviors will emerge. We are currently investigating formal methods and techniques for verification and validation of future swarm-based missions. The advantage of using formal methods is their ability to mathematically assure the behavior of a swarm, emergent or otherwise. The ANTS mission is being used as an example and case study of swarm-based missions with which to experiment and test current formal methods for intelligent swarms. Using the ANTS mission, we have evaluated multiple formal methods to determine their effectiveness in modeling and assuring swarm behavior.

  16. Verification of FANTASTIC integrated code

    NASA Technical Reports Server (NTRS)

    Chauhan, Rajinder Singh

    1987-01-01

    FANTASTIC is an acronym for Failure Analysis Nonlinear Thermal and Structural Integrated Code. This program was developed by Failure Analysis Associates, Palo Alto, Calif., for MSFC to improve the accuracy of solid rocket motor nozzle analysis. FANTASTIC has three modules: FACT (thermochemical analysis), FAHT (heat transfer analysis), and FAST (structural analysis). All modules have keywords for data input. Work is in progress on the verification of the FAHT module, which is done by using data for various problems with known solutions as inputs to the FAHT module. The information obtained is used to identify problem areas of the code and is passed on to the developer for debugging purposes. Failure Analysis Associates has revised the first version of the FANTASTIC code, and a new, improved version has been released to the Thermal Systems Branch.

  17. Automated claim and payment verification.

    PubMed

    Segal, Mark J; Morris, Susan; Rubin, James M O

    2002-01-01

    Since the start of managed care, there has been steady deterioration in the ability of physicians, hospitals, payors, and patients to understand reimbursement and the contracts and payment policies that drive it. This lack of transparency has generated administrative costs, confusion, and mistrust. It is therefore essential that physicians, hospitals, and payors have rapid access to accurate information on contractual payment terms. This article summarizes problems with contract-based reimbursement and needed responses by medical practices. It describes an innovative, Internet-based claims and payment verification service, Phynance, which automatically verifies the accuracy of all claims and payments by payor, contract and line item. This service enables practices to know and apply the one, true, contractually obligated allowable. The article details implementation costs and processes and anticipated return on investment. The resulting transparency improves business processes throughout health care, increasing efficiency and lowering costs for physicians, hospitals, payors, employers--and patients. PMID:12122814

  18. Retail applications of signature verification

    NASA Astrophysics Data System (ADS)

    Zimmerman, Thomas G.; Russell, Gregory F.; Heilper, Andre; Smith, Barton A.; Hu, Jianying; Markman, Dmitry; Graham, Jon E.; Drews, Clemens

    2004-08-01

    The dramatic rise in identity theft, the ever-pressing need to provide convenience in checkout services to attract and retain loyal customers, and the growing use of multi-function signature capture devices in the retail sector provide favorable conditions for the deployment of dynamic signature verification (DSV) in retail settings. We report on the development of a DSV system to meet the needs of the retail sector. We currently have a database of approximately 10,000 signatures collected from 600 subjects and forgers. Previous work at IBM on DSV has been merged and extended to achieve robust performance on pen position data available from commercial point-of-sale hardware, achieving equal error rates on skilled forgeries and authentic signatures of 1.5% to 4%.

  19. Lunar Impact Flash Locations

    NASA Technical Reports Server (NTRS)

    Moser, D. E.; Suggs, R. M.; Kupferschmidt, L.; Feldman, J.

    2015-01-01

    A bright impact flash detected by the NASA Lunar Impact Monitoring Program in March 2013 brought into focus the importance of determining the impact flash location. A process for locating the impact flash, and presumably its associated crater, was developed using commercially available software tools. The process was successfully applied to the March 2013 impact flash and put into production on an additional 300 impact flashes. The goal today: provide a description of the geolocation technique developed.

  20. Infrared horizon locator

    NASA Technical Reports Server (NTRS)

    Jalink, A., Jr. (Inventor)

    1973-01-01

    A precise method and apparatus for locating the earth's infrared horizon from space, independent of season and latitude, are described. First and second integrations of the earth's radiance profile are made from space to earth, with the second delayed with respect to the first. The second integration is multiplied by a predetermined constant R and then compared with the first integration. When the two are equal, the horizon is located.
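    A minimal numerical sketch of the locator logic just described, with a toy radiance profile and assumed values for the delay and the constant R:

      import numpy as np

      h = np.linspace(0.0, 60.0, 601)                      # scan altitude, km (toy)
      radiance = 1.0 / (1.0 + np.exp((h - 30.0) / 4.0))    # smooth falloff near 30 km

      I1 = np.cumsum(radiance)                             # first integration
      delay = 50                                           # delay in samples (assumed)
      I2 = np.concatenate([np.zeros(delay), I1[:-delay]])  # delayed second integration
      R = 1.15                                             # predetermined constant (assumed)

      cross = int(np.argmax(I1 <= R * I2))                 # first sample where the two agree
      print(f"horizon located at ~{h[cross]:.1f} km")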

  1. Object locating system

    DOEpatents

    Novak, J.L.; Petterson, B.

    1998-06-09

    A sensing system locates an object by sensing the object's effect on electric fields. The object's effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions. 12 figs.
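
    The abstract states only that mutual capacitance varies with object distance; the calibration curve below is a hypothetical model chosen purely for illustration, since a real sensor would invert an empirically calibrated curve.

    ```python
    # Hypothetical inverse model: assume the mutual capacitance falls off
    # with distance as C(d) = C_INF + DC / (1 + d/D0). All constants are
    # illustrative, not from the patent.
    C_INF, DC, D0 = 1.0, 0.5, 10.0   # pF, pF, mm

    def distance_from_capacitance(c_measured):
        """Invert the assumed calibration curve to recover distance (mm)."""
        return D0 * (DC / (c_measured - C_INF) - 1.0)

    print(distance_from_capacitance(1.25))  # -> 10.0 mm for the model above
    ```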

  2. Liquefied Natural Gas (LNG) dispenser verification device

    NASA Astrophysics Data System (ADS)

    Xiong, Maotao; Yang, Jie-bin; Zhao, Pu-jun; Yu, Bo; Deng, Wan-quan

    2013-01-01

    The working principle and calibration status of LNG (Liquefied Natural Gas) dispensers in China are introduced. To address the shortcomings of the weighing method for calibrating LNG dispensers, an LNG dispenser verification device has been developed. The device is based on the master-meter method and verifies LNG dispensers in the field. Experimental results indicate that the device offers stable performance, a high accuracy level, and flexible construction, reaching an internationally advanced level. The device should promote the development of the LNG dispenser industry in China and improve the technical level of LNG dispenser manufacturing.

  3. Design for Verification: Enabling Verification of High Dependability Software-Intensive Systems

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter C.; Penix, John; Markosian, Lawrence Z.; Koga, Dennis (Technical Monitor)

    2003-01-01

    Strategies to achieve confidence that high-dependability applications are correctly implemented include testing and automated verification. Testing deals mainly with a limited number of expected execution paths. Verification usually attempts to deal with a larger number of possible execution paths. While the impact of architecture design on testing is well known, its impact on most verification methods is not as well understood. The Design for Verification approach considers verification from the application development perspective, in which system architecture is designed explicitly according to the application's key properties. The D4V-hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the constraints on verification tools, such as the production of hand-crafted models and the limits on dynamic and static analysis caused by state space explosion.

  4. Production-worthy full chip image-based verification

    NASA Astrophysics Data System (ADS)

    Yu, Zongchang; Zhang, Youping; Xiao, Yanjun; Li, Wanyu

    2007-10-01

    At the 65nm technology node and below, with an ever-smaller process window, it is no longer sufficient to apply traditional model-based verification at only the nominal condition. Full-chip, full process-window verification has begun to be integrated into the OPC flow in 65nm production as a way of preventing potentially weak post-OPC designs from reaching the mask-making step. Thorough process-window analysis can be done by simulating wafer images at each of the corresponding focus and exposure dose conditions throughout the process window using an accurate and predictive FEM model. Alternatively, due to the strong correlation between the post-OPC design's sensitivity to dose variation and aerial image (AI) quality, the through-dose behavior of the post-OPC design can also be studied by carefully analyzing the AI. These types of analysis can be performed at multiple defocus conditions to assess the robustness of post-OPC designs with respect to focus and dose variations. In this paper, we study the AI-based approach to post-OPC verification in detail. For the metal layer, the primary metrics for verification are bridging, necking, and via coverage; here we are mainly interested in bridging and necking. The minimum AI value in the open space gives an indication of susceptibility to bridging in an over-dosed situation. Lower minimum intensity indicates less risk of bridging. Conversely, the maximum AI between the metal lines provides an indication of potential necking issues in an under-dosed situation. At times, however, in a complex 2D pattern area, the location at which the AI reaches either its maximum or minimum is not obvious. This requires a full-chip, dense image-based approach to fully explore the AI profile of the entire design. We have developed such an algorithm to find the AI maximums and minimums that bear true relevance to the bridging and necking analysis. In this paper, we apply the full
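
    As a rough illustration of the dense extremum search the abstract describes, the sketch below scans a 2D intensity array for local minima and maxima; a production flow would additionally mask the search by layout region (open space versus between-line), which is omitted here.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def aerial_image_extrema(ai, window=5):
        """Dense search for local minima and maxima of a simulated aerial
        image, the quantities the abstract ties to bridging (minimum AI in
        open space) and necking (maximum AI between lines)."""
        local_min = (ai == minimum_filter(ai, size=window))
        local_max = (ai == maximum_filter(ai, size=window))
        return np.argwhere(local_min), np.argwhere(local_max)

    # Toy 2D intensity map standing in for a full-chip simulated image.
    ai = np.random.default_rng(1).random((64, 64))
    minima, maxima = aerial_image_extrema(ai)
    print(len(minima), len(maxima))
    ```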

  5. Dosimetry investigation of MOSFET for clinical IMRT dose verification.

    PubMed

    Deshpande, Sudesh; Kumar, Rajesh; Ghadi, Yogesh; Neharu, R M; Kannan, V

    2013-06-01

    In IMRT, patient-specific dose verification is performed routinely at each centre. Simple and efficient dosimetry techniques play a very important role in routine clinical dosimetry QA. The MOSFET dosimeter offers several advantages over conventional dosimeters, such as its small detector size, immediate readout, immediate reuse, and multiple-point dose measurements. To use the MOSFET as a routine clinical dosimetry system for pre-treatment dose verification in IMRT, a comprehensive set of experiments was conducted to investigate its linearity, reproducibility, dose rate effect, and angular dependence for a 6 MV x-ray beam. The MOSFETs show a linear response with a linearity coefficient of 0.992 over a dose range of 35 cGy to 427 cGy. The reproducibility of the MOSFET was measured over ten consecutive irradiations in the dose range of 35 cGy to 427 cGy and was found to be within 4% up to 70 cGy and within 1.4% above 70 cGy. The dose rate effect was investigated over the range 100 MU/min to 600 MU/min; the response of the MOSFET varies from -1.7% to 2.1%. The angular responses were measured at 10-degree intervals from 90 to 270 degrees in an anticlockwise direction, normalized at gantry angle zero, and found to be in the range of 0.98 ± 0.014 to 1.01 ± 0.014. The MOSFETs were calibrated in a phantom that was later used for IMRT verification; the measured calibration coefficients were 1 mV/cGy and 2.995 mV/cGy in standard and high-sensitivity modes, respectively. The MOSFETs were then used for pre-treatment dose verification in IMRT, with nine dosimeters per patient measuring the dose in different planes. The average variation between calculated and measured dose at any location was within 3%. Dose verification using the MOSFET and an IMRT phantom was found to be quick, efficient, and well suited for a busy radiotherapy centre.
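
    Given the calibration coefficients reported above, converting a MOSFET threshold-voltage shift to dose is a one-line computation; the sketch below assumes a simple linear response, consistent with the reported 0.992 linearity coefficient.

    ```python
    # Sensitivities as quoted in the abstract: ~1 mV/cGy in standard mode
    # and ~2.995 mV/cGy in high-sensitivity mode.
    SENSITIVITY_MV_PER_CGY = {"standard": 1.0, "high": 2.995}

    def mosfet_dose_cgy(delta_mv, mode="high"):
        """Convert a MOSFET threshold-voltage shift (mV) to dose (cGy)."""
        return delta_mv / SENSITIVITY_MV_PER_CGY[mode]

    measured_mv = 599.0
    print(f"{mosfet_dose_cgy(measured_mv):.1f} cGy")  # ~200 cGy in high mode
    ```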

  6. Locating levels in tanks and silos using infrared thermography

    NASA Astrophysics Data System (ADS)

    Snell, John R., Jr.; Schwoegler, Matt

    2004-04-01

    Thermography is a powerful tool for locating or verifying levels in tanks and silos. But one could ask "Why bother?" All too often existing level indication instruments are simply not reliable or positive verification of instrumentation readings is required. When properly used, thermography can reveal not only the liquid/gas interface, but also sludge buildup and floating materials such as waxes and foams. Similar techniques can be used to locate levels and bridging problems in silos containing fluidized solids. This paper discusses the parameters and limitations that must be addressed, shows techniques that can be employed, and illustrates the discussions with numerous thermal images.

  7. Evaluation of Patient Residual Deviation and Its Impact on Dose Distribution for Proton Radiotherapy

    SciTech Connect

    Arjomandy, Bijan

    2011-10-01

    The residual deviations after final patient repositioning based on bony anatomy, and the impact of such deviations on proton dose distributions, were investigated. Digitally reconstructed radiographs (DRRs) and kilovoltage (kV) 'portal verification' images from 10 patients treated with passively scattered proton radiotherapy were used to estimate the residual deviations. These changes were then applied to the location of the isocenter points, which in effect moved the isocenter relative to the apertures and compensators. A composite verification plan was obtained and compared with the original clinical treatment plan to evaluate any changes in dose distributions. The residual deviations were fitted to a Gaussian distribution with μ = -0.9 ± 0.1 mm and σ = 2.55 ± 0.07 mm. The dose distribution showed under- and over-covered dose spots with complex dose distributions both in the target volumes and in the organs at risk; in some cases, this amounted to 63.5% above the intended clinical plan. Although patient positioning is carefully verified before treatment delivery and setup uncertainties are accounted for by using compensator smearing and aperture margins, a residual shift in a patient's position can considerably affect the dose distribution.
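
    Fitting residual deviations to a Gaussian, as done above, is a standard maximum-likelihood estimate; a minimal sketch on synthetic residuals drawn to match the reported μ and σ:

    ```python
    import numpy as np
    from scipy.stats import norm

    # Synthetic residual setup deviations (mm); the paper's fitted values
    # were mu = -0.9 mm and sigma = 2.55 mm.
    rng = np.random.default_rng(2)
    residuals = rng.normal(-0.9, 2.55, 500)

    mu, sigma = norm.fit(residuals)   # maximum-likelihood Gaussian fit
    print(f"mu = {mu:.2f} mm, sigma = {sigma:.2f} mm")
    ```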

  8. Verification of plunger cooling for glass forming in real working mode

    NASA Astrophysics Data System (ADS)

    Starý, Michal; Salač, Petr

    2012-04-01

    The article presents results of experimental verification of plunger water-cooling for glass forming in a working cycle set by a real working mode. The results are presented as a comparison of the temperature distributions across classical and optimized plunger surfaces. During the experiment, the glass was replaced by a tin bath.

  9. 7 CFR 272.8 - State income and eligibility verification system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... State income and eligibility verification system. 272.8 Section 272.8 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM REQUIREMENTS FOR PARTICIPATING STATE AGENCIES § 272.8 State income...

  10. Verification of thermal analysis codes for modeling solid rocket nozzles

    NASA Astrophysics Data System (ADS)

    Keyhani, M.

    1993-05-01

    One of the objectives of the Solid Propulsion Integrity Program (SPIP) at Marshall Space Flight Center (MSFC) is the development of thermal analysis codes capable of accurately predicting the temperature field, pore pressure field, and surface recession experienced by the decomposing polymers used as thermal barriers in solid rocket nozzles. The objective of this study is to provide a means of verification for thermal analysis codes developed for modeling flow and heat transfer in solid rocket nozzles. To meet this objective, a test facility was designed and constructed for measurement of the transient temperature field in a composite sample subjected to a constant heat flux boundary condition. The heating was provided via a thin steel foil 0.025 mm thick; the electrical circuit can provide a heating rate of 1800 W. The heater was sandwiched between two identical samples, thus ensuring equal power distribution between them. The samples were fitted with Type K thermocouples, and the exact locations of the thermocouples were determined via X-rays. The experiments were modeled via a one-dimensional code (UT1D) as a conduction and phase-change heat transfer process. Since the pyrolysis gas flow was in the direction normal to the heat flow, the numerical model could not account for the convective cooling effect of the pyrolysis gas flow; therefore, the predicted values in the decomposition zone are considered an upper estimate of the temperature. From the analysis of the experimental and numerical results the following are concluded: (1) The virgin and char specific heat data for FM 5055 as reported by SoRI cannot be used to obtain any reasonable agreement between the measured temperatures and the predictions; however, use of the virgin and char specific heat data given in the Acurex report produced good agreement for most of the measured temperatures. (2) Constant heat flux heating process can produce a much higher

  11. Verification of thermal analysis codes for modeling solid rocket nozzles

    NASA Technical Reports Server (NTRS)

    Keyhani, M.

    1993-01-01

    One of the objectives of the Solid Propulsion Integrity Program (SPIP) at Marshall Space Flight Center (MSFC) is the development of thermal analysis codes capable of accurately predicting the temperature field, pore pressure field, and surface recession experienced by the decomposing polymers used as thermal barriers in solid rocket nozzles. The objective of this study is to provide a means of verification for thermal analysis codes developed for modeling flow and heat transfer in solid rocket nozzles. To meet this objective, a test facility was designed and constructed for measurement of the transient temperature field in a composite sample subjected to a constant heat flux boundary condition. The heating was provided via a thin steel foil 0.025 mm thick; the electrical circuit can provide a heating rate of 1800 W. The heater was sandwiched between two identical samples, thus ensuring equal power distribution between them. The samples were fitted with Type K thermocouples, and the exact locations of the thermocouples were determined via X-rays. The experiments were modeled via a one-dimensional code (UT1D) as a conduction and phase-change heat transfer process. Since the pyrolysis gas flow was in the direction normal to the heat flow, the numerical model could not account for the convective cooling effect of the pyrolysis gas flow; therefore, the predicted values in the decomposition zone are considered an upper estimate of the temperature. From the analysis of the experimental and numerical results the following are concluded: (1) The virgin and char specific heat data for FM 5055 as reported by SoRI cannot be used to obtain any reasonable agreement between the measured temperatures and the predictions; however, use of the virgin and char specific heat data given in the Acurex report produced good agreement for most of the measured temperatures. (2) Constant heat flux heating process can produce a much higher
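
    Away from the decomposition zone, the experiment described above is well approximated by 1D transient conduction with a constant-flux front face and an insulated back face. A minimal explicit finite-difference sketch follows, with placeholder material properties rather than FM 5055 data and with no pyrolysis or phase-change terms.

    ```python
    import numpy as np

    # Placeholder material and heater values (illustrative, not FM 5055).
    k, rho, cp = 0.8, 1450.0, 1200.0          # W/m-K, kg/m^3, J/kg-K
    q, L, nx = 5.0e4, 0.02, 101               # W/m^2 heater flux, 2 cm slab
    alpha = k / (rho * cp)
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha                  # satisfies explicit stability
    T = np.full(nx, 300.0)                    # initial temperature, K

    for _ in range(20000):
        Tn = T.copy()
        # Interior nodes: standard explicit conduction update.
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2])
        # Heated face: ghost-node treatment of the constant-flux boundary.
        T[0] = Tn[0] + alpha * dt / dx**2 * (2*Tn[1] - 2*Tn[0]) + 2*q*dt/(rho*cp*dx)
        T[-1] = T[-2]                         # insulated back face

    print(f"heated-face temperature after {20000*dt:.1f} s: {T[0]:.1f} K")
    ```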

  12. An integrated user-oriented laboratory for verification of digital flight control systems: Features and capabilities

    NASA Technical Reports Server (NTRS)

    Defeo, P.; Doane, D.; Saito, J.

    1982-01-01

    A Digital Flight Control Systems Verification Laboratory (DFCSVL) has been established at NASA Ames Research Center. This report describes the major elements of the laboratory, the research activities that can be supported in the area of verification and validation of digital flight control systems (DFCS), and the operating scenarios within which these activities can be carried out. The DFCSVL consists of a palletized dual-dual flight-control system linked to a dedicated PDP-11/60 processor. Major software support programs are hosted in a remotely located UNIVAC 1100 accessible from the PDP-11/60 through a modem link. Important features of the DFCSVL include extensive hardware and software fault insertion capabilities, a real-time closed loop environment to exercise the DFCS, an integrated set of software verification tools, and a user-oriented interface to all the resources and capabilities.

  13. ETV - ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) - RISK MANAGEMENT

    EPA Science Inventory

    In October 1995, the Environmental Technology Verification (ETV) Program was established by EPA. The goal of ETV is to provide credible performance data for commercial-ready environmental technologies to speed their implementation for the benefit of vendors, purchasers, permitter...

  14. The PASCAL-HDM Verification System

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The PASCAL-HDM verification system is described. This system supports the mechanical generation of verification conditions from PASCAL programs and HDM-SPECIAL specifications using the Floyd-Hoare axiomatic method. Tools are provided to parse programs and specifications, check their static semantics, generate verification conditions from Hoare rules, and translate the verification conditions appropriately for proof using the Shostak Theorem Prover. The differences between standard PASCAL and the language handled by this system are explained; these consist mostly of restrictions to the standard language definition, the only extensions or modifications being the addition of specifications to the code and a change requiring references to functions of no arguments to have empty parentheses.

  15. Engineering drawing field verification program. Revision 3

    SciTech Connect

    Ulk, P.F.

    1994-10-12

    Safe, efficient operation of waste tank farm facilities depends in part on the availability of accurate, up-to-date plant drawings. Accurate plant drawings are also required in support of facility upgrades and future engineering remediation projects. This supporting document establishes the procedure for performing a visual field verification of engineering drawings, the degree of visual observation being performed, and documentation of the results. A copy of the drawing attesting to the degree of visual observation will be paginated into the released Engineering Change Notice (ECN) documenting the field verification, for future retrieval and reference. All waste tank farm essential and support drawings within the scope of this program will be converted from manual to computer-aided drafting (CAD) drawings. A permanent reference to the field verification status will be placed along the right border of the CAD-converted drawing, referencing the revision level at which the visual verification was performed and documented.

  16. VERIFICATION OF GLOBAL CLIMATE CHANGE MITIGATION TECHNOLOGIES

    EPA Science Inventory

    This is a continuation of independent performance evaluations of environmental technologies under EPA's Environmental Technology Verification Program. Emissions of some greenhouse gases, most notably methane, can be controlled profitably now, even in the absence of regulations. ...

  17. MAMA Software Features: Quantification Verification Documentation-1

    SciTech Connect

    Ruggiero, Christy E.; Porter, Reid B.

    2014-05-21

    This document reviews the verification of the basic shape quantification attributes in the MAMA software against hand calculations in order to show that the calculations are implemented mathematically correctly and give the expected quantification results.

  18. U.S. Environmental Technology Verification Program

    EPA Science Inventory

    Overview of the U.S. Environmental Technology Verification Program (ETV), the ETV Greenhouse Gas Technology Center, and energy-related ETV projects. Presented at the Department of Energy's National Renewable Energy Laboratory in Boulder, Colorado on June 23, 2008.

  19. HDM/PASCAL Verification System User's Manual

    NASA Technical Reports Server (NTRS)

    Hare, D.

    1983-01-01

    The HDM/Pascal verification system is a tool for proving the correctness of programs written in PASCAL and specified in the Hierarchical Development Methodology (HDM). This document assumes an understanding of PASCAL, HDM, program verification, and the STP system. The steps toward verification which this tool provides are parsing programs and specifications, checking the static semantics, and generating verification conditions. Some support functions are provided, such as maintaining a database, status management, and editing. The system runs under the TOPS-20 and TENEX operating systems and is written in INTERLISP; however, no knowledge of these operating systems or of INTERLISP is assumed. The system requires three executable files: HDMVCG, PARSE, and STP. Optionally, the EMACS editor should be present on the system for the editing functions to work. The file HDMVCG is invoked to run the system; the files PARSE and STP are used as lower forks to perform the functions of parsing and proving.

  20. Calibration and verification of environmental models

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Weinberg, N.; Hiser, H.

    1976-01-01

    The problems of calibration and verification of mesoscale models used for investigating power plant discharges are considered. The value of remote sensors for data acquisition is discussed as well as an investigation of Biscayne Bay in southern Florida.

  1. Verification timer for AECL 780 Cobalt unit.

    PubMed

    Smathers, J B; Holly, F E

    1984-05-01

    To verify the proper time setting of the motorized run-down timer of an AECL 780 cobalt unit, a digital timer that can be added to the system for under $300 is described. PMID:6735762

  2. Electronic Verification at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Johnson, T. W.

    1995-01-01

    This document reviews some current applications of Electronic Verification and the benefits such applications are providing the Kennedy Space Center (KSC). It also previews some new technologies, including statistics regarding performance and possible utilization of the technology.

  3. THE EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) PROGRAM

    EPA Science Inventory

    The Environmental Protection Agency (EPA) instituted the Environmental Technology Verification Program--or ETV--to verify the performance of innovative technical solutions to problems that threaten human health or the environment. ETV was created to substantially accelerate the e...

  4. ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) PROGRAM: FUEL CELLS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Environmental Technology Verification (ETV) Program evaluates the performance of innovative air, water, pollution prevention and monitoring technologies that have the potential to improve human health and the environment. This techno...

  5. ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) PROGRAM: STORMWATER TECHNOLOGIES

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Environmental Technology Verification (ETV) program evaluates the performance of innovative air, water, pollution prevention and monitoring technologies that have the potential to improve human health and the environment. This techn...

  6. SU-E-J-138: On the Ion Beam Range and Dose Verification in Hadron Therapy Using Sound Waves

    SciTech Connect

    Fourkal, E; Veltchev, I; Gayou, O; Nahirnyak, V

    2015-06-15

    Purpose: Accurate range verification is of great importance to fully exploit the potential benefits of ion beam therapies. Current research efforts on this topic include the use of PET imaging of induced activity and the detection of emerging prompt gamma rays or secondary particles. It has also been suggested recently to detect the ultrasound waves emitted through the ion energy absorption process. The energy absorbed in a medium is dissipated as heat, followed by thermal expansion that leads to the generation of acoustic waves. By using an array of ultrasound transducers, the precise spatial location of the Bragg peak can be obtained. The shape and intensity of the emitted ultrasound pulse depend on several variables, including the absorbed energy and the pulse length. The main objective of this work is to understand how the ultrasound wave amplitude and shape depend on the initial ion energy and intensity; this would help guide future experiments in ionoacoustic imaging. Methods: The absorbed energy density for protons and carbon ions of different energies and field sizes was obtained using the FLUKA Monte Carlo code. Subsequently, the system of coupled equations for temperature and pressure was solved for different ion pulse intensities and lengths to obtain the pressure wave shape, amplitude, and spectral distribution. Results: The calculations show that the excited pressure wave amplitude is proportional to the absorbed energy density and, for longer ion pulses, inversely proportional to the ion pulse duration. It is also shown that the resulting ionoacoustic pressure distribution depends on both the ion pulse duration and the time between pulses. Conclusion: Bragg peak localization using the ionoacoustic signal may eventually lead to the development of an alternative imaging method with sub-millimeter resolution. It may also open a way for in-vivo dose verification from the measured acoustic signal.
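
    The scalings reported in the results are consistent with the standard thermoacoustic wave equation from the photoacoustics literature, reproduced here for reference (this is background, not an equation quoted from the abstract):

    ```latex
    % Thermoacoustic wave equation: pressure p driven by the heating rate
    % H (absorbed power per unit volume), with sound speed c, volumetric
    % thermal expansion coefficient \beta, and specific heat C_p.
    \left( \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} \right) p(\mathbf{r},t)
      = -\frac{\beta}{C_p}\,\frac{\partial H(\mathbf{r},t)}{\partial t}
    ```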

  7. Marine cable location system

    SciTech Connect

    Zachariadis, R.G.

    1984-05-01

    An acoustic positioning system locates a marine cable at an exploration site, such cable employing a plurality of hydrophones at spaced-apart positions along the cable. A marine vessel measures water depth to the cable as the vessel passes over the cable and interrogates the hydrophones with sonar pulses along a slant range as the vessel travels in a parallel and horizontally offset path to the cable. The location of the hydrophones is determined from the recordings of water depth and slant range.
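
    The geometry implied above reduces, for a locally flat seafloor, to a right triangle per hydrophone: the vertical leg is the measured water depth and the hypotenuse is the slant range. A minimal sketch with illustrative numbers:

    ```python
    import math

    def horizontal_offset(slant_range, depth):
        """Recover the horizontal offset from vessel to hydrophone given
        the measured slant range and the water depth to the cable (all in
        metres). Simple right-triangle geometry; assumes a flat seafloor."""
        return math.sqrt(slant_range**2 - depth**2)

    print(horizontal_offset(slant_range=130.0, depth=120.0))  # -> 50.0 m
    ```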

  8. Cable fault locator research

    NASA Astrophysics Data System (ADS)

    Cole, C. A.; Honey, S. K.; Petro, J. P.; Phillips, A. C.

    1982-07-01

    Cable fault location and the construction of four field test units are discussed. Swept frequency sounding of mine cables with RF signals was the technique most thoroughly investigated. The swept frequency technique is supplemented with a form of moving target indication to provide a method for locating the position of a technician along a cable and relative to a suspected fault. Separate, more limited investigations involved high voltage time domain reflectometry and acoustical probing of mine cables. Particular areas of research included microprocessor-based control of the swept frequency system, a microprocessor based fast Fourier transform for spectral analysis, and RF synthesizers.

  9. RFI emitter location techniques

    NASA Technical Reports Server (NTRS)

    Rao, B. L. J.

    1973-01-01

    The possibility of using Doppler techniques to determine the location of ground-based emitters causing radio frequency interference with low-orbiting satellites is discussed. An error analysis indicates that it is possible to find the emitter location within an error range of 2 n.mi. The parameters that determine the required satellite receiver characteristics are discussed briefly, along with the non-real-time signal processing that may be used to obtain the Doppler curve. Finally, the required characteristics of the satellite antenna are analyzed.
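
    A standard relation behind such Doppler fixes: near closest approach the received frequency varies approximately linearly with time, and the slope of the Doppler curve encodes the slant distance. The sketch below uses illustrative numbers, not values from the report.

    ```python
    # Near closest approach, f(t) ~ f0 * (1 - v^2 (t - t0) / (c d)), so the
    # Doppler slope df/dt = -f0 v^2 / (c d) yields the slant distance d.
    C = 3.0e8  # m/s

    def slant_distance(f0_hz, sat_speed_ms, doppler_slope_hz_per_s):
        """d = f0 * v^2 / (c * |df/dt|) at the time of closest approach."""
        return f0_hz * sat_speed_ms**2 / (C * abs(doppler_slope_hz_per_s))

    # 400 MHz emitter, ~7.5 km/s orbital speed, observed slope -75 Hz/s:
    print(f"{slant_distance(400e6, 7.5e3, -75.0)/1e3:.0f} km")  # -> 1000 km
    ```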

  10. Handbook: Design of automated redundancy verification

    NASA Technical Reports Server (NTRS)

    Ford, F. A.; Hasslinger, T. W.; Moreno, F. J.

    1971-01-01

    The use of the handbook is discussed and the design progress is reviewed. A description of the problem is presented, and examples are given to illustrate the necessity for redundancy verification, along with the types of situations to which it is typically applied. Reusable space vehicles, such as the space shuttle, are recognized as being significant in the development of the automated redundancy verification problem.

  11. The NPARC Alliance Verification and Validation Archive

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Dudek, Julianne C.; Tatum, Kenneth E.

    2000-01-01

    The NPARC Alliance (National Project for Applications oriented Research in CFD) maintains a publicly-available, web-based verification and validation archive as part of the development and support of the WIND CFD code. The verification and validation methods used for the cases attempt to follow the policies and guidelines of the ASME and AIAA. The emphasis is on air-breathing propulsion flow fields with Mach numbers ranging from low-subsonic to hypersonic.

  12. A verification library for multibody simulation software

    NASA Technical Reports Server (NTRS)

    Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.

    1989-01-01

    A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on RRC robot arm and CASE backhoe validation and a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamics simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.

  13. Transmutation Fuel Performance Code Thermal Model Verification

    SciTech Connect

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.

  14. Source Identification and Location Techniques

    NASA Technical Reports Server (NTRS)

    Weir, Donald; Bridges, James; Agboola, Femi; Dougherty, Robert

    2001-01-01

    Mr. Weir presented source location results obtained from an engine test as part of the Engine Validation of Noise Reduction Concepts program. Two types of microphone arrays were used in this program to determine the jet noise source distribution for the exhaust from a 4.3-bypass-ratio turbofan engine: a linear array of 16 microphones located on a 25 ft sideline, and a 103-microphone 3-D "cage" array in the near field of the jet. Data were obtained from a baseline nozzle and from numerous nozzle configurations using chevrons and/or tabs to reduce the jet noise. Mr. Weir presented data from two configurations: the baseline nozzle and a nozzle with chevrons on both the core and bypass nozzles. This chevron configuration had achieved a jet noise reduction of 4 EPNdB in small-scale tests conducted at the Glenn Research Center. IR imaging showed that the chevrons produced significant improvements in mixing and greatly reduced the length of the jet potential core. Source location data from the 1-D phased array showed a shift of the noise sources toward the nozzle and clear reductions of the sources due to the noise reduction devices. Data from the 3-D array showed a single source at a frequency of 125 Hz located several diameters downstream from the nozzle exit. At 250 and 400 Hz, multiple, periodically spaced sources appeared to exist downstream of the nozzle. The trend of source location moving toward the nozzle exit with increasing frequency was also observed, and the 3-D array data likewise showed a reduction in source strength with the addition of chevrons. The overall trend of source location with frequency was compared for the two arrays and with classical experience, and similar trends were observed. Although overall trends with frequency and the addition of suppression devices were consistent between the data from the 1-D and 3-D arrays, a comparison of the details of the inferred source locations did show differences.

  15. Earthquake location in island arcs

    USGS Publications Warehouse

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  16. Cleanliness verification process at Martin Marietta Astronautics

    NASA Astrophysics Data System (ADS)

    King, Elizabeth A.; Giordano, Thomas J.

    1994-06-01

    The Montreal Protocol and the 1990 Clean Air Act Amendments mandate that CFC-113, other chlorofluorocarbons (CFCs), and 1,1,1-trichloroethane (TCA) be banned from production after December 31, 1995. In response to increasing pressures, the Air Force has formulated policy that prohibits purchase of these solvents for Air Force use after April 1, 1994. In response to the Air Force policy, Martin Marietta Astronautics is in the process of eliminating all CFCs and TCA from use at the Engineering Propulsion Laboratory (EPL), located on Air Force property PJKS. Gross and precision cleaning operations are currently performed on spacecraft components at EPL. The final step of the operation is a rinse with a solvent, typically CFC-113, which is then analyzed for nonvolatile residue (NVR), particle count, and total filterable solids (TFS) to determine the cleanliness of the parts. The CFC-113 used in this process must be replaced in response to the above policies. Martin Marietta Astronautics, under contract to the Air Force, is currently evaluating and testing alternatives for a cleanliness verification solvent; completion of testing is scheduled for May 1994. Evaluation of the alternative solvents follows a three-step approach. The first step is initial testing of solvents identified from literature searches and analysis. The second step is detailed testing of the top candidates from the initial test phase. The final step is implementation and validation of the chosen alternative(s). Testing will include contaminant removal, nonvolatile residue, material compatibility, and propellant compatibility. Typical materials and contaminants will be tested with a wide range of solvents. Final results of the three steps will be presented, as well as the implementation plan for solvent replacement.

  17. Verification and Validation in Computational Fluid Dynamics

    SciTech Connect

    OBERKAMPF, WILLIAM L.; TRUCANO, TIMOTHY G.

    2002-03-01

    Verification and validation (V and V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V and V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V and V, and develops a number of extensions to existing ideas. The review of the development of V and V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V and V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized.

  18. Investigation of an implantable dosimeter for single-point water equivalent path length verification in proton therapy

    PubMed Central

    Lu, Hsiao-Ming; Mann, Greg; Cascio, Ethan

    2010-01-01

    Purpose: In vivo range verification in proton therapy is highly desirable. A recent study suggested that it is feasible to use point dose measurements for in vivo beam range verification in proton therapy, provided that the spread-out Bragg peak (SOBP) dose distribution is delivered in a different and rather unconventional manner. In this work, the authors investigate the possibility of using a commercial implantable dosimeter with wireless readout for this particular application. Methods: The traditional proton treatment technique delivers all the Bragg peaks required for an SOBP field in a single sequence, producing a constant dose plateau across the target volume. As a result, a point dose measurement anywhere in the target volume will produce the same value, providing no information regarding the water equivalent path length to the point of measurement. However, the same constant dose distribution can be achieved by splitting the field into a complementary pair of subfields, producing two oppositely “sloped” depth-dose distributions. The ratio between the two distributions can be a sensitive function of depth, and measuring this ratio at a point inside the target volume can provide the water equivalent path length to the dosimeter location. Two types of field splits were used in the experiment: one achieved by beam current modulation, and the other by manipulating the location and width of the beam pulse relative to the range modulator track. Eight MOSFET-based implantable dosimeters at four different depths in a water tank were used to measure the dose ratios for these field pairs. A method was developed to correct for the effect of the well-known LET dependence of the MOSFET detectors on the depth-dose distributions using the columnar recombination model. The LET-corrected dose ratios were used to derive the water equivalent path lengths to the dosimeter locations, which were compared to physical measurements. Results: The implantable
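
    A toy model of the split-field idea described above: two complementary sloped subfields sum to a flat plateau while their ratio varies monotonically with depth, so a single point ratio measurement can be inverted to a water equivalent depth. The linear profiles below are illustrative, not the paper's measured distributions.

    ```python
    import numpy as np

    z = np.linspace(10.0, 16.0, 601)             # depth (cm) across the SOBP
    d1 = np.interp(z, [10.0, 16.0], [0.3, 0.7])  # "up-sloped" subfield
    d2 = 1.0 - d1                                # complementary subfield; sum is flat

    def depth_from_ratio(measured_ratio):
        """Invert the known (monotonic) ratio-vs-depth curve at the
        dosimeter location to recover the water equivalent depth."""
        return float(np.interp(measured_ratio, d1 / d2, z))

    print(f"{depth_from_ratio(1.0):.2f} cm")     # ratio 1 -> mid-SOBP, 13 cm
    ```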

  19. 37 CFR 382.16 - Verification of royalty distributions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Satellite Digital Audio Radio Services § 382.16... CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF...

  20. 37 CFR 382.16 - Verification of royalty distributions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Satellite Digital Audio Radio Services § 382.16... CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF...

  1. 37 CFR 382.16 - Verification of royalty distributions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Satellite Digital Audio Radio Services § 382.16... CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF...

  2. 37 CFR 382.16 - Verification of royalty distributions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Satellite Digital Audio Radio Services § 382.16... CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF...

  3. 37 CFR 382.16 - Verification of royalty distributions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Satellite Digital Audio Radio Services § 382.16... CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF...

  4. Particle impact location detector

    NASA Technical Reports Server (NTRS)

    Auer, S. O.

    1974-01-01

    The detector includes delay lines connected to each detector surface strip. When several particles strike different strips simultaneously, the pulses generated by each strip are time-delayed by certain intervals, and the delay time for each strip is known. By observing the time delay of a pulse, it is possible to identify the strip struck by a particle.
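
    A minimal sketch of the delay-line readout logic: with a known per-strip delay increment, the measured pulse delay indexes the struck strip. The timing values are illustrative, not from the patent.

    ```python
    # Assumed delay increment between adjacent strips (illustrative).
    TAU_NS = 5.0

    def struck_strip(arrival_ns, impact_ns=0.0):
        """Map a measured pulse delay to the strip number that produced it."""
        return round((arrival_ns - impact_ns) / TAU_NS)

    print(struck_strip(35.0))  # pulse delayed 35 ns -> strip 7
    ```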

  5. LOCATING AREAS OF CONCERN

    EPA Science Inventory

    A simple method to locate changes in vegetation cover can be used to identify areas under stress. The method requires only inexpensive NDVI data. The use of remotely sensed data is far more cost-effective than field studies and can be performed more quickly. Local knowledg...

  6. Location of Spirit's Home

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image shows where Earth would set on the martian horizon from the perspective of the Mars Exploration Rover Spirit if it were facing northwest atop its lander at Gusev Crater. Earth cannot be seen in this image, but engineers have mapped its location. This image mosaic was taken by the hazard-identification camera onboard Spirit.

  7. Development of advanced seal verification

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Kosten, Susan E.; Abushagur, Mustafa A.

    1992-01-01

    The purpose of this research is to develop a technique to monitor and ensure seal integrity with a sensor that has no active elements to burn out during a long-duration activity, such as a leakage test or especially a mission in space. The original concept is that by implementing fiber optic sensors, changes in the integrity of a seal can be monitored in real time, and at no time should the optical fiber sensor fail. The electrical components that provide optical excitation and detection through the fiber are not part of the seal; hence, if these electrical components fail, they can be easily changed without breaking the seal. The optical connections required for the concept to work do present a functional problem to work out. The utility of the optical fiber sensor for seal monitoring should be general enough that the degradation of a seal can be determined before catastrophic failure occurs and appropriate action taken. Two parallel efforts were performed to determine the feasibility of using optical fiber sensors for seal verification. In one study, interferometric measurements of the mechanical response of the optical fiber sensors to seal integrity were investigated. In a second study, the optical fiber was fitted to a typical vacuum chamber and feasibility studies on microbend experiments in the vacuum chamber were performed. An attempt was also made to quantify the amount of pressure actually being applied to the optical fiber using the Algor finite element analysis software.

  8. Learning Assumptions for Compositional Verification

    NASA Technical Reports Server (NTRS)

    Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.

  9. Cold Flow Verification Test Facility

    SciTech Connect

    Shamsi, A.; Shadle, L.J.

    1996-12-31

    The cold flow verification test facility consists of a 15-foot-high, 3-foot-diameter domed vessel made of clear acrylic in two flanged sections. The unit can operate at pressures up to 14 psig. The internals include a 10-foot-high jetting fluidized bed, a cylindrical baffle that hangs from the dome, and a rotating grate for control of continuous solids removal. The fluid bed is continuously fed solids (20 to 150 lb/hr) through a central nozzle made up of concentric pipes. It can be configured as either a half or a full cylinder of various dimensions. The fluid bed has flow loops for separate air flow control for conveying solids (inner jet, 500 to 100000 scfh), make-up into the jet (outer jet, 500 to 8000 scfh), spargers in the solids removal annulus (100 to 2000 scfh), and 6 air jets (20 to 200 scfh) on the sloping conical grid. Additional air (500 to 10000 scfh) can be added to the top of the dome and under the rotating grate. The outer vessel, the hanging cylindrical baffle or skirt, and the rotating grate can be used to study issues concerning moving-bed reactors. There is ample allowance for access and instrumentation in the outer shell. Furthermore, this facility is available for future Cooperative Research and Development Agreements (CRADAs) to study issues and problems associated with fluid- and fixed-bed reactors. The design allows testing of different dimensions and geometries.

  10. Verification of excess defense material

    SciTech Connect

    Fearey, B.L.; Pilat, J.F.; Eccleston, G.W.; Nicholas, N.J.; Tape, J.W.

    1997-12-01

    The international community in the post-Cold War period has expressed an interest in the International Atomic Energy Agency (IAEA) using its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials under some type of international inspections raises the prospect of using IAEA safeguards approaches for monitoring excess materials, which include both classified and unclassified materials. Although the IAEA has suggested the need to address inspections of both types of materials, the most troublesome and potentially difficult problems involve approaches to the inspection of classified materials. The key issue for placing classified nuclear components and materials under IAEA safeguards is the conflict between these traditional IAEA materials accounting procedures and the US classification laws and nonproliferation policy designed to prevent the disclosure of critical weapon-design information. Possible verification approaches to classified excess defense materials could be based on item accountancy, attributes measurements, and containment and surveillance. Such approaches are not wholly new; in fact, they are quite well established for certain unclassified materials. Such concepts may be applicable to classified items, but the precise approaches have yet to be identified, fully tested, or evaluated for technical and political feasibility, or for their possible acceptability in an international inspection regime. Substantial work remains in these areas. This paper examines many of the challenges presented by international inspections of classified materials.

  11. Warm forming simulation of titanium tailor-welded blanks with experimental verification

    SciTech Connect

    Lai, C. P.; Chan, L. C.; Chow, C. L.

    2007-05-17

    This paper presents the simulation of the forming process of Ti-TWBs (titanium tailor-welded blanks) at elevated temperatures using finite element analysis to determine the optimum forming conditions. For verification of the simulation results, the titanium alloy Ti-6Al-4V was selected in the first instance to prepare the Ti-TWB specimens, with thickness combinations of 0.7 mm/1.0 mm and widths of 20 mm, 90 mm, and 110 mm. A specific tooling system with a temperature control device was developed for forming Ti-TWBs at 550 °C, and a cylindrical punch of 50 mm diameter was designed and manufactured. Different forming parameters (i.e., the traveling distance of the punch, the stroke, and the time of each forming process) and material characteristics at various temperatures were measured. In addition, true stress-strain values from tensile tests, as well as the major and minor strain distributions of formed Ti-TWBs at elevated temperatures from the Swift forming test, were obtained and applied as input to the finite element program. The simulation results identify failure locations and the Limit Dome Height (LDH) of Ti-TWBs at elevated temperatures and were compared with the measured ones. Finally, the optimum forming conditions of Ti-TWBs were determined based on the experimentally verified simulation results.

  12. Experimental validation of a commercial 3D dose verification system for intensity-modulated arc therapies

    NASA Astrophysics Data System (ADS)

    Boggula, Ramesh; Lorenz, Friedlieb; Mueller, Lutz; Birkner, Mattias; Wertz, Hansjoerg; Stieler, Florian; Steil, Volker; Lohr, Frank; Wenz, Frederik

    2010-10-01

    We validate the dosimetric performance of COMPASS®, a novel 3D quality assurance system for verification of volumetric-modulated arc therapy (VMAT) treatment plans that can correlate the delivered dose to the patient's anatomy, taking into account the tissue inhomogeneity. The accuracy of treatment delivery was assessed by the COMPASS® for 12 VMAT plans, and the resulting assessments were evaluated using an ionization chamber and film measurements. Dose-volume relationships were evaluated by the COMPASS® for three additional treatment plans and these were used to verify the accuracy of treatment planning dose calculations. The results matched well between COMPASS® and measurements for the ionization chamber (<=3%) and film (73-99% for gamma(3%/3 mm) < 1 and 98-100% for gamma(5%/5 mm) < 1) for the phantom plans. Differences in dose-volume statistics for the average dose to the PTV were within 2.5% for three treatment plans. For the structures located in the low-dose region, a maximum difference of <9% was observed. In its current implementation, the system could measure the delivered dose with sufficient accuracy and could project the 3D dose distribution directly on the patient's anatomy. Slight deviations were found for large open fields. These could be minimized by improving the COMPASS® in-built beam model.
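
    The pass rates above use the standard gamma-index criterion (in the Low et al. formulation); for reference, a measured point r_m passes when γ ≤ 1 under the chosen distance-to-agreement and dose-difference tolerances:

    ```latex
    % Gamma index: a measured point r_m passes if some calculated point r_c
    % lies within the combined distance-to-agreement (\Delta d, e.g. 3 mm)
    % and dose-difference (\Delta D, e.g. 3%) tolerance, i.e. \gamma \le 1.
    \gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c} \sqrt{
        \frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^2}{\Delta d^{\,2}}
      + \frac{\bigl(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\bigr)^2}{\Delta D^2} }
    ```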

  13. A Probabilistic Mass Estimation Algorithm for a Novel 7- Channel Capacitive Sample Verification Sensor

    NASA Technical Reports Server (NTRS)

    Wolf, Michael

    2012-01-01

    A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of the channel's variance.
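
    The blending step described above is inverse-variance weighting, i.e., a product of Gaussians; a minimal sketch with hypothetical channel estimates:

    ```python
    import numpy as np

    def fuse_gaussian_estimates(means, variances):
        """Combine per-channel Gaussian mass estimates into one Gaussian,
        weighting each channel by the inverse of its calibration variance,
        as the abstract describes."""
        w = 1.0 / np.asarray(variances)
        var = 1.0 / w.sum()
        mean = var * (w * np.asarray(means)).sum()
        return mean, var

    # Seven hypothetical channel estimates (grams) with differing confidence.
    means = [101.0, 98.5, 100.2, 99.0, 102.3, 100.8, 97.9]
    variances = [4.0, 1.0, 2.0, 1.5, 9.0, 2.5, 3.0]
    m, v = fuse_gaussian_estimates(means, variances)
    print(f"mass = {m:.1f} g, sigma = {v**0.5:.2f} g")
    ```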

  14. National Verification System of National Meteorological Center , China

    NASA Astrophysics Data System (ADS)

    Zhang, Jinyan; Wei, Qing; Qi, Dan

    2016-04-01

    The Product Quality Verification Division for official weather forecasting of China was founded in April 2011. It is affiliated with the Forecast System Laboratory (FSL), National Meteorological Center (NMC), China. There are three employees in this department; I am one of them and am in charge of the Product Quality Verification Division at NMC, China. After five years of construction, an integrated real-time National Verification System of NMC, China has been established. At present, its primary roles include: 1) verifying the official weather forecasting quality of NMC, China; 2) verifying the official city weather forecasting quality of the Provincial Meteorological Bureaus; 3) evaluating the forecasting quality of each forecaster in NMC, China. To verify the official weather forecasting quality of NMC, China, we have developed:
    • Grid QPF verification module (including upscaling)
    • Grid temperature, humidity and wind forecast verification module
    • Severe convective weather forecast verification module
    • Typhoon forecast verification module
    • Disaster forecast verification module
    • Disaster warning verification module
    • Medium and extended period forecast verification module
    • Objective elements forecast verification module
    • Ensemble precipitation probabilistic forecast verification module
    To verify the official city weather forecasting quality of the Provincial Meteorological Bureaus, we have developed:
    • City elements forecast verification module
    • Public heavy rain forecast verification module
    • City air quality forecast verification module
    To evaluate forecasting quality for each forecaster in NMC, China, we have developed:
    • Off-duty forecaster QPF practice evaluation module
    • QPF evaluation module for forecasters
    • Severe convective weather forecast evaluation module
    • Typhoon track forecast evaluation module for forecasters
    • Disaster warning evaluation module for forecasters
    • Medium and extended period forecast evaluation module
    The further

  15. Substance Abuse Treatment Facility Locator

    MedlinePlus

    A locator for finding drug and alcohol treatment facilities by city, county, or state.

  16. Experimental measurement-device-independent verification of quantum steering

    NASA Astrophysics Data System (ADS)

    Kocsis, Sacha; Hall, Michael J. W.; Bennet, Adam J.; Saunders, Dylan J.; Pryde, Geoff J.

    2015-01-01

    Bell non-locality between distant quantum systems—that is, joint correlations which violate a Bell inequality—can be verified without trusting the measurement devices used, nor those performing the measurements. This leads to unconditionally secure protocols for quantum information tasks such as cryptographic key distribution. However, complete verification of Bell non-locality requires high detection efficiencies, and is not robust to typical transmission losses over long distances. In contrast, quantum or Einstein-Podolsky-Rosen steering, a weaker form of quantum correlation, can be verified for arbitrarily low detection efficiencies and high losses. The cost is that current steering-verification protocols require complete trust in one of the measurement devices and its operator, allowing only one-sided secure key distribution. Here we present measurement-device-independent steering protocols that remove this need for trust, even when Bell non-locality is not present. We experimentally demonstrate this principle for singlet states and states that do not violate a Bell inequality.

  17. RFI and SCRIMP Model Development and Verification

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Sayre, Jay

    2000-01-01

    Vacuum-Assisted Resin Transfer Molding (VARTM) processes are becoming promising technologies for manufacturing primary composite structures in the aircraft industry as well as in infrastructure. A great deal of work remains to be done to reduce the costly trial-and-error methods of VARTM processing currently in practice. A computer simulation model of the VARTM process would provide a cost-effective tool for manufacturing composites by this technique. Therefore, the objective of this research was to modify an existing three-dimensional Resin Film Infusion (RFI)/Resin Transfer Molding (RTM) model to include VARTM simulation capabilities and to verify the model against the fabrication of aircraft structural composites. An additional objective was to use the VARTM model as a process analysis tool that would enable the user to configure the best process for manufacturing quality composites. Experimental verification of the model was performed by processing several flat composite panels. The parameters verified included flow front patterns and infiltration times. The flow front patterns were determined to be qualitatively accurate, while the simulated infiltration times overpredicted the experimental times by 8 to 10%. Capillary and gravitational forces were incorporated into the existing RFI/RTM model in order to simulate VARTM processing physics more accurately. The theoretical capillary pressure was capable of reducing the simulated infiltration times by as much as 6%, whereas gravity was found to be negligible in all cases. Finally, the VARTM model was used as a process analysis tool. This enabled the user to determine such important process constraints as the location and type of injection ports and the permeability and location of the high-permeability media. A process for a three-stiffener composite panel was proposed. This configuration evolved from the variation of the process
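
    For orientation, the one-dimensional fill-time scale in such infusion simulations follows from Darcy's law under a constant driving pressure. Below is a minimal sketch with illustrative material values (permeability, porosity, viscosity, and pressure are assumptions, not the study's parameters):

        def infiltration_time(length, permeability, porosity, viscosity, delta_p):
            """Time for a resin flow front to advance a distance `length` in 1D
            under constant driving pressure (Darcy's law, slug-flow assumption):
            t = phi * mu * L**2 / (2 * K * dP)."""
            return porosity * viscosity * length**2 / (2.0 * permeability * delta_p)

        # Illustrative VARTM-like values: 0.3 m flow length, K = 1e-10 m^2,
        # porosity 0.5, resin viscosity 0.3 Pa*s, ~1 atm vacuum drive.
        t = infiltration_time(0.3, 1e-10, 0.5, 0.3, 1.0e5)
        print(f"predicted fill time: {t/60:.1f} min")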

  18. Ultra-wideband Location Authentication for Item Tracking

    SciTech Connect

    Rowe, Nathan C; Kuhn, Michael J; Stinson, Brad J; Holland, Stephen A

    2012-01-01

    International safeguards is increasingly utilizing unattended and remote monitoring methods to improve inspector efficiency and the timeliness of diversion detection. Item identification and tracking has been proposed as one unattended remote monitoring method, and a number of radio-frequency (RF) technologies have been proposed. When utilizing location information for verification purposes, strong assurance of the authenticity of the reported location is required, but most commercial RF systems are vulnerable to a variety of spoofing and relay attacks. ORNL has developed a distance bounding method that uses ultra-wideband technology to provide strong assurance of item location. This distance bounding approach can be coupled with strong symmetric key authentication methods to provide a fully authenticable tracking system that is resistant to both spoofing and relay attacks. This paper will discuss the overall problems associated with RF tracking including the common spoofing and relay attack scenarios, the ORNL distance bounding approach for authenticating location, and the potential applications for this technology.
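
    The core of a distance-bounding check is that the measured round-trip time of a cryptographically fresh challenge yields an upper bound on distance, while a keyed response defeats spoofing. The sketch below illustrates the principle only: the local function call and software timer stand in for the UWB pulse exchange, and the message format and HMAC response are assumptions rather than ORNL's actual protocol.

        import hmac, hashlib, os, time

        C = 299_792_458.0  # speed of light, m/s

        def respond(key: bytes, challenge: bytes) -> bytes:
            """Tag-side: authenticate the fresh challenge with a shared key."""
            return hmac.new(key, challenge, hashlib.sha256).digest()

        def verify_location(key: bytes, max_distance_m: float) -> bool:
            """Reader-side: bound distance by round-trip time, then check the MAC."""
            challenge = os.urandom(16)            # fresh nonce defeats replay
            t0 = time.perf_counter_ns()
            response = respond(key, challenge)    # stands in for the UWB exchange
            t1 = time.perf_counter_ns()
            processing_ns = 0                     # calibrated tag turnaround time
            distance = C * (t1 - t0 - processing_ns) * 1e-9 / 2.0
            expected = hmac.new(key, challenge, hashlib.sha256).digest()
            return hmac.compare_digest(response, expected) and distance <= max_distance_m

        key = os.urandom(32)
        print(verify_location(key, max_distance_m=1e6))  # deliberately loose bound for this software-only demo

    In a real deployment the turnaround time is fixed in hardware, so any relay attack necessarily lengthens the round trip and inflates the distance bound.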

  19. Verification and Validation of Kinetic Codes

    NASA Astrophysics Data System (ADS)

    Christlieb, Andrew

    2014-10-01

    We review the last three workshops held on Validation and Verification of Kinetic Codes. The goal of the workshops was to highlight the need to develop benchmark test problems beyond traditional ones such as Landau damping and the two-stream instability. Those traditional problems provide only a limited understanding of how a code might perform and mask key issues in more complicated situations. Developing new test problems highlights the strengths and weaknesses of both mesh- and particle-based codes. One outcome is that designing test problems that clearly deliver a path forward for developing improved methods is complicated by the need to create a completely self-consistent model. For example, two cases proposed by the authors as simple tests turned out to be ill defined. The first is the modeling of sheath formation in a 1D 1V collisionless plasma. We found that losses to the wall lead to discontinuous distribution functions, a challenge for high-order mesh-based solvers. The semi-infinite case was problematic because the far-field boundary condition is difficult to impose on a finite computational domain. Our second case was the flow of a collisionless electron beam in a pipe. Here numerical diffusion is the key property being tested; however, the two-stream instability at the beam edges introduces other issues in terms of finding convergent solutions, and before particle trapping takes place, mesh-based methods find themselves outside the asymptotic regime. Another conclusion we draw from this exercise is that including collisional models in benchmark test problems is an important step toward providing robust test problems for mesh-based kinetic solvers. In collaboration with Yaman Guclu, David Seal, and John Verboncoeur, Michigan State University.
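
    For reference, the two-stream benchmark mentioned above is governed, for two symmetric cold counter-streaming beams, by the standard textbook dispersion relation (stated here for orientation, not taken from the workshops):

        \[
        1 \;=\; \frac{\omega_p^2}{2}\left[\frac{1}{(\omega - k v_0)^2}
               + \frac{1}{(\omega + k v_0)^2}\right],
        \]

    where $v_0$ is the beam drift speed and $\omega_p$ the total plasma frequency. Solving the resulting quadratic in $\omega^2$ shows that modes with $k v_0 < \omega_p$ have a negative root $\omega^2 < 0$, i.e. a purely growing branch, which is what a convergent kinetic solver must reproduce.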

  20. Alternate calibration method of radiochromic EBT3 film for quality assurance verification of clinical radiotherapy treatments

    NASA Astrophysics Data System (ADS)

    Park, Soah; Kang, Sei-Kwon; Cheong, Kwang-Ho; Hwang, Taejin; Yoon, Jai-Woong; Koo, Taeryool; Han, Tae Jin; Kim, Haeyoung; Lee, Me Yeon; Bae, Hoonsik; Kim, Kyoung Ju

    2016-07-01

    EBT3 film is utilized as a dosimetric quality assurance tool for the verification of clinical radiotherapy treatments. In this work, we suggest a percentage-depth-dose (PDD) calibration method that can calibrate several EBT3 film pieces together at different dose levels, because photon beams deliver different dose levels at different depths along the beam axis. We investigated the feasibility of the film PDD calibration method based on PDD data and compared the results with those from the traditional film calibration method. Photon beams at 6 MV were delivered to EBT3 film pieces for both calibration methods. For the PDD-based calibration, the film pieces were placed on solid phantoms at the depth of maximum dose (dmax) and at depths of 3, 5, 8, 12, 17, and 22 cm, and a photon beam was delivered twice, at 100 cGy and 400 cGy, to extend the calibration dose range under the same conditions. Fourteen film pieces, to maintain consistency, were irradiated at doses ranging from approximately 30 to 400 cGy for both film calibrations. The film pieces were located at the center of the scan bed of an Epson 1680 flatbed scanner in the parallel direction. Intensity-modulated radiation therapy (IMRT) plans were created, and their dose distributions were delivered to the film. The dose distributions obtained with the traditional method and with the PDD-based calibration method were evaluated using a gamma analysis. The PDD values measured at the depths of interest using a CC13 ion chamber and an FC65-G Farmer chamber were very similar. With the test criterion of 1% dose agreement at 1 mm, the passing rates for the four cases of the three IMRT plans were essentially identical. The traditional and the PDD-based calibrations provided similar plan verification results. We also describe another alternative for calibrating EBT3 films, i.e., a PDD-based calibration method that provides an easy and time-saving approach
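
    The essence of the PDD-based method is that one irradiation yields several known dose levels at once, one per depth. The sketch below illustrates that bookkeeping; the PDD values, the film response model, and the cubic fit are all assumptions for illustration, not the authors' measured data or processing chain.

        import numpy as np

        # Depths (cm) of the film pieces and an illustrative 6 MV PDD curve
        # (fraction of the dmax dose); 100 and 400 cGy delivered at dmax.
        depths = np.array([1.5, 3, 5, 8, 12, 17, 22])
        pdd = np.array([100.0, 93.0, 83.0, 71.0, 57.0, 44.0, 34.0]) / 100.0
        doses = np.concatenate([100.0 * pdd, 400.0 * pdd])  # cGy, 14 film pieces

        # Hypothetical measured net optical densities for the 14 pieces.
        rng = np.random.default_rng(1)
        net_od = 0.004 * doses / (1 + doses / 900.0) + rng.normal(0, 1e-4, doses.size)

        # Fit dose as a function of net OD with a low-order polynomial, a common
        # choice for radiochromic film over a limited dose range.
        calibration = np.poly1d(np.polyfit(net_od, doses, deg=3))
        print("dose at net OD = 0.5:", round(float(calibration(0.5)), 1), "cGy")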

  1. 40 CFR 1065.550 - Gas analyzer range verification and drift verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Gas analyzer range verification and drift verification. 1065.550 Section 1065.550 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Performing an Emission Test Over Specified Duty Cycles § 1065.550 Gas analyzer...

  2. Dipole Well Location

    Energy Science and Technology Software Center (ESTSC)

    1998-08-03

    The problem here is to model the three-dimensional response of an electromagnetic logging tool in a practical situation often encountered in oil and gas exploration. The DWELL code provides the electromagnetic fields on the axis of a borehole due to either an electric or a magnetic dipole located on the same axis. The borehole is cylindrical and is located within a stratified formation in which the bedding planes are not horizontal. The angle between the normal to the bedding planes and the axis of the borehole may assume any value; in other words, the borehole axis may be tilted with respect to the bedding planes. Additionally, all of the formation layers may have invasive zones of drilling mud. The operating frequency of the source dipole(s) extends from a few Hertz to hundreds of Megahertz.

  3. Electric current locator

    DOEpatents

    King, Paul E.; Woodside, Charles Rigel

    2012-02-07

    The disclosure herein provides an apparatus for locating a quantity of current vectors in an electrical device, where each current vector has a known direction and a known magnitude relative to an input current supplied to the device. Mathematical constants used in Biot-Savart superposition equations are determined for the electrical device, the orientation of the apparatus, and the relative magnitude of the current vector and the input current, and the apparatus utilizes magnetic field sensors oriented to a sensing plane to provide the current vector location from the solution of the Biot-Savart superposition equations. The required orientations between the apparatus and the electrical device are described, and various methods of determining the mathematical constants are presented.
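
    The forward calculation underlying such a locator is Biot-Savart superposition: each current segment contributes a field at the sensor that scales linearly with the input current and depends on geometry only through fixed constants. A minimal numerical sketch for one straight finite segment follows (generic textbook formula, not the patented apparatus or its constants):

        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

        def segment_field(p, a, b, current):
            """Magnetic field at point p due to current flowing from a to b
            (Biot-Savart law integrated analytically along a straight segment)."""
            p, a, b = map(np.asarray, (p, a, b))
            ab = b - a
            ap, bp = p - a, p - b
            cross = np.cross(ab, ap)
            d2 = np.dot(cross, cross) / np.dot(ab, ab)  # squared distance to the line
            if d2 == 0:
                return np.zeros(3)                       # point lies on the wire axis
            cos1 = np.dot(ab, ap) / (np.linalg.norm(ab) * np.linalg.norm(ap))
            cos2 = np.dot(ab, bp) / (np.linalg.norm(ab) * np.linalg.norm(bp))
            magnitude = MU0 * current * (cos1 - cos2) / (4 * np.pi * np.sqrt(d2))
            return magnitude * cross / np.linalg.norm(cross)

        # Field 10 cm from the middle of a 2 m segment carrying 100 A (~2e-4 T).
        print(segment_field([0.0, 0.1, 0.0], [-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], 100.0))

    Summing such terms over all current vectors, with unknown location parameters, gives the superposition equations that the sensor measurements are solved against.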

  4. Dipole Well Location

    SciTech Connect

    Newman, Gregory

    1998-08-03

    The problem here is to model the three-dimensional response of an electromagnetic logging tool in a practical situation often encountered in oil and gas exploration. The DWELL code provides the electromagnetic fields on the axis of a borehole due to either an electric or a magnetic dipole located on the same axis. The borehole is cylindrical and is located within a stratified formation in which the bedding planes are not horizontal. The angle between the normal to the bedding planes and the axis of the borehole may assume any value; in other words, the borehole axis may be tilted with respect to the bedding planes. Additionally, all of the formation layers may have invasive zones of drilling mud. The operating frequency of the source dipole(s) extends from a few Hertz to hundreds of Megahertz.

  5. Underwater hydrophone location survey

    NASA Technical Reports Server (NTRS)

    Cecil, Jack B.

    1993-01-01

    The Atlantic Undersea Test and Evaluation Center (AUTEC) is a U.S. Navy test range located on Andros Island, Bahamas, and a Division of the Naval Undersea Warfare Center (NUWC), Newport, RI. The Headquarters of AUTEC is located at a facility in West Palm Beach, FL. AUTEC's primary mission is to provide the U.S. Navy with a deep-water test and evaluation facility for making underwater acoustic measurements, testing and calibrating sonars, and providing accurate underwater, surface, and in-air tracking data on surface ships, submarines, aircraft, and weapon systems. Many of these programs are in support of Antisubmarine Warfare (ASW), undersea research and development programs, and Fleet assessment and operational readiness trials. Most tests conducted at AUTEC require precise underwater tracking (plus or minus 3 yards) of multiple acoustic signals emitted with the correct waveshape and repetition criteria from either a surface craft or underwater vehicle.

  6. Marine cable location system

    SciTech Connect

    Ottsen, H.; Barker, Th.

    1985-04-23

    An acoustic positioning system for locating a marine cable at an exploration site employs a plurality of acoustic transponders, each having a characteristic frequency, at spaced-apart positions along the cable. A marine vessel measures the depth to the transponders as the vessel passes over the cable and measures the slant range from the vessel to each of the acoustic transponders as the vessel travels in a parallel and horizontally offset path to the cable.
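
    Given the measured depth and slant range to a transponder, its horizontal offset from the vessel track follows from right-triangle geometry. A minimal sketch (generic geometry with straight-line acoustics assumed, not the patented system's processing):

        import math

        def horizontal_offset(slant_range, depth):
            """Horizontal distance from vessel to a transponder of known depth,
            from the measured slant range."""
            if slant_range < depth:
                raise ValueError("slant range cannot be shorter than depth")
            return math.sqrt(slant_range**2 - depth**2)

        print(horizontal_offset(slant_range=1500.0, depth=1200.0))  # 900.0 m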

  7. Magnetic Location Indicator

    NASA Technical Reports Server (NTRS)

    Stegman, Thomas W.

    1992-01-01

    Ferrofluidic device indicates point of highest magnetic-flux density in workspace. Consists of bubble of ferrofluid in immiscible liquid carrier in clear plastic case. Used in flat block or tube. Axes of centering circle on flat-block version used to mark location of maximum flux density when bubble in circle. Device used to find point on wall corresponding to known point on opposite side of wall.

  8. Ammonia Leak Locator Study

    NASA Technical Reports Server (NTRS)

    Dodge, Franklin T.; Wuest, Martin P.; Deffenbaugh, Danny M.

    1995-01-01

    The thermal control system of International Space Station Alpha will use liquid ammonia as the heat exchange fluid. It is expected that small leaks (of the order perhaps of one pound of ammonia per day) may develop in the lines transporting the ammonia to the various facilities as well as in the heat exchange equipment. Such leaks must be detected and located before the supply of ammonia becomes critically low. For that reason, NASA-JSC has a program underway to evaluate instruments that can detect and locate ultra-small concentrations of ammonia in a high vacuum environment. To be useful, the instrument must be portable and small enough that an astronaut can easily handle it during extravehicular activity. An additional complication in the design of the instrument is that the environment immediately surrounding ISSA will contain small concentrations of many other gases from venting of onboard experiments as well as from other kinds of leaks. These other vapors include water, cabin air, CO2, CO, argon, N2, and ethylene glycol. Altogether, this local environment might have a pressure of the order of 10(exp -7) to 10(exp -6) torr. Southwest Research Institute (SwRI) was contracted by NASA-JSC to provide support to NASA-JSC and its prime contractors in evaluating ammonia-location instruments and to make a preliminary trade study of the advantages and limitations of potential instruments. The present effort builds upon an earlier SwRI study to evaluate ammonia leak detection instruments [Jolly and Deffenbaugh]. The objectives of the present effort include: (1) Estimate the characteristics of representative ammonia leaks; (2) Evaluate the baseline instrument in the light of the estimated ammonia leak characteristics; (3) Propose alternative instrument concepts; and (4) Conduct a trade study of the proposed alternative concepts and recommend promising instruments. The baseline leak-location instrument selected by NASA-JSC was an ion gauge.

  9. Numerical Weather Predictions Evaluation Using Spatial Verification Methods

    NASA Astrophysics Data System (ADS)

    Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    During recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information for evaluating such high-resolution simulations; to address these limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010, using such feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02), and the central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of these simulations are evaluated against data obtained from a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE), in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
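
    One widely used neighborhood-based spatial verification score for such model-versus-radar comparisons is the Fractions Skill Score (FSS), which compares event fractions in windows rather than demanding point-by-point matches. A minimal sketch of the generic method follows (not the study's exact implementation; fields and thresholds are illustrative):

        import numpy as np

        def fractions(field, threshold, n):
            """Fraction of grid points exceeding threshold in each n x n window."""
            binary = (field >= threshold).astype(float)
            # Moving-window sums via a cumulative-sum (integral image) trick.
            c = np.cumsum(np.cumsum(np.pad(binary, ((1, 0), (1, 0))), axis=0), axis=1)
            return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

        def fss(forecast, observed, threshold, n):
            """Fractions Skill Score: 1 is perfect, 0 is no skill."""
            f = fractions(forecast, threshold, n)
            o = fractions(observed, threshold, n)
            mse = np.mean((f - o) ** 2)
            mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
            return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

        rng = np.random.default_rng(2)
        fcst = rng.gamma(2, 4, (100, 100))   # stand-ins for model and radar fields
        obs = rng.gamma(2, 4, (100, 100))
        print("FSS, 10 mm, 3x3 window:", fss(fcst, obs, 10.0, 3))

    Plotting FSS against window size makes the "variable time lag" finding concrete: a displaced but well-shaped convective feature scores poorly at grid scale and recovers skill as the neighborhood grows.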

  10. RF model of the distribution system as a communication channel, phase 2. Volume 2: Task reports

    NASA Technical Reports Server (NTRS)

    Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.

    1982-01-01

    Based on the established feasibility of predicting, via a model, the propagation of Power Line Frequency on radial type distribution feeders, verification studies comparing model predictions against measurements were undertaken using more complicated feeder circuits and situations. Detailed accounts of the major tasks are presented. These include: (1) verification of model; (2) extension, implementation, and verification of perturbation theory; (3) parameter sensitivity; (4) transformer modeling; and (5) compensation of power distribution systems for enhancement of power line carrier communication reliability.

  11. Location Bias of Identifiers in Clinical Narratives

    PubMed Central

    Hanauer, David A; Mei, Qiaozhu; Malin, Bradley; Zheng, Kai

    2013-01-01

    Scrubbing identifying information from narrative clinical documents is a critical first step in preparing the data for secondary use, such as translational research. Evidence suggests that the differential distribution of protected health information (PHI) in clinical documents could be used as an additional feature to improve the performance of automated de-identification algorithms or toolkits. However, there has been little investigation into the extent to which such phenomena transpire in practice. To empirically assess this issue, we identified the location of PHI in 140,000 clinical notes from an electronic health record system and characterized the distribution as a function of location within a document. In addition, we calculated the 'word proximity' of nearby PHI elements to determine their co-occurrence rates. The PHI elements were found to have non-random distribution patterns. Location within a document and proximity between PHI elements might therefore be used to help de-identification systems better label PHI. PMID:24551358
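
    The positional analysis described here can be reproduced in outline by normalizing each PHI element's character offset by document length and histogramming. A minimal sketch on toy data (the note structure and offsets are hypothetical, not the study's corpus):

        import collections

        # Toy corpus: (document length, [PHI element character offsets]) per note.
        notes = [
            (2000, [15, 40, 1950]),   # names in the header, signature at the end
            (3500, [22, 3410, 3460]),
            (1800, [10, 35, 1700]),
        ]

        bins = 10  # deciles of relative document position
        counts = collections.Counter()
        for length, offsets in notes:
            for off in offsets:
                counts[min(int(bins * off / length), bins - 1)] += 1

        for b in range(bins):
            print(f"decile {b}: {'#' * counts[b]}")

    A non-uniform histogram of this kind is exactly the location bias the paper proposes feeding back into de-identification systems as a prior.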

  12. Monitoring and verification R&D

    SciTech Connect

    Pilat, Joseph F; Budlong - Sylvester, Kory W; Fearey, Bryan L

    2011-01-01

    The 2010 Nuclear Posture Review (NPR) report outlined the Administration's approach to promoting the agenda put forward by President Obama in Prague on April 5, 2009. The NPR calls for a national monitoring and verification R&D program to meet future challenges arising from the Administration's nonproliferation, arms control and disarmament agenda. Verification of a follow-on to New START could have to address warheads and possibly components along with delivery capabilities. Deeper cuts and disarmament would need to address all of these elements along with nuclear weapon testing, nuclear material and weapon production facilities, virtual capabilities from old weapon and existing energy programs and undeclared capabilities. We only know how to address some elements of these challenges today, and the requirements may be more rigorous in the context of deeper cuts as well as disarmament. Moreover, there is a critical need for multiple options to sensitive problems and to address other challenges. There will be other verification challenges in a world of deeper cuts and disarmament, some of which we are already facing. At some point, if the reductions process is progressing, uncertainties about past nuclear materials and weapons production will have to be addressed. IAEA safeguards will need to continue to evolve to meet current and future challenges, and to take advantage of new technologies and approaches. Transparency/verification of nuclear and dual-use exports will also have to be addressed, and there will be a need to make nonproliferation measures more watertight and transparent. In this context, and recognizing we will face all of these challenges even if disarmament is not achieved, this paper will explore possible agreements and arrangements; verification challenges; gaps in monitoring and verification technologies and approaches; and the R&D required to address these gaps and other monitoring and verification challenges.

  13. Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools

    NASA Technical Reports Server (NTRS)

    Bis, Rachael; Maul, William A.

    2015-01-01

    Functional fault models (FFMs) are a directed graph representation of the failure-effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error prone, owing to the intensive and customized work needed to verify each individual component model, and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
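
    Because an FFM is a directed graph of failure-effect propagation, one basic automated check is that each modeled failure mode actually reaches the effects and monitors it is supposed to. A minimal sketch of such a reachability check follows (plain breadth-first search with hypothetical node names, not NASA's actual tooling):

        from collections import deque

        # Hypothetical FFM: failure modes propagate along directed edges to effects.
        ffm = {
            "valve_stuck_closed": ["no_fuel_flow"],
            "no_fuel_flow": ["low_chamber_pressure"],
            "low_chamber_pressure": ["thrust_loss", "pc_sensor_low"],
            "thrust_loss": [],
            "pc_sensor_low": [],
        }

        def reachable(graph, start):
            """All nodes reachable from `start` by failure-effect propagation."""
            seen, queue = {start}, deque([start])
            while queue:
                for nxt in graph.get(queue.popleft(), []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return seen

        # Verification assertion: the failure mode must reach its expected monitor.
        assert "pc_sensor_low" in reachable(ffm, "valve_stuck_closed")
        print("propagation path check passed")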

  14. Ozone Monitoring Instrument geolocation verification

    NASA Astrophysics Data System (ADS)

    Kroon, M.; Dobber, M. R.; Dirksen, R.; Veefkind, J. P.; van den Oord, G. H. J.; Levelt, P. F.

    2008-08-01

    Verification of the geolocation assigned to individual ground pixels as measured by the Ozone Monitoring Instrument (OMI) aboard the NASA EOS-Aura satellite was performed by comparing geophysical Earth surface details as observed in OMI false color images with the high-resolution continental outline vector map as provided by the Interactive Data Language (IDL) software tool from ITT Visual Information Solutions. The OMI false color images are generated from the OMI visible channel by integration over 20-nm-wide spectral bands of the Earth radiance intensity around 484 nm, 420 nm, and 360 nm wavelength per ground pixel. Proportional to the integrated intensity, we assign color values composed of CRT standard red, green, and blue to the OMI ground pixels. Earth surface details studied are mostly high-contrast coast lines where arid land or desert meets deep blue ocean. The IDL high-resolution vector map is based on the 1993 CIA World Database II Map with a 1-km accuracy. Our results indicate that the average OMI geolocation offset over the years 2005-2006 is 0.79 km in latitude and 0.29 km in longitude, with a standard deviation of 1.64 km in latitude and 2.04 km in longitude, respectively. Relative to the OMI nadir pixel size, one obtains mean displacements of ~6.1% in latitude and ~1.2% in longitude, with standard deviations of 12.6% and 7.9%, respectively. We conclude that the geolocation assigned to individual OMI ground pixels is sufficiently accurate to support scientific studies of atmospheric features as observed in OMI level 2 satellite data products, such as air quality issues on urban scales or volcanic eruptions and their plumes, that occur on spatial scales comparable to or smaller than OMI nadir pixels.

  15. Monitoring/Verification using DMS: TATP Example

    SciTech Connect

    Stephan Weeks; Kevin Kyle

    2008-03-01

    Field-rugged and field-programmable differential mobility spectrometry (DMS) networks provide highly selective, universal monitoring of vapors and aerosols at detectable levels from persons or areas involved with illicit chemical/biological/explosives (CBE) production. CBE sensor motes used in conjunction with automated fast gas chromatography with DMS detection (GC/DMS) verification instrumentation integrated into situational operations management systems can be readily deployed and optimized for changing application scenarios. The feasibility of developing selective DMS motes for a 'smart dust' sampling approach with guided, highly selective, fast GC/DMS verification analysis is a compelling approach to minimize or prevent the use of explosives or chemical and biological weapons in terrorist activities. Two peroxide-based liquid explosives, triacetone triperoxide (TATP) and hexamethylene triperoxide diamine (HMTD), are synthesized from common chemicals such as hydrogen peroxide, acetone, sulfuric acid, ammonia, and citric acid (Figure 1). Recipes can be readily found on the Internet by anyone seeking to generate sufficient quantities of these highly explosive chemicals to cause considerable collateral damage. Detection of TATP and HMTD by advanced sensing systems can provide the early warning necessary to prevent terror plots from coming to fruition. DMS is currently one of the foremost emerging technologies for the separation and detection of gas-phase chemical species. This is due to trace-level detection limits, high selectivity, and small size. DMS separates and identifies ions at ambient pressures by utilizing the non-linear dependence of an ion's mobility on the radio frequency (rf) electric field strength. GC is widely considered to be one of the leading analytical methods for the separation of chemical species in complex mixtures. Advances in the technique have led to the development of low-thermal-mass fast GC columns. These columns are capable of

  16. Clinical application of in vivo treatment delivery verification based on PET/CT imaging of positron activity induced at high energy photon therapy

    NASA Astrophysics Data System (ADS)

    Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E.; Maguire, Gerald Q., Jr.; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders

    2013-08-01

    The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams using PET/CT to image the induced positron activity. The measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. A total dose of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue mapped on to the associated dose distribution of the treatment plan. The results showed that spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions but less so in blood and oxygen rich soft tissues. For the preoperative rectal cancer patient however, a 2 ± (0.5) cm misalignment was observed in the cranial-caudal direction of the patient between the induced activity distribution and treatment plan, indicating a beam patient setup error. No misalignment of this kind was seen in the prostate cancer patient. However, due to a fast patient setup error in the PET/CT scanner a slight mis-position of the patient in the PET/CT was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the induced positron emitters by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat to allow portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about

  17. Clinical application of in vivo treatment delivery verification based on PET/CT imaging of positron activity induced at high energy photon therapy.

    PubMed

    Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E; Maguire, Gerald Q; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders

    2013-08-21

    The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams using PET/CT to image the induced positron activity. The measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. A total dose of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue mapped on to the associated dose distribution of the treatment plan. The results showed that spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions but less so in blood and oxygen rich soft tissues. For the preoperative rectal cancer patient however, a 2 ± (0.5) cm misalignment was observed in the cranial-caudal direction of the patient between the induced activity distribution and treatment plan, indicating a beam patient setup error. No misalignment of this kind was seen in the prostate cancer patient. However, due to a fast patient setup error in the PET/CT scanner a slight mis-position of the patient in the PET/CT was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the induced positron emitters by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat to allow portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about

  18. Sonar Locator Systems

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An underwater locator device called a Pinger is attached to an airplane's flight recorder for recovery in case of a crash. Burnett Electronics Pinger Model 512 resulted from a Burnett Electronics Laboratory, Inc./Langley Research Center contract for development of a search system for underwater mines. The Pinger's battery-powered transmitter is activated when immersed in water, and sends multidirectional signals for up to 500 hours. When a surface receiver picks up the signal, a diver can retrieve the pinger and the attached airplane flight recorder. Other pingers are used to track whales, mark underwater discoveries and assist oil drilling vessels.

  19. Location of Planet X

    SciTech Connect

    Harrington, R.S.

    1988-10-01

    Observed positions of Uranus and Neptune along with residuals in right ascension and declination are used to constrain the location of a postulated tenth planet. The residuals are converted into residuals in ecliptic longitude and latitude. The results are then combined into seasonal normal points, producing average geocentric residuals spaced slightly more than a year apart that are assumed to represent the equivalent heliocentric average residuals for the observed oppositions. Such a planet is found to most likely reside in the region of Scorpius, with considerably less likelihood that it is in Taurus. 8 references.
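
    The conversion of residuals from equatorial coordinates (right ascension α, declination δ) to ecliptic longitude and latitude (λ, β) uses the standard rotation through the obliquity ε ≈ 23.44° (textbook relations, stated here for orientation rather than taken from the paper):

        \[
        \sin\beta \;=\; \sin\delta\cos\varepsilon \;-\; \cos\delta\sin\varepsilon\sin\alpha ,
        \qquad
        \tan\lambda \;=\; \frac{\sin\alpha\cos\varepsilon \;+\; \tan\delta\sin\varepsilon}{\cos\alpha} .
        \]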

  20. Gated Treatment Delivery Verification With On-Line Megavoltage Fluoroscopy

    SciTech Connect

    Tai An; Christensen, James D.; Gore, Elizabeth; Khamene, Ali; Boettger, Thomas; Li, X. Allen

    2010-04-15

    Purpose: To develop and clinically demonstrate the use of on-line real-time megavoltage (MV) fluoroscopy for gated treatment delivery verification. Methods and Materials: Megavoltage fluoroscopy (MVF) image sequences were acquired using a flat panel equipped for MV cone-beam CT in synchrony with the respiratory signal obtained from the Anzai gating device. The MVF images can be obtained immediately before or during gated treatment delivery. A prototype software tool (named RTReg4D) was developed to register MVF images with phase-sequenced digitally reconstructed radiograph images generated from the treatment planning system based on four-dimensional CT. The image registration can be used to reposition the patient before or during treatment delivery. To demonstrate the reliability and clinical usefulness, the system was first tested using a thoracic phantom and then prospectively in actual patient treatments under an institutional review board-approved protocol. Results: The quality of the MVF images for lung tumors is adequate for image registration with phase-sequenced digitally reconstructed radiographs. The MVF was found to be useful for monitoring inter- and intrafractional variations of tumor positions. With the planning target volume contour displayed on the MVF images, the system can verify whether the moving target stays within the planning target volume margin during gated delivery. Conclusions: The use of MVF images was found to be clinically effective in detecting discrepancies in tumor location before and during respiration-gated treatment delivery. The tools and process developed can be useful for gated treatment delivery verification.

  1. Ionoacoustics: A new direct method for range verification

    NASA Astrophysics Data System (ADS)

    Parodi, Katia; Assmann, Walter

    2015-05-01

    The superior ballistic properties of ion beams may offer improved tumor-dose conformality and unprecedented sparing of organs at risk in comparison to other radiation modalities in external radiotherapy. However, these advantages come at the expense of increased sensitivity to uncertainties in the actual treatment delivery, resulting from inaccuracies of patient positioning, physiological motion and uncertainties in the knowledge of the ion range in living tissue. In particular, the dosimetric selectivity of ion beams depends on the longitudinal location of the Bragg peak, making in vivo knowledge of the actual beam range the greatest challenge to full clinical exploitation of ion therapy. Nowadays, in vivo range verification techniques, which are already being investigated in clinical practice or are close to it, rely on the detection of secondary annihilation photons or prompt gammas resulting from nuclear interactions of the primary ion beam with the irradiated tissue. Despite initially promising results, these methods exploit a correlation between nuclear and electromagnetic processes that is not straightforward, and they typically require massive and costly instrumentation. By contrast, the long-known yet only recently revisited process of "ionoacoustics", acoustic emission generated by local tissue heating especially at the Bragg peak, may offer a more direct approach to in vivo range verification, as reviewed here.
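
    The underlying thermoacoustic relation is simple: when the dose is deposited quickly compared with the acoustic transit time across the heated region (stress confinement), the initial pressure amplitude scales with the local dose. This is the standard photoacoustic relation, stated here for orientation:

        \[
        p_0 \;=\; \Gamma \,\rho\, D ,
        \]

    where $\Gamma$ is the dimensionless Grüneisen parameter of the tissue, $\rho$ its density, and $D$ the locally deposited dose (J kg$^{-1}$). The dose maximum at the Bragg peak therefore maps directly onto the strongest acoustic source, which is what makes the emitted pulse usable for range verification.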

  2. Theoretical study of closed-loop recycling liquid-liquid chromatography and experimental verification of the theory.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A

    2016-09-01

    The non-ideal recycling equilibrium-cell model including the effects of extra-column dispersion is used to simulate and analyze closed-loop recycling counter-current chromatography (CLR CCC). Previously, the operating scheme with the detector located before the column was considered. In this study, analysis of the process is carried out for a more realistic and practical scheme with the detector located immediately after the column. Peak equation for individual cycles and equations describing the transport of single peaks and complex chromatograms inside the recycling closed-loop, as well as equations for the resolution between single solute peaks of the neighboring cycles, for the resolution of peaks in the recycling chromatogram and for the resolution between the chromatograms of the neighboring cycles are presented. It is shown that, unlike conventional chromatography, increasing of the extra-column volume (the recycling line length) may allow a better separation of the components in CLR chromatography. For the experimental verification of the theory, aspirin, caffeine, coumarin and the solvent system hexane/ethyl acetate/ethanol/water (1:1:1:1) were used. Comparison of experimental and simulated processes of recycling and distribution of the solutes in the closed-loop demonstrated a good agreement between theory and experiment. PMID:27492599

  3. Dust storm events over Delhi: verification of dust AOD forecasts with satellite and surface observations

    NASA Astrophysics Data System (ADS)

    Singh, Aditi; Iyengar, Gopal R.; George, John P.

    2016-05-01

    The Thar Desert, located in the northwestern part of India, is considered one of the major dust sources. Dust storms originating in the Thar Desert during the pre-monsoon season affect large parts of the Indo-Gangetic (IG) Plains. High dust loading deteriorates ambient air quality and degrades visibility. The present study focuses on the identification of dust events and the verification of dust event forecasts over Delhi and the western IG Plains during the pre-monsoon season of 2015. Three dust events were identified over Delhi during the study period. For all the selected days, Terra-MODIS AOD at 550 nm is found close to 1.0, while AURA-OMI AI shows high values. Dust AOD forecasts from the NCMRWF Unified Model (NCUM) for the three selected dust events are verified against satellite (MODIS) and ground-based observations (AERONET). Comparison of observed AODs at 550 nm from MODIS with NCUM-predicted AODs reveals that NCUM is able to predict the spatial and temporal distribution of dust AOD in these cases. Good correlation (~0.67) is obtained between the NCUM-predicted dust AODs and location-specific observations available from AERONET. The model underpredicted the AODs compared with the AERONET observations, mainly because it accounts for dust only and considers no anthropogenic activities. The results of the present study emphasize the need for a more realistic representation of local dust emission in the model, of both natural and anthropogenic origin, to improve the forecast of dust from NCUM during dust events.

  4. Dust forecast over North Africa: verification with satellite and ground based observations

    NASA Astrophysics Data System (ADS)

    Singh, Aditi; Kumar, Sumit; George, John P.

    2016-05-01

    Arid regions of North Africa are considered among the major dust sources. The present study focuses on forecasts of dust aerosol optical depth (AOD) over different regions of North Africa. The NCMRWF Unified Model (NCUM) produces dust AOD forecasts at different wavelengths with lead times up to 240 h, based on 00 UTC initial conditions. Model forecasts of dust AOD at 550 nm, up to 72 h ahead and based on different initial conditions, are verified against satellite and ground-based observations of total AOD during May-June 2014, under the assumption that aerosol types other than dust are negligible. Location-specific and geographical distributions of the dust AOD forecasts are verified against Aerosol Robotic Network (AERONET) station observations of total and coarse-mode AOD. Moderate Resolution Imaging Spectroradiometer (MODIS) dark-target and deep-blue merged level 3 total AOD at 550 nm and Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) retrieved dust AOD at 532 nm are also used for verification. CALIOP dust AOD was obtained by vertical integration of the aerosol extinction coefficient at 532 nm from the aerosol profile level 2 products. It is found that at all the selected AERONET stations the trend in dust AOD is well predicted by NCUM up to three days in advance. Good correlation, with consistently low bias (~ +/-0.06) and RMSE (~0.2) values, is found between model forecasts and point measurements of AERONET, except over one location, Cinzana (Mali). The model forecasts consistently overestimated dust AOD compared with CALIOP dust AOD, with a bias of 0.25 and an RMSE of 0.40.
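
    Point verification of this kind reduces to paired forecast-observation statistics. A minimal sketch computing the bias, RMSE, and correlation quoted above (the arrays are illustrative, not the study's data):

        import numpy as np

        def verify(forecast, observed):
            """Bias, RMSE and Pearson correlation of matched AOD pairs."""
            forecast, observed = np.asarray(forecast), np.asarray(observed)
            error = forecast - observed
            bias = error.mean()
            rmse = np.sqrt((error ** 2).mean())
            corr = np.corrcoef(forecast, observed)[0, 1]
            return bias, rmse, corr

        # Hypothetical matched daily AODs at one AERONET station.
        obs = np.array([0.42, 0.55, 0.61, 0.38, 0.70, 0.95, 0.52])
        fcst = np.array([0.50, 0.60, 0.58, 0.45, 0.66, 1.10, 0.49])
        bias, rmse, corr = verify(fcst, obs)
        print(f"bias={bias:+.3f}  rmse={rmse:.3f}  r={corr:.2f}")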

  5. PBL Verification with Radiosonde and Aircraft Data

    NASA Astrophysics Data System (ADS)

    Tsidulko, M.; McQueen, J.; Dimego, G.; Ek, M.

    2008-12-01

    Boundary layer depth is an important characteristic in weather forecasting and a key parameter in air quality modeling, determining the extent of turbulence and the dispersion of pollutants. Real-time PBL depths from the NAM (WRF/NMM) model are verified against different types of observations. PBL depth verification is incorporated into the NCEP verification system, including the ability to provide a range of statistical characteristics for boundary layer heights. For the model, several types of boundary layer definitions are used: PBL height from the TKE scheme and from the critical-Ri-number approach, as well as mixed-layer depth, are compared with observations. Observed PBL depths are determined by applying the Ri-number approach to radiosonde profiles. A preliminary study of using ACARS aircraft data for PBL verification is also conducted.
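
    The Ri-number approach referred to here diagnoses PBL depth as the lowest level at which the bulk Richardson number, computed between the surface and that level, exceeds a critical value (commonly 0.25). A minimal sketch on a radiosonde-like profile (illustrative data; simplifications such as ignoring virtual-temperature effects are assumptions):

        import numpy as np

        G = 9.81  # gravitational acceleration, m s^-2

        def pbl_height(z, theta, u, v, ri_crit=0.25):
            """Lowest height where the surface-based bulk Richardson number
            first exceeds ri_crit."""
            for k in range(1, len(z)):
                dz = z[k] - z[0]
                shear2 = (u[k] - u[0]) ** 2 + (v[k] - v[0]) ** 2
                ri = G * (theta[k] - theta[0]) * dz / (theta[0] * max(shear2, 1e-6))
                if ri > ri_crit:
                    return z[k]
            return z[-1]

        # Illustrative sounding: height (m), potential temperature (K), wind (m/s).
        z = np.array([10, 100, 300, 600, 900, 1200, 1500])
        theta = np.array([300.0, 300.1, 300.2, 300.3, 301.5, 303.0, 305.0])
        u = np.array([2.0, 4.0, 6.0, 7.0, 8.0, 9.0, 10.0])
        v = np.zeros_like(u)
        print("diagnosed PBL height:", pbl_height(z, theta, u, v), "m")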

  6. Critical Surface Cleaning and Verification Alternatives

    NASA Technical Reports Server (NTRS)

    Melton, Donald M.; McCool, A. (Technical Monitor)

    2000-01-01

    As a result of federal and state requirements, historical critical cleaning and verification solvents such as Freon 113, Freon TMC, and Trichloroethylene (TCE) are either highly regulated or no longer available. Interim replacements such as HCFC 225 have been qualified; however, toxicity and future phase-out regulations necessitate long-term solutions. The scope of this project was to qualify a safe and environmentally compliant LOX surface verification alternative to Freon 113, TCE and HCFC 225. The main effort was focused on initiating the evaluation and qualification of HCFC 225G as an alternate LOX verification solvent. The project was scoped in FY 99/00 to perform LOX compatibility, cleaning efficiency and qualification on flight hardware.

  7. Land Ice Verification and Validation Kit

    SciTech Connect

    2015-07-15

    To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, Python-based, extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of tests where differences occur.

  8. Packaged low-level waste verification system

    SciTech Connect

    Tuite, K.T.; Winberg, M.; Flores, A.Y.; Killian, E.W.; McIsaac, C.V.

    1996-08-01

    Currently, states and low-level radioactive waste (LLW) disposal site operators have no method of independently verifying the radionuclide content of packaged LLW that arrives at disposal sites. At this time, disposal sites rely on LLW generator shipping manifests and accompanying records to ensure that the LLW received meets the waste acceptance criteria. An independent verification system would provide a method of checking generator LLW characterization methods and help ensure that LLW disposed of at these facilities meets requirements. The Mobile Low-Level Waste Verification System (MLLWVS) provides the equipment, software, and methods to enable independent verification of LLW shipping records and thereby ensure that disposal site waste acceptance criteria are met. The MLLWVS was developed under a cost-share subcontract between WMG, Inc., and Lockheed Martin Idaho Technologies through the Department of Energy's National Low-Level Waste Management Program at the Idaho National Engineering Laboratory (INEL).

  9. Land Ice Verification and Validation Kit

    Energy Science and Technology Software Center (ESTSC)

    2015-07-15

    To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, Python-based, extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of tests where differences occur.

  10. Active alignment/contact verification system

    DOEpatents

    Greenbaum, William M.

    2000-01-01

    A system involving an active (i.e., electrical) technique for the verification of: 1) close-tolerance mechanical alignment between two components, and 2) electrical contact between mating parts through an elastomeric interface. For example, the two components may be an alumina carrier and a printed circuit board: two extremely small, high-density mating parts that require alignment within a fraction of a mil, as well as a specified point of engagement at the interface between the parts. The system comprises pairs of conductive structures defined in the surface layers of the alumina carrier and the printed circuit board, for example. The first pair of conductive structures relates to item (1) above and permits alignment verification between mating parts. The second pair of conductive structures relates to item (2) above and permits verification of electrical contact between mating parts.

  11. GHG MITIGATION TECHNOLOGY PERFORMANCE EVALUATIONS UNDERWAY AT THE GHG TECHNOLOGY VERIFICATION CENTER

    EPA Science Inventory

    The paper outlines the verification approach and activities of the Greenhouse Gas (GHG) Technology Verification Center, one of 12 independent verification entities operating under the U.S. EPA-sponsored Environmental Technology Verification (ETV) program. (NOTE: The ETV program...

  12. Implications of Non-Systematic Observations for Verification of Forecasts of Aviation Weather Variables

    NASA Astrophysics Data System (ADS)

    Brown, B. G.; Young, G. S.; Fowler, T. L.

    2001-12-01

    Over the last several years, efforts have been undertaken to develop improved automated forecasts of weather phenomena that have large impacts on aviation, including turbulence and in-flight icing conditions. Verification of these forecasts - which has played a major role in their development - is difficult due to the nature of the limited observations available for these evaluations; in particular, voice reports by pilots (PIREPs). These reports, which are provided inconsistently by pilots, currently are the best observations of turbulence and in-flight icing conditions available. However, their sampling characteristics make PIREPs a difficult dataset to use for these evaluations. In particular, PIREPs have temporal and spatial biases (e.g., they are more frequent during daylight hours, and they occur most frequently along flight routes and in the vicinity of major airports, where aircraft are concentrated), and they are subjective. Most importantly, the observations are non-systematic. That is, observations are not consistently reported at the same location and time. This characteristic of the reports has numerous implications for the verification of forecasts of these phenomena. In particular, it is inappropriate to estimate certain common verification statistics that normally are of interest in forecast evaluations. For example, estimates of the false alarm ratio and critical success index are incorrect, due to the unrepresentativeness of the observations. Analytical explanations for this result have been developed, and the magnitudes of the errors associated with estimating these statistics have been estimated through Monte Carlo simulations. In addition, several approaches have been developed to compensate for these characteristics of PIREPs in verification studies, including methods for estimating confidence intervals for the verification statistics, which take into account their sampling variability. These approaches also have implications for verification
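
    The statistics at issue come from the standard 2x2 contingency table; stating their definitions makes the sampling problem concrete. With non-systematic PIREPs the "no" cells are not sampled representatively, so ratios involving false alarms, such as FAR and CSI, are biased even when the forecasts are good. A minimal sketch of the generic formulas (illustrative counts, not the study's data):

        def contingency_scores(hits, misses, false_alarms):
            """Probability of detection, false alarm ratio and critical success
            index from a 2x2 contingency table (correct nulls not needed here)."""
            pod = hits / (hits + misses)
            far = false_alarms / (hits + false_alarms)
            csi = hits / (hits + misses + false_alarms)
            return pod, far, csi

        # Illustrative counts from matched forecast/PIREP pairs.
        pod, far, csi = contingency_scores(hits=120, misses=30, false_alarms=80)
        print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")

    Because pilots rarely report where nothing happens, apparent "false alarms" may simply be unobserved events, inflating FAR and deflating CSI relative to their true values.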

  13. A tracking and verification system implemented in a clinical environment for partial HIPAA compliance

    NASA Astrophysics Data System (ADS)

    Guo, Bing; Documet, Jorge; Liu, Brent; King, Nelson; Shrestha, Rasu; Wang, Kevin; Huang, H. K.; Grant, Edward G.

    2006-03-01

    The paper describes the methodology for the clinical design and implementation of a Location Tracking and Verification System (LTVS) that has distinct benefits for the Imaging Department at the Healthcare Consultation Center II (HCCII), an outpatient imaging facility located on the USC Health Science Campus. A novel system for tracking and verification of patients and staff in a clinical environment using wireless and facial biometric technology to monitor and automatically identify patients and staff was developed in order to streamline patient workflow, protect against erroneous examinations and create a security zone to prevent and audit unauthorized access to patient healthcare data under the HIPAA mandate. This paper describes the system design and integration methodology based on initial clinical workflow studies within a clinical environment. An outpatient center was chosen as an initial first step for the development and implementation of this system.

  14. ENVIRONMENTAL TECHNOLOGY VERIFICATION: JOINT (NSF-EPA) VERIFICATION STATEMENT AND REPORT FOR THE REDUCTION OF NITROGEN IN DOMESTIC WASTEWATER FROM INDIVIDUAL HOMES, AQUAPOINT, INC. BIOCLERE MODEL 16/12 - 02/02/WQPC-SWP

    EPA Science Inventory

    Verification testing of the Aquapoint, Inc. (AQP) BioclereTM Model 16/12 was conducted over a thirteen month period at the Massachusetts Alternative Septic System Test Center (MASSTC), located at Otis Air National Guard Base in Bourne, Massachusetts. Sanitary sewerage from the ba...

  15. ENVIRONMENTAL TECHNOLOGY VERIFICATION: JOINT (NSF-EPA) VERIFICATION STATEMENT AND REPORT FOR THE REDUCTION OF NITROGEN IN DOMESTIC WASTEWATER FROM INDIVIDUAL HOMES, F. R. MAHONEY & ASSOC., AMPHIDROME SYSTEM FOR SINGLE FAMILY HOMES - 02/05/WQPC-SWP

    EPA Science Inventory

    Verification testing of the F.R. Mahoney Amphidrome System was conducted over a twelve month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located at the Otis Air National Guard Base in Bourne, MA. Sanitary sewerage from the base residential housing w...

  16. ENVIRONMENTAL TECHNOLOGY VERIFICATION: JOINT (NSF-EPA) VERIFICATION STATEMENT AND REPORT FOR THE REDUCTION OF NITROGEN IN DOMESTIC WASTEWATER FROM INDIVIDUAL HOMES, BIO-MICROBICS, INC., MODEL RETROFAST ®0.375

    EPA Science Inventory

    Verification testing of the Bio-Microbics RetroFAST® 0.375 System to determine the reduction of nitrogen in residential wastewater was conducted over a twelve-month period at the Mamquam Wastewater Technology Test Facility, located at the Mamquam Wastewater Treatment Plant. The R...

  17. ENVIRONMENTAL TECHNOLOGY VERIFICATION: JOINT (NSF-EPA) VERIFICATION STATEMENT AND REPORT FOR THE REDUCTION OF NITROGEN IN DOMESTIC WASTEWATER FROM INDIVIDUAL HOMES, SEPTITECH, INC. MODEL 400 SYSTEM - 02/04/WQPC-SWP

    EPA Science Inventory

    Verification testing of the SeptiTech Model 400 System was conducted over a twelve month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located at the Otis Air National Guard Base in Bourne, MA. Sanitary sewerage from the base residential housing was u...

  18. ENVIRONMENTAL TECHNOLOGY VERIFICATION: JOINT (NSF-EPA) VERIFICATION STATEMENT AND REPORT FOR THE REDUCTION OF NITROGEN IN DOMESTIC WASTEWATER FROM INDIVIDUAL RESIDENTIAL HOMES, WATERLOO BIOFILTER® MODEL 4-BEDROOM (NSF 02/03/WQPC-SWP)

    EPA Science Inventory

    Verification testing of the Waterloo Biofilter Systems (WBS), Inc. Waterloo Biofilter® Model 4-Bedroom system was conducted over a thirteen month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located at Otis Air National Guard Base in Bourne, Mas...

  19. Dose Verification of Stereotactic Radiosurgery Treatment for Trigeminal Neuralgia with Presage 3D Dosimetry System

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Thomas, A.; Newton, J.; Ibbott, G.; Deasy, J.; Oldham, M.

    2010-11-01

    Achieving adequate verification and quality assurance (QA) for radiosurgery treatment of trigeminal neuralgia (TGN) is particularly challenging because of the combination of very small fields, very high doses, and complex irradiation geometries (multiple gantry and couch combinations). TGN treatments place extreme requirements on dosimetry tools and QA techniques to ensure adequate verification. In this work we evaluate the potential of the Presage/optical-CT dosimetry system as a tool for the verification of TGN dose distributions at high resolution and in 3D. A TGN treatment was planned and delivered to a Presage 3D dosimeter positioned inside the Radiological Physics Center (RPC) head-and-neck IMRT credentialing phantom. A 6-arc treatment plan was created using the iPlan system, and a maximum dose of 80 Gy was delivered with a Varian Trilogy machine. The dose delivered to Presage was determined by optical-CT scanning using the Duke Large field-of-view Optical-CT Scanner (DLOS) in 3D, with an isotropic resolution of 0.7 mm³. DLOS scanning and reconstruction took about 20 minutes. 3D dose comparisons were made with the planning system. Good agreement was observed between the planned and measured 3D dose distributions, and this work provides strong support for the viability of Presage/optical-CT as a highly useful new approach for verification of this complex technique.
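
    Planned and measured dose distributions of this kind are conventionally compared with the gamma index, which blends a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1D sketch of the generic algorithm follows (the study's analysis is 3D; profiles and criteria below are illustrative):

        import numpy as np

        def gamma_1d(dose_ref, dose_eval, dx, dose_tol, dist_tol):
            """1D gamma index per reference point: gamma <= 1 means the evaluated
            profile agrees within dose_tol (fraction of max dose) and dist_tol
            (same unit as dx)."""
            x = np.arange(len(dose_ref)) * dx
            gamma = np.empty(len(dose_ref))
            for i in range(len(dose_ref)):
                dd = (dose_eval - dose_ref[i]) / (dose_tol * dose_ref.max())
                dr = (x - x[i]) / dist_tol
                gamma[i] = np.sqrt(dd ** 2 + dr ** 2).min()
            return gamma

        # Illustrative profiles on a 0.7 mm grid, 3%/3 mm criteria.
        x = np.arange(100) * 0.7
        ref = 80.0 * np.exp(-((x - 35.0) / 10.0) ** 2)
        meas = 80.0 * np.exp(-((x - 35.7) / 10.0) ** 2)  # 0.7 mm shift
        g = gamma_1d(ref, meas, dx=0.7, dose_tol=0.03, dist_tol=3.0)
        print("gamma pass rate:", np.mean(g <= 1.0))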

  20. Groth Deep Locations Image

    NASA Technical Reports Server (NTRS)

    2003-01-01

    NASA's Galaxy Evolution Explorer photographed this ultraviolet color blowup of the Groth Deep Image on June 22 and June 23, 2003. Hundreds of galaxies are detected in this portion of the image, and the faint red galaxies are believed to be 6 billion light years away. The white boxes show the locations of these distant galaxies, of which more than 100 can be detected in this image. NASA astronomers expect to detect 10,000 such galaxies after extrapolating to the full image at a deeper exposure level.

    The Galaxy Evolution Explorer mission is led by the California Institute of Technology, which is also responsible for the science operations and data analysis. NASA's Jet Propulsion Laboratory, Pasadena, Calif., a division of Caltech, manages the mission and built the science instrument. The mission was developed under NASA's Explorers Program, managed by the Goddard Space Flight Center, Greenbelt, Md. The mission's international partners include South Korea and France.

  1. On Backward-Style Anonymity Verification

    NASA Astrophysics Data System (ADS)

    Kawabe, Yoshinobu; Mano, Ken; Sakurada, Hideki; Tsukada, Yasuyuki

    Many Internet services and protocols should guarantee anonymity; for example, an electronic voting system should prevent the disclosure of who voted for which candidate. To prove trace anonymity, an extension of the formulation of anonymity by Schneider and Sidiropoulos, this paper presents an inductive method based on backward anonymous simulations. We show that the existence of an image-finite backward anonymous simulation implies trace anonymity. We also demonstrate the anonymity verification of an e-voting protocol (the FOO protocol) with our backward anonymous simulation technique. When proving trace anonymity, this paper employs a computer-assisted verification tool based on a theorem prover.

  2. 340 and 310 drawing field verification

    SciTech Connect

    Langdon, J.

    1996-09-27

    The purpose of the drawing field verification work plan is to provide reliable drawings for the 310 Treated Effluent Disposal Facility (TEDF) and 340 Waste Handling Facility (340 Facility). The initial scope of this work plan is to provide field verified and updated versions of all the 340 Facility essential drawings. This plan can also be used for field verification of any other drawings that the facility management directs to be so updated. Any drawings revised by this work plan will be issued in an AutoCAD format.

  3. Verification of Plan Models Using UPPAAL

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Muscettola, Nicola; Havelund, Klaus; Lau, Sonie (Technical Monitor)

    2001-01-01

    This paper describes work on the verification of HSTS, the planner and scheduler of the Remote Agent autonomous control system deployed on Deep Space 1 (DS1). The verification is done using UPPAAL, a real-time model checking tool. We start by motivating our work in the introduction. Then we give a brief description of HSTS and UPPAAL. After that, we give a mapping of HSTS models into UPPAAL and we present samples of plan model properties one may want to verify. Finally, we conclude with a summary.
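
    UPPAAL itself checks timed-automata properties, but the flavor of a safety property over a plan model can be sketched with an untimed toy. The Python below is illustrative only (the toy transition system and names are not the HSTS-to-UPPAAL mapping): it checks an invariant by explicit-state breadth-first search, reporting a violation if any reachable state breaks it.

```python
# Toy explicit-state safety check, in the spirit of model checking a plan
# invariant. Everything here is a hypothetical stand-in, not UPPAAL itself.
from collections import deque

def holds_everywhere(initial, successors, invariant):
    """Return True if `invariant` holds in every reachable state."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return False            # a reachable state violates the property
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Toy plan model: states are (activity_a, activity_b) resource claims.
def successors(state):
    a, b = state
    return [((a + 1) % 3, b), (a, (b + 1) % 3)]

# Invariant: two activities never claim the same resource.
print(holds_everywhere((0, 1), successors, lambda s: s[0] != s[1]))
# False: a state with a shared resource claim is reachable.
```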

  4. The formal verification of generic interpreters

    NASA Technical Reports Server (NTRS)

    Windley, P.; Levitt, K.; Cohen, G. C.

    1991-01-01

    Task assignment 3 of the design and validation of digital flight control systems suitable for fly-by-wire applications is studied. Task 3 is associated with formal verification of embedded systems. In particular, results are presented that provide a methodological approach to microprocessor verification. A hierarchical decomposition strategy for specifying microprocessors is also presented. A theory of generic interpreters is presented that can be used to model microprocessor behavior. The generic interpreter theory abstracts away the details of instruction functionality, leaving a general model of what an interpreter does.
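
    The core idea, a fixed stepping mechanism with instruction functionality abstracted into a parameter, can be sketched in a few lines. This Python toy is illustrative only; the report's theory is formulated in a theorem prover, not in code, and all names below are invented.

```python
# A minimal sketch of a "generic interpreter": the fetch/step machinery is
# fixed, while the meaning of each instruction is supplied as a parameter.
from typing import Callable, Dict, Tuple

State = Dict[str, int]                      # e.g. registers plus a program counter
Semantics = Dict[str, Callable[[State], State]]

def step(state: State, program: Tuple[str, ...], semantics: Semantics) -> State:
    """One interpreter step: fetch the instruction at the program counter
    and apply its externally supplied meaning."""
    instr = program[state["pc"]]
    return semantics[instr](state)

# One concrete instantiation of the generic model.
semantics: Semantics = {
    "INC": lambda s: {**s, "acc": s["acc"] + 1, "pc": s["pc"] + 1},
    "NOP": lambda s: {**s, "pc": s["pc"] + 1},
}
state: State = {"acc": 0, "pc": 0}
for _ in range(3):
    state = step(state, ("INC", "NOP", "INC"), semantics)
print(state)  # {'acc': 2, 'pc': 3}
```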

  5. Design and ground verification of proximity operations

    NASA Astrophysics Data System (ADS)

    Tobias, A.; Ankersen, F.; Fehse, W.; Pauvert, C.; Pairot, J.

    This paper describes the approach to guidance, navigation, and control (GNC) design and verification for proximity operations. The most critical part of the rendezvous mission is the proximity operations phase, when the distance between chaser and target is below approximately 20 m. Safety is the overriding consideration in the design of the GNC system. Requirements on the GNC system also stem from the allocation of performance between proximity operations and the mating process, docking, or capture for berthing. Whereas the design process follows a top-down approach, the verification process goes bottom-up in a stepwise way according to the development stage.

  6. Electric power system test and verification program

    NASA Technical Reports Server (NTRS)

    Rylicki, Daniel S.; Robinson, Frank, Jr.

    1994-01-01

    Space Station Freedom's (SSF's) electric power system (EPS) hardware and software verification is performed at all levels of integration, from components to assembly and system level tests. Careful planning is essential to ensure the EPS is tested properly on the ground prior to launch. The results of the test performed on breadboard model hardware and analyses completed to date have been evaluated and used to plan for design qualification and flight acceptance test phases. These results and plans indicate the verification program for SSF's 75-kW EPS would have been successful and completed in time to support the scheduled first element launch.

  7. Adjustment of Sensor Locations During Thermal Property Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Milos, Frank S.; Marschall, Jochen; Rasky, Daniel J. (Technical Monitor)

    1996-01-01

    The temperature-dependent thermal properties of a material may be evaluated from transient temperature histories using nonlinear parameter estimation techniques. The usual approach is to minimize the sum of the squared errors between measured and calculated temperatures at specific locations in the body. Temperature measurements are usually made with thermocouples, and it is customary to take thermocouple locations as known and fixed during parameter estimation computations. In fact, thermocouple locations are never known exactly. Location errors on the order of the thermocouple wire diameter are intrinsic to most common instrumentation procedures (e.g., inserting a thermocouple into a drilled hole), and additional errors can be expected for delicate materials, difficult installations, large thermocouple beads, etc. Thermocouple location errors are especially significant when estimating thermal properties of low-diffusivity materials, which can sustain large temperature gradients during testing. In the present work, a parameter estimation formulation is presented which allows for the direct inclusion of thermocouple positions into the primary parameter estimation procedure. It is straightforward to set bounds on thermocouple locations which exclude non-physical locations and are consistent with installation tolerances. Furthermore, bounds may be tightened to an extent consistent with any independent verification of thermocouple location, such as x-raying, and so the procedure is entirely consonant with experimental information. A mathematical outline of the procedure is given and its implementation is illustrated through numerical examples characteristic of lightweight, high-temperature ceramic insulation during transient heating. The efficacy and the errors associated with the procedure are discussed.
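
    A minimal sketch of this formulation, assuming a stand-in one-mode conduction model rather than the paper's actual transient thermal solver: the sensor location enters the parameter vector alongside the diffusivity, with bounds set by the installation tolerance. All values and names below are illustrative.

```python
# Joint estimation of thermal diffusivity and thermocouple location,
# with the location bounded to nominal position +/- 0.5 mm.
import numpy as np
from scipy.optimize import least_squares

L = 0.01                                   # slab thickness, m (illustrative)
times = np.linspace(0.1, 10.0, 50)         # s

def model_temperature(alpha, x, t):
    # Stand-in model: leading Fourier mode of transient conduction in a slab.
    k = np.pi / L
    return 100.0 * np.exp(-alpha * k**2 * t) * np.sin(k * x)

# Synthetic "data" from a sensor actually at x = 2.3 mm, nominally at 2.0 mm.
data = model_temperature(1.2e-6, 0.0023, times)

def residuals(p):
    alpha, x = p                           # diffusivity (m^2/s), location (m)
    return model_temperature(alpha, x, times) - data

fit = least_squares(residuals, x0=[1.0e-6, 0.0020],
                    bounds=([1.0e-7, 0.0015], [1.0e-5, 0.0025]))
print(fit.x)  # recovered (alpha, x), close to (1.2e-6, 0.0023)
```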

  8. VERIFICATION OF A TOXIC ORGANIC SUBSTANCE TRANSPORT AND BIOACCUMULATION MODEL

    EPA Science Inventory

    A field verification of the Toxic Organic Substance Transport and Bioaccumulation Model (TOXIC) was conducted using the insecticide dieldrin and the herbicides alachlor and atrazine as the test compounds. The test sites were two Iowa reservoirs. The verification procedure include...

  9. 19 CFR 181.74 - Verification visit procedures.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... generates a reliable receipt, to the CBP officer who gave the notification provided for in § 181.73 of this... otherwise cooperate during the verification visit shall mean that the verification visit never took...

  10. 19 CFR 181.74 - Verification visit procedures.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... generates a reliable receipt, to the CBP officer who gave the notification provided for in § 181.73 of this... otherwise cooperate during the verification visit shall mean that the verification visit never took...

  11. Generic Protocol for the Verification of Ballast Water Treatment Technology

    EPA Science Inventory

    In anticipation of the need to address performance verification and subsequent approval of new and innovative ballast water treatment technologies for shipboard installation, the U.S. Coast Guard and the Environmental Protection Agency's Environmental Technology Verification Progr...

  12. ETV INTERNATIONAL OUTREACH ACTIVITIES (ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) PROGRAM)

    EPA Science Inventory

    The Environmental Technology Verification (ETV) Program's international outreach activities have extended as far as Canada, Germany, Taiwan and the Philippines. Vendors from Canada and Germany were hosted at verification tests of turbidimeters. In May 1999, EPA's ETV Coordinator...

  13. ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM FOR MONITORING AND CHARACTERIZATION

    EPA Science Inventory

    The Environmental Technology Verification Program is a service of the Environmental Protection Agency designed to accelerate the development and commercialization of improved environmental technology through third-party verification and reporting of performance. The goal of ETV i...

  14. Methods for identification and verification using vacuum XRF system

    NASA Technical Reports Server (NTRS)

    Schramm, Fred (Inventor); Kaiser, Bruce (Inventor)

    2005-01-01

    Apparatus and methods in which one or more elemental taggants intrinsically located in an object are detected by x-ray fluorescence analysis under vacuum conditions to identify or verify the object's elemental content for elements with lower atomic numbers. By using x-ray fluorescence analysis, the apparatus and methods of the invention are simple and easy to use, and provide non-line-of-sight detection to establish the origin of objects, their point of manufacture, authenticity, verification, security, and the presence of impurities. The invention is extremely advantageous because it provides the capability to measure lower-atomic-number elements in the field with a portable instrument.

  15. A Verification of MCNP6 FMESH Tally Capabilities

    SciTech Connect

    Swift, Alicia L.; McKigney, Edward A.; Schirato, Richard C.; Robinson, Alex Philip; Temple, Brian Allen

    2015-02-10

    This work serves to verify the MCNP6 FMESH capability through comparison to two types of data. FMESH tallies, binned in time, were generated on an ideal detector face for neutrons undergoing a single scatter in a graphite target. For verification, FMESH results were compared to analytic calculations of the nonrelativistic TOF for elastic and inelastic single neutron scatters (TOF for the purposes of this paper is the time for a neutron to travel from its scatter location in the graphite target to the detector face). FMESH tally results were also compared to F4 tally results, an MCNP tally that calculates fluence in the same way as the FMESH tally. The FMESH tally results agree well with the analytic results and the F4 tally; hence, it is believed that, for simple geometries, MCNP6 FMESH tallies represent the physics of neutron scattering very well.
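
    The analytic side of such a comparison is compact enough to sketch. The Python below computes the nonrelativistic TOF from a scatter point to a detector face for a single elastic scatter, using standard two-body kinematics; the energy, target (carbon, A ≈ 11.9), angle, and flight distance are illustrative assumptions, not the paper's configuration.

```python
# Nonrelativistic time of flight after a single elastic neutron scatter.
import numpy as np

NEUTRON_MASS = 1.674927e-27      # kg
MEV_TO_J = 1.602176e-13          # J per MeV

def tof_elastic(E0_mev, mass_ratio_A, theta_cm, distance_m):
    """TOF from scatter location to detector face.

    Outgoing lab energy from standard two-body elastic kinematics for a
    neutron scattering off a nucleus of mass A (in neutron masses):
        E' = E0 * (1 + A^2 + 2*A*cos(theta_cm)) / (1 + A)^2
    """
    A = mass_ratio_A
    E1 = E0_mev * (1 + A**2 + 2 * A * np.cos(theta_cm)) / (1 + A) ** 2
    v = np.sqrt(2 * E1 * MEV_TO_J / NEUTRON_MASS)   # nonrelativistic speed
    return distance_m / v

# e.g. a 2 MeV neutron scattering elastically off carbon at 90 degrees CM,
# with the detector face 1 m from the scatter location:
print(tof_elastic(2.0, 11.9, np.pi / 2, 1.0))  # seconds
```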

  16. Efficient and Secure Fingerprint Verification for Embedded Devices

    NASA Astrophysics Data System (ADS)

    Yang, Shenglin; Sakiyama, Kazuo; Verbauwhede, Ingrid

    2006-12-01

    This paper describes a secure and memory-efficient embedded fingerprint verification system. It shows how a fingerprint verification module originally developed to run on a workstation can be transformed and optimized in a systematic way to run in real time on an embedded device with limited memory and computation power. A complete fingerprint recognition module is a complex application that requires on the order of 1000 M unoptimized floating-point instruction cycles. The goal is to run both the minutiae extraction and the matching engines on a small embedded processor, in our case a 50 MHz LEON-2 softcore, which requires optimization and acceleration techniques at each design step. To speed up the fingerprint signal processing phase, we propose acceleration techniques at the algorithm level, at the software level to reduce the execution cycle count, and at the hardware level to distribute the system workload. In addition, a memory-trace-map-based memory reduction strategy is used to lower the system memory requirement, and specialized coprocessors are developed at the hardware level. As a result of these optimizations, we achieve a 65% reduction in execution time and a 67% reduction in memory storage requirement for the minutiae extraction process, compared with the reference implementation. The complete operation, that is, fingerprint capture, feature extraction, and matching, can be done in real time in less than 4 seconds.
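
    One software-level optimization of the kind described, removing floating-point work on a core without an FPU, can be sketched as a Q15 fixed-point multiply. This is a generic illustration of the technique, not code from the paper; the coefficient and sample values are invented.

```python
# Q15 fixed-point arithmetic: values in [-1, 1) are scaled by 2^15 and
# handled as integers, so a multiply needs no floating-point hardware.
Q15 = 1 << 15

def to_q15(x: float) -> int:
    return int(round(x * Q15))

def q15_mul(a: int, b: int) -> int:
    # Integer multiply, then shift back down to the Q15 scale.
    return (a * b) >> 15

coeff = to_q15(0.70710678)          # e.g. a hypothetical filter coefficient
sample = to_q15(0.5)                # a normalized sample value
product = q15_mul(coeff, sample)    # fixed-point multiply
print(product / Q15)                # ~0.3535, vs 0.70710678 * 0.5 exactly
```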

  17. Weather model verification using Sodankylä mast measurements

    NASA Astrophysics Data System (ADS)

    Kangas, Markku; Rontu, Laura; Fortelius, Carl; Aurela, Mika; Poikonen, Antti

    2016-04-01

    Sodankylä, in the heart of the Arctic Research Centre of the Finnish Meteorological Institute (FMI ARC) in northern Finland, is an ideal site for atmospheric and environmental research in the boreal and sub-Arctic zone. With temperatures ranging from -50 to +30 °C, it provides a challenging testing ground for numerical weather prediction (NWP) models as well as weather forecasting in general. An extensive set of measurements has been carried out in Sodankylä for more than 100 years. In 2000, a 48-m micrometeorological mast was erected in the area. In this article, the use of Sodankylä mast measurements in NWP model verification is described. Starting in 2000 with the NWP model HIRLAM and Sodankylä measurements, the verification system has since expanded to include comparisons between 12 NWP models and seven measurement masts distributed across Europe. A case study comparing forecast and observed radiation fluxes is also presented. It was found that three different radiation schemes, applicable in the NWP model HARMONIE-AROME, produced somewhat different downwelling longwave radiation fluxes during cloudy days, although this did not change the overall cold bias of the predicted screen-level temperature.
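
    The flux comparison behind such a verification reduces to simple error statistics. The sketch below is illustrative: the hourly values are invented stand-ins, not Sodankylä data, and it computes the bias and RMSE of forecast downwelling longwave radiation against mast observations.

```python
# Bias and RMSE of forecast fluxes against mast observations.
import numpy as np

def bias_and_rmse(forecast, observed):
    error = np.asarray(forecast) - np.asarray(observed)
    return error.mean(), np.sqrt((error ** 2).mean())

# Hypothetical hourly downwelling longwave fluxes (W m-2) for a cloudy day.
obs = np.array([310.0, 312.0, 315.0, 318.0, 316.0, 314.0])
fcst = np.array([302.0, 305.0, 309.0, 311.0, 310.0, 307.0])

bias, rmse = bias_and_rmse(fcst, obs)
print(f"bias = {bias:+.1f} W m-2, rmse = {rmse:.1f} W m-2")
```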

  18. Integrated Verification Experiment data collected as part of the Los Alamos National Laboratory's Source Region Program

    SciTech Connect

    Fitzgerald, T.J.; Carlos, R.C.; Argo, P.E.

    1993-01-21

    As part of the Integrated Verification Experiment (IVE), we deployed a network of HF ionospheric sounders to detect the effects of acoustic waves generated by surface ground motion following underground nuclear tests at the Nevada Test Site. The network sampled up to four geographic locations in the ionosphere, from almost directly above surface ground zero out to a horizontal range of 60 km. We present sample results for four of the IVEs: Misty Echo, Texarkana, Mineral Quarry, and Bexar.

  19. Interim Letter Report - Verification Survey of 19 Grids in the Lester Flat Area, David Witherspoon Inc. 1630 Site Knoxville, Tennessee

    SciTech Connect

    P.C. Weaver

    2008-10-17

    Verification surveys were performed of 19 available grids located in the Lester Flat Area at the David Witherspoon Site. The survey grids included E11, E12, E13, F11, F12, F13, F14, F15, G15, G16, G17, H16, H17, H18, X16, X17, X18, K16, and J16.

  20. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    NASA Technical Reports Server (NTRS)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.