Science.gov

Sample records for distributed location verification

  1. Collaborative Localization and Location Verification in WSNs

    PubMed Central

    Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang

    2015-01-01

    Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed, considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine locations by incremental refinement. To address the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948
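
    The incremental-refinement idea above can be illustrated with a minimal force-based sketch (not the paper's actual virtual force model; anchor coordinates, measured ranges, and the step size are all assumed values): each anchor pulls or pushes the position estimate along the line joining them, in proportion to the ranging residual.

```python
import math

def refine_position(est, anchors, step=0.25, iters=200):
    """Incrementally refine a node's position estimate.

    anchors: list of ((x, y), measured_distance) pairs. Each anchor
    exerts a "virtual force" on the estimate proportional to the
    ranging residual (measured distance minus current distance),
    directed along the line between the estimate and the anchor.
    """
    x, y = est
    for _ in range(iters):
        fx = fy = 0.0
        for (ax, ay), meas in anchors:
            dx, dy = ax - x, ay - y
            d = math.hypot(dx, dy) or 1e-9
            resid = d - meas        # > 0: estimate too far, pull in
            fx += resid * dx / d
            fy += resid * dy / d
        x += step * fx              # small step toward equilibrium
        y += step * fy
    return x, y

# Node truly at (3, 4); three anchors with noise-free ranges.
anchors = [((0.0, 0.0), 5.0), ((6.0, 0.0), 5.0), ((0.0, 8.0), 5.0)]
print(refine_position((1.0, 1.0), anchors))  # converges near (3, 4)
```

    This is plain gradient descent on the squared ranging residuals. One plausible reading of force-based verification, consistent with the abstract but not spelled out there, is that large residual forces at equilibrium flag drifted nodes or malicious anchors.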

  2. Verification of LHS distributions.

    SciTech Connect

    Swiler, Laura Painton

    2006-04-01

    This document provides verification test results for normal, lognormal, and uniform distributions that are used in Sandia's Latin Hypercube Sampling (LHS) software. The purpose of this testing is to verify that the sample values being generated in LHS are distributed according to the desired distribution types. The testing of distribution correctness is done by examining summary statistics, graphical comparisons using quantile-quantile plots, and formal statistical tests such as the chi-square test, the Kolmogorov-Smirnov test, and the Anderson-Darling test. The overall results from the testing indicate that the generation of normal, lognormal, and uniform distributions in LHS is acceptable.
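
    The kind of check described can be sketched in a few lines (hand-rolled here; the report tests Sandia's LHS code itself and also uses quantile-quantile plots and other tests): draw a Latin hypercube sample for a uniform distribution, then compute the one-sample Kolmogorov-Smirnov distance against the theoretical CDF.

```python
import random

def lhs_uniform(n, lo=0.0, hi=1.0, seed=1):
    """Latin hypercube sample of size n from Uniform(lo, hi):
    one draw from each of n equal-probability strata, then shuffled."""
    rng = random.Random(seed)
    samples = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
    rng.shuffle(samples)
    return samples

def ks_statistic_uniform(samples, lo=0.0, hi=1.0):
    """One-sample Kolmogorov-Smirnov distance between the empirical
    CDF and the Uniform(lo, hi) CDF."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = (x - lo) / (hi - lo)
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

xs = lhs_uniform(1000)
print(ks_statistic_uniform(xs))  # small for a correct sampler
```

    For a stratified LHS sample the K-S distance is bounded by 1/n, so even modest sample sizes pass comfortably; a buggy sampler shows up as a much larger distance.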

  3. 37 CFR 384.7 - Verification of royalty distributions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distributions. 384.7 Section 384.7 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF... BUSINESS ESTABLISHMENT SERVICES § 384.7 Verification of royalty distributions. (a) General. This section prescribes procedures by which any Copyright Owner may verify the royalty distributions made by...

  4. The Error Distribution of BATSE GRB Location

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Pendleton, Geoffrey N.; Kippen, R. Marc; Brainerd, J. J.; Hurley, Kevin; Connaughton, Valerie; Meegan, Charles A.

    1998-01-01

    We develop empirical probability models for BATSE GRB location errors by a Bayesian analysis of the separations between BATSE GRB locations and locations obtained with the InterPlanetary Network (IPN). Models are compared and their parameters estimated using 394 GRBs with single IPN annuli and 20 GRBs with intersecting IPN annuli. Most of the analysis is for the 4B (rev) BATSE catalog; earlier catalogs are also analyzed. The simplest model that provides a good representation of the error distribution has 78% of the locations in a 'core' term with a systematic error of 1.85 degrees and the remainder in an extended tail with a systematic error of 5.36 degrees, implying a 68% confidence region for bursts with negligible statistical errors of 2.3 degrees. There is some evidence for a more complicated model in which the error distribution depends on the BATSE datatype that was used to obtain the location. Bright bursts are typically located using the CONT datatype, and according to the more complicated model, the 68% confidence region for CONT-located bursts with negligible statistical errors is 2.0 degrees.

  5. Distributed Avionics and Software Verification for the Constellation Program

    NASA Technical Reports Server (NTRS)

    Hood, Laura E.; Adams, James E.

    2008-01-01

    This viewgraph presentation reviews the planned verification of the avionics and software being developed for the Constellation program.The Constellation Distributed System Integration Laboratory (DSIL) will consist of multiple System Integration Labs (SILs), Simulators, Emulators, Testbeds, and Control Centers interacting with each other over a broadband network to provide virtual test systems for multiple test scenarios.

  6. Modeling and Verification of Distributed Generation and Voltage Regulation Equipment for Unbalanced Distribution Power Systems; Annual Subcontract Report, June 2007

    SciTech Connect

    Davis, M. W.; Broadwater, R.; Hambrick, J.

    2007-07-01

    This report summarizes the development of models for distributed generation and distribution circuit voltage regulation equipment for unbalanced power systems and their verification through actual field measurements.

  7. Mobile agent location in distributed environments

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Argyropoulos, I. P.

    2012-12-01

    An agent is a small program acting on behalf of a user or an application which plays the role of a user. Artificial intelligence can be encapsulated in agents so that they are capable of both behaving autonomously and showing an elementary decision ability regarding movement and some specific actions. Therefore they are often called autonomous mobile agents. In a distributed system, they can move themselves from one processing node to another through the interconnecting network infrastructure. Their purpose is to collect useful information and to carry it back to their user. Agents are also used to start, monitor, and stop processes running on the individual interconnected processing nodes of computer cluster systems. An agent has a unique id to discriminate itself from other agents, and a current position. The position can be expressed as the address of the processing node which currently hosts the agent. Very often, it is necessary for a user, a processing node, or another agent to know the current position of an agent in a distributed system. Several procedures and algorithms have been proposed for locating the positions of mobile agents. The most basic of all employs a fixed computing node which acts as an agent position repository, receiving messages from all the moving agents and keeping records of their current positions. The fixed node responds to position queries and informs users, other nodes, and other agents about the position of an agent. Herein, a model is proposed that considers pairs and triples of agents instead of single ones. A location method, which is investigated in this paper, attempts to exploit this model.
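
    The fixed-repository scheme described above amounts to a registry keyed by agent id; a minimal sketch (class and method names are illustrative, not from the paper):

```python
class PositionRepository:
    """Fixed node acting as an agent-position repository: agents
    report their moves; users, nodes, and other agents query
    current positions."""
    def __init__(self):
        self._positions = {}   # agent id -> hosting node address

    def report_move(self, agent_id, node_address):
        self._positions[agent_id] = node_address

    def locate(self, agent_id):
        return self._positions.get(agent_id)  # None if unknown

repo = PositionRepository()
repo.report_move("agent-42", "node-A")
repo.report_move("agent-42", "node-C")   # agent migrated
print(repo.locate("agent-42"))           # node-C
```

    The paper's contribution is to move beyond this single-repository baseline by reasoning about pairs and triples of agents instead of single ones.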

  8. GENERIC VERIFICATION PROTOCOL: DISTRIBUTED GENERATION AND COMBINED HEAT AND POWER FIELD TESTING PROTOCOL

    EPA Science Inventory

    This report is a generic verification protocol by which EPA’s Environmental Technology Verification program tests newly developed equipment for distributed generation of electric power, usually micro-turbine generators and internal combustion engine generators. The protocol will ...

  9. DOE-EPRI distributed wind Turbine Verification Program (TVP III)

    SciTech Connect

    McGowin, C.; DeMeo, E.; Calvert, S.

    1997-12-31

    In 1992, the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE) initiated the Utility Wind Turbine Verification Program (TVP). The goal of the program is to evaluate prototype advanced wind turbines at several sites developed by U.S. electric utility companies. Two 6-MW wind projects have been installed under the TVP program, by Central and South West Services in Fort Davis, Texas and by Green Mountain Power Corporation in Searsburg, Vermont. In early 1997, DOE and EPRI selected five more utility projects to evaluate distributed wind generation using smaller "clusters" of wind turbines connected directly to the electricity distribution system. This paper presents an overview of the objectives, scope, and status of the EPRI-DOE TVP program and the existing and planned TVP projects.

  10. Reconstructing Spatial Distributions from Anonymized Locations

    SciTech Connect

    Horey, James L; Forrest, Stephanie; Groat, Michael

    2012-01-01

    Devices such as mobile phones, tablets, and sensors are often equipped with GPS that accurately reports a person's location. Combined with wireless communication, these devices enable a wide range of new social tools and applications. These same qualities, however, leave location-aware applications vulnerable to privacy violations. This paper introduces the Negative Quad Tree, a privacy protection method for location-aware applications. The method is broadly applicable to applications that use spatial density information, such as social applications that measure the popularity of social venues. The method employs a simple anonymization algorithm running on mobile devices, and a more complex reconstruction algorithm on a central server. This strategy is well suited to low-powered mobile devices. The paper analyzes the accuracy of the reconstruction method in a variety of simulated and real-world settings and demonstrates that the method is accurate enough to be used in many real-world scenarios.
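
    The abstract does not spell out the Negative Quad Tree algorithm itself; as a point of reference, the spatial-density information such methods work with is the per-cell count of an ordinary quadtree, sketched here with assumed unit-square bounds:

```python
from collections import Counter

def quad_key(x, y, depth, bounds=(0.0, 0.0, 1.0, 1.0)):
    """Quadtree cell key for a point: at each level pick one of the
    four quadrants (encoded as a digit 0-3), refining the box."""
    x0, y0, x1, y1 = bounds
    key = ""
    for _ in range(depth):
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        qx, qy = int(x >= mx), int(y >= my)
        key += str(qx * 2 + qy)
        x0, x1 = (mx, x1) if qx else (x0, mx)
        y0, y1 = (my, y1) if qy else (y0, my)
    return key

points = [(0.1, 0.1), (0.15, 0.2), (0.9, 0.9)]
density = Counter(quad_key(x, y, depth=2) for x, y in points)
print(density)   # two points share the lower-left cell "00"
```

    The privacy problem is that reporting such a cell key directly reveals a device's whereabouts; per the abstract, the paper's method instead has devices report anonymized data from which the central server reconstructs these density counts.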

  11. Design and Verification of a Distributed Communication Protocol

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    The safety of remotely operated vehicles depends on the correctness of the distributed protocol that facilitates the communication between the vehicle and the operator. A failure in this communication can result in catastrophic loss of the vehicle. To complicate matters, the communication system may be required to satisfy several, possibly conflicting, requirements. The design of protocols is typically an informal process based on successive iterations of a prototype implementation. Yet distributed protocols are notoriously difficult to get correct using such informal techniques. We present a formal specification of the design of a distributed protocol intended for use in a remotely operated vehicle, which is built from the composition of several simpler protocols. We demonstrate proof strategies that allow us to prove properties of each component protocol individually while ensuring that the property is preserved in the composition forming the entire system. Given that designs are likely to evolve as additional requirements emerge, we show how we have automated most of the repetitive proof steps to enable verification of rapidly changing designs.

  12. Distributed Engine Control Empirical/Analytical Verification Tools

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan

    2013-01-01

    NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to easily assemble a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating distributed engine control system (DCS) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the C-MAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique

  13. The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases

    ERIC Educational Resources Information Center

    Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.

    2010-01-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing…

  14. Radionuclide Inventory Distribution Project Data Evaluation and Verification White Paper

    SciTech Connect

    NSTec Environmental Restoration

    2010-05-17

    Testing of nuclear explosives caused widespread contamination of surface soils on the Nevada Test Site (NTS). Atmospheric tests produced the majority of this contamination. The Radionuclide Inventory and Distribution Program (RIDP) was developed to determine distribution and total inventory of radionuclides in surface soils at the NTS to evaluate areas that may present long-term health hazards. The RIDP achieved this objective with aerial radiological surveys, soil sample results, and in situ gamma spectroscopy. This white paper presents the justification to support the use of RIDP data as a guide for future evaluation and to support closure of Soils Sub-Project sites under the purview of the Federal Facility Agreement and Consent Order. Use of the RIDP data as part of the Data Quality Objective process is expected to provide considerable cost savings and accelerate site closures. The following steps were completed: - Summarize the RIDP data set and evaluate the quality of the data. - Determine the current uses of the RIDP data and cautions associated with its use. - Provide recommendations for enhancing data use through field verification or other methods. The data quality is sufficient to utilize RIDP data during the planning process for site investigation and closure. Project planning activities may include estimating 25-millirem per industrial access year dose rate boundaries, optimizing characterization efforts, projecting final end states, and planning remedial actions. In addition, RIDP data may be used to identify specific radionuclide distributions, and augment other non-radionuclide dose rate data. Finally, the RIDP data can be used to estimate internal and external dose rates.

  15. Automated fault location and diagnosis on electric power distribution feeders

    SciTech Connect

    Zhu, J.; Lubkeman, D.L.; Girgis, A.A.

    1997-04-01

    This paper presents new techniques for locating and diagnosing faults on electric power distribution feeders. The proposed fault location and diagnosis scheme is capable of accurately identifying the location of a fault upon its occurrence, based on the integration of information available from disturbance recording devices with knowledge contained in a distribution feeder database. The developed fault location and diagnosis system can also be applied to the investigation of temporary faults that may not result in a blown fuse. The proposed fault location algorithm is based on the steady-state analysis of the faulted distribution network. To deal with the uncertainties inherent in the system modeling and the phasor estimation, the fault location algorithm has been adapted to estimate fault regions based on probabilistic modeling and analysis. Since the distribution feeder is a radial network, multiple possibilities of fault locations could be computed with measurements available only at the substation. To identify the actual fault location, a fault diagnosis algorithm has been developed to prune down and rank the possible fault locations by integrating the available pieces of evidence. Testing of the developed fault location and diagnosis system using field data has demonstrated its potential for practical use.
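
    The abstract gives no equations; a common steady-state starting point for feeder fault location, shown here only as a hedged sketch, is the reactance method: the apparent impedance seen at the substation during the fault yields a distance estimate along a homogeneous radial feeder (line parameters and fault values below are assumed). The paper goes well beyond this, estimating probabilistic fault regions and ranking multiple candidate locations on the radial network.

```python
def fault_distance_km(v_phasor, i_phasor, x_ohm_per_km):
    """Reactance-method sketch: the imaginary part of the apparent
    impedance seen at the substation is roughly proportional to the
    distance to the fault on a homogeneous radial feeder (fault
    resistance mostly perturbs the real part)."""
    z_apparent = v_phasor / i_phasor
    return z_apparent.imag / x_ohm_per_km

# Feeder with 0.1 + 0.35j ohm/km; fault 4 km out with an extra
# 0.5 ohm of fault resistance (assumed values).
z_fault = 4 * (0.1 + 0.35j) + 0.5
v = 11000.0 + 0.0j                    # substation voltage phasor
i = v / z_fault                       # fault current at substation
print(fault_distance_km(v, i, 0.35))  # ~4.0 km
```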

  16. Fault Location Methods for Ungrounded Distribution Systems Using Local Measurements

    NASA Astrophysics Data System (ADS)

    Xiu, Wanjing; Liao, Yuan

    2013-08-01

    This article presents novel fault location algorithms for ungrounded distribution systems. The proposed methods are capable of locating faults using voltage and current measurements obtained at the local substation. Two types of fault location algorithms, using line-to-neutral and line-to-line measurements, are presented. The network structure and parameters are assumed to be known. The network structure needs to be updated based on information obtained from the utility telemetry system. With the help of the bus impedance matrix, local voltage changes due to the fault can be expressed as a function of the fault currents. Since the bus impedance matrix contains information about the fault location, superimposed voltages at the local substation can be expressed as a function of fault location, through which the fault location can be solved. Simulation studies have been carried out based on a sample distribution power system. The evaluation shows that both types of methods yield very accurate fault location estimates.
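
    The superimposed-voltage relation can be made concrete: for a candidate fault bus f and measurement bus m, the voltage change is dV_m = -Z[m][f]·I_f, so scanning candidates for the best match against the measured change singles out the fault bus. A toy sketch with an assumed 3-bus impedance matrix (the article works with line-to-neutral or line-to-line quantities and solves for a continuous location along a line, not just a bus):

```python
def locate_fault_bus(z_bus, meas_bus, dv_measured, i_fault):
    """Scan candidate fault buses: the predicted voltage change at
    the measured bus is dV = -Z[m][f] * I_f; return the candidate
    minimizing the mismatch with the measured change."""
    best, best_err = None, float("inf")
    for f in range(len(z_bus)):
        dv_pred = -z_bus[meas_bus][f] * i_fault
        err = abs(dv_pred - dv_measured)
        if err < best_err:
            best, best_err = f, err
    return best

# Assumed 3-bus impedance matrix (ohms).
z = [[0.5 + 1.0j, 0.2 + 0.4j, 0.1 + 0.2j],
     [0.2 + 0.4j, 0.6 + 1.2j, 0.3 + 0.5j],
     [0.1 + 0.2j, 0.3 + 0.5j, 0.7 + 1.4j]]
i_f = 100.0 + 0.0j                      # estimated fault current
dv = -z[0][2] * i_f                     # fault actually at bus 2
print(locate_fault_bus(z, 0, dv, i_f))  # -> 2
```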

  17. An expert system for locating distribution system faults

    SciTech Connect

    Hsu, Y.Y.; Lu, F.C.; Chien, Y.; Liu, J.P.; Lin, J.T.; Yu, H.S.; Kuo, R.T.

    1991-01-01

    A rule-based expert system is designed to locate faults in a distribution system. Distribution system component data and network topology are stored in the database. A set of heuristic rules is compiled from the dispatchers' experience and embedded in the rule base. To locate distribution system faults, an inference engine is developed to perform deductive reasoning on the rules in the knowledge base. The inference engine comprises three major parts: the dynamic searching method, the backtracking approach, and the set intersection operation. The expert system is implemented on a personal computer using the artificial intelligence language PROLOG. To demonstrate the effectiveness of the proposed approach, the expert system has been applied to locate faults in a real underground distribution system.
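
    The deductive step of such a rule-based locator can be illustrated with a tiny forward-chaining engine (the rules below are invented examples, not the dispatchers' actual heuristics, and the real system also employs backtracking and set intersection):

```python
# Minimal forward chaining: rules are (premises, conclusion);
# fire any rule whose premises are all established facts, and
# repeat until no new fact appears.
rules = [
    ({"breaker_52A_open", "feeder_F1_dead"}, "fault_on_F1"),
    ({"fault_on_F1", "lateral_L3_no_voltage"}, "fault_section_L3"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

out = infer({"breaker_52A_open", "feeder_F1_dead",
             "lateral_L3_no_voltage"}, rules)
print("fault_section_L3" in out)  # True
```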

  18. Modeling and verification of distributed systems with labeled predicate transition nets

    NASA Astrophysics Data System (ADS)

    Lloret, Jean-Christophe

    Two main steps in the design of distributed systems are modeling and verification. Petri nets and CCS are two basic formal models. CCS is a modular language supporting compositional verification. Conversely, Petri net theory requires an accurate description of parallelism and focuses on global verification of properties. A structuring technique based on CCS concepts is introduced for predicate/transition nets. It consists of a high-level Petri net that permits the expression of communication with value passing. In particular, a Petri net composition operator, which can be interpreted as a multi-rendezvous between communicating systems, is defined. The multi-rendezvous allows abstract modeling with small state graphs. The developed formalism is highly convenient for refining abstract models into less abstract levels. Based on this work, a software tool supporting distributed system design and verification is developed. The advantage of this approach is shown in many research and industrial applications.
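
    The composition idea can be pictured with a token game in which a shared transition stands in for the multi-rendezvous: it fires only when every participating component enables it. A minimal sketch (place and transition names are invented; real predicate/transition nets also carry values on tokens):

```python
# Petri-net token game: a transition consumes one token from each
# input place and produces one in each output place. The shared
# transition "sync" belongs to both component nets, so firing it
# is a rendezvous: both sides must be ready.
marking = {"p_ready_A": 1, "p_ready_B": 1, "p_done_A": 0, "p_done_B": 0}
transitions = {
    "sync": (["p_ready_A", "p_ready_B"], ["p_done_A", "p_done_B"]),
}

def enabled(t, m):
    inputs, _ = transitions[t]
    return all(m[p] >= 1 for p in inputs)

def fire(t, m):
    inputs, outputs = transitions[t]
    assert enabled(t, m), f"{t} not enabled"
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] += 1

fire("sync", marking)
print(marking)  # both components advanced together
```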

  19. Ground vibration tests of a high fidelity truss for verification of on orbit damage location techniques

    NASA Technical Reports Server (NTRS)

    Kashangaki, Thomas A. L.

    1992-01-01

    This paper describes a series of modal tests that were performed on a cantilevered truss structure. The goal of the tests was to assemble a large database of high quality modal test data for use in verification of proposed methods for on orbit model verification and damage detection in flexible truss structures. A description of the hardware is provided along with details of the experimental setup and procedures for 16 damage cases. Results from selected cases are presented and discussed. Differences between ground vibration testing and on orbit modal testing are also described.

  20. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    NASA Astrophysics Data System (ADS)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in underground distribution systems is presented. The Discrete Wavelet Transform (DWT), based on traveling waves, is employed to detect the high-frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is used to calculate the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations, and faulty phases. The results show that the proposed technique performs satisfactorily and will be very useful in the development of power system protection schemes.
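
    Under the usual single-ended traveling-wave assumptions, the distance computation reduces to d = v·Δt/2, where Δt separates the first arriving wavefront from its reflection off the fault point. A toy sketch using one-level Haar detail coefficients as a stand-in for the DWT high-frequency band (sampling rate, wave speed, and arrival times are all assumed values):

```python
def haar_detail(signal):
    """One-level Haar detail coefficients: a crude high-pass that
    spikes at sharp wavefront arrivals."""
    return [(signal[2 * i] - signal[2 * i + 1]) / 2
            for i in range(len(signal) // 2)]

fs = 1_000_000                 # 1 MHz sampling (assumed)
v = 1.5e8                      # wave speed in cable, m/s (assumed)
sig = [0.0] * 1000
sig[100] = 1.0                 # first wavefront arrives
sig[140] = 0.6                 # reflection from the fault arrives

d = haar_detail(sig)
peaks = [i for i, c in enumerate(d) if abs(c) > 0.1]
t1, t2 = peaks[0] * 2 / fs, peaks[1] * 2 / fs
print(v * (t2 - t1) / 2)       # ~3000 m to the fault
```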

  1. The Error Distribution of BATSE Gamma-Ray Burst Locations

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Pendleton, Geoffrey N.; Kippen, R. Marc; Brainerd, J. J.; Hurley, Kevin; Connaughton, Valerie; Meegan, Charles A.

    1999-01-01

    Empirical probability models for BATSE gamma-ray burst (GRB) location errors are developed via a Bayesian analysis of the separations between BATSE GRB locations and locations obtained with the Interplanetary Network (IPN). Models are compared and their parameters estimated using 392 GRBs with single IPN annuli and 19 GRBs with intersecting IPN annuli. Most of the analysis is for the 4Br BATSE catalog; earlier catalogs are also analyzed. The simplest model that provides a good representation of the error distribution has 78% of the probability in a "core" term with a systematic error of 1.85 deg and the remainder in an extended tail with a systematic error of 5.1 deg, which implies a 68% confidence radius for bursts with negligible statistical uncertainties of 2.2 deg. There is evidence for a more complicated model in which the error distribution depends on the BATSE data type that was used to obtain the location. Bright bursts are typically located using the CONT data type, and according to the more complicated model, the 68% confidence radius for CONT-located bursts with negligible statistical uncertainties is 2.0 deg.

  2. 37 CFR 380.7 - Verification of royalty distributions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distributions. 380.7 Section 380.7 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF... royalty distributions. (a) General. This section prescribes procedures by which any Copyright Owner or Performer may verify the royalty distributions made by the Collective; Provided, however, that...

  3. Logistics distribution centers location problem and algorithm under fuzzy environment

    NASA Astrophysics Data System (ADS)

    Yang, Lixing; Ji, Xiaoyu; Gao, Ziyou; Li, Keping

    2007-11-01

    The distribution centers location problem is concerned with how to select distribution centers from the potential set so that the total relevant cost is minimized. This paper mainly investigates this problem under a fuzzy environment. Consequently, a chance-constrained programming model for the problem is designed and some properties of the model are investigated. A tabu search algorithm, a genetic algorithm, and a fuzzy simulation algorithm are integrated to seek an approximate best solution of the model. A numerical example is also given to show the application of the algorithm.
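
    Stripped of the fuzzy chance constraints, the underlying selection problem can be sketched by brute force: choose k centers from the candidate set minimizing total assignment cost. (The paper replaces these crisp costs with fuzzy variables and credibility constraints, which is why it resorts to tabu search, a genetic algorithm, and fuzzy simulation rather than enumeration; coordinates below are made up.)

```python
from itertools import combinations

def best_centers(candidates, customers, k, dist):
    """Pick k distribution centers minimizing the sum, over all
    customers, of the distance to the nearest chosen center."""
    best, best_cost = None, float("inf")
    for subset in combinations(candidates, k):
        cost = sum(min(dist(c, s) for s in subset) for c in customers)
        if cost < best_cost:
            best, best_cost = subset, cost
    return best, best_cost

dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
candidates = [(0, 0), (5, 5), (9, 0)]
customers = [(1, 1), (4, 4), (8, 1)]
print(best_centers(candidates, customers, 2, dist))
# -> (((0, 0), (5, 5)), 11)
```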

  4. Abstractions for Fault-Tolerant Distributed System Verification

    NASA Technical Reports Server (NTRS)

    Pike, Lee S.; Maddalon, Jeffrey M.; Miner, Paul S.; Geser, Alfons

    2004-01-01

    Four kinds of abstraction for the design and analysis of fault tolerant distributed systems are discussed. These abstractions concern system messages, faults, fault masking voting, and communication. The abstractions are formalized in higher order logic, and are intended to facilitate specifying and verifying such systems in higher order theorem provers.

  5. Solute location in a nanoconfined liquid depends on charge distribution

    SciTech Connect

    Harvey, Jacob A.; Thompson, Ward H.

    2015-07-28

    Nanostructured materials that can confine liquids have attracted increasing attention for their diverse properties and potential applications. Yet, significant gaps remain in our fundamental understanding of such nanoconfined liquids. Using replica exchange molecular dynamics simulations of a nanoscale, hydroxyl-terminated silica pore system, we determine how the locations explored by a coumarin 153 (C153) solute in ethanol depend on its charge distribution, which can be changed through a charge transfer electronic excitation. The solute position change is driven by the internal energy, which favors C153 at the pore surface compared to the pore interior, but less so for the more polar, excited-state molecule. This is attributed to more favorable non-specific solvation of the large dipole moment excited-state C153 by ethanol at the expense of hydrogen-bonding with the pore. It is shown that a change in molecule location resulting from shifts in the charge distribution is a general result, though how the solute position changes will depend upon the specific system. This has important implications for interpreting measurements and designing applications of mesoporous materials.

  6. The verification of lightning location accuracy in Finland deduced from lightning strikes to trees

    NASA Astrophysics Data System (ADS)

    Mäkelä, Antti; Mäkelä, Jakke; Haapalainen, Jussi; Porjo, Niko

    2016-05-01

    We present a new method to determine the ground truth and accuracy of lightning location systems (LLS), using natural lightning strikes to trees. Observations of strikes to trees are being collected with a Web-based survey tool at the Finnish Meteorological Institute. Since Finnish thunderstorms tend to have a low average flash rate, it is often possible to identify from the LLS data unambiguously the stroke that caused damage to a given tree. The coordinates of the tree are then the ground truth for that stroke. The technique has clear advantages over other methods used to determine the ground truth. Instrumented towers and rocket launches measure upward-propagating lightning. Video and audio records, even with triangulation, are rarely capable of high accuracy. We present data for 36 quality-controlled tree strikes in the years 2007-2008. We show that the average inaccuracy of the lightning location network for that period was 600 m. In addition, we show that the 50% confidence ellipse calculated by the lightning location network and used operationally for describing the location accuracy is physically meaningful: half of all the strikes were located within the uncertainty ellipse of the nearest recorded stroke. Using tree strike data thus allows not only the accuracy of the LLS to be estimated but also the reliability of the uncertainty ellipse. To our knowledge, this method has not been attempted before for natural lightning.
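
    The test behind the "half of all strikes within the uncertainty ellipse" statistic is a short computation once tree and stroke are projected onto a local plane (toy numbers below; the operational ellipse parameters come from the LLS, and the bearing convention is an assumption):

```python
import math

def inside_confidence_ellipse(point, center, semi_major, semi_minor,
                              major_axis_bearing_deg):
    """True if `point` lies inside the ellipse centred on the
    located stroke. Coordinates are (east, north) offsets in
    metres on a local plane; the major-axis bearing is measured
    clockwise from north."""
    dx = point[0] - center[0]          # east offset
    dy = point[1] - center[1]          # north offset
    theta = math.radians(major_axis_bearing_deg)
    # Rotate the offsets into the ellipse's axis frame.
    along = dx * math.sin(theta) + dy * math.cos(theta)
    across = dx * math.cos(theta) - dy * math.sin(theta)
    return (along / semi_major) ** 2 + (across / semi_minor) ** 2 <= 1.0

# Tree 400 m east of the located stroke; ellipse 700 m x 300 m
# with its major axis pointing east (bearing 90 degrees).
print(inside_confidence_ellipse((400.0, 0.0), (0.0, 0.0),
                                700.0, 300.0, 90.0))  # True
```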

  7. Design and verification of distributed logic controllers with application of Petri nets

    SciTech Connect

    Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika

    2015-12-31

    The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. It is then decomposed into separate sequential automata that work concurrently. Each of them is re-verified, and if the validation is successful, the system can be finally implemented.

  8. Towards the Formal Verification of a Distributed Real-Time Automotive System

    NASA Technical Reports Server (NTRS)

    Endres, Erik; Mueller, Christian; Shadrin, Andrey; Tverdyshev, Sergey

    2010-01-01

    We present the status of a project which aims at building, and formally and pervasively verifying, a distributed automotive system. The target system is a gate-level model which consists of several interconnected electronic control units with independent clocks. This model is verified against the specification as seen by a system programmer. The automotive system is implemented on several FPGA boards. The pervasive verification is carried out using a combination of interactive theorem proving (Isabelle/HOL) and model checking (LTL).

  9. Verification of the use of completion-location analysis for initial assessment of reservoir heterogeneity

    SciTech Connect

    McDowell, R.R.; Avary, K.L.; Hohn, M.E.; Matchen, D.L. )

    1996-01-01

    In 1991, a technique (completion-location analysis) was developed for a U.S. DOE-funded study to give a preliminary assessment of field-scale reservoir heterogeneity in two West Virginia oil fields (Granny Creek and Rock Creek). The study's conclusions regarding heterogeneity agreed with initial predictions. However, as these fields were investigated specifically because they were thought to be heterogeneous, this test of the analysis was biased. In 1995, as part of a proposal to study siliciclastic strandplain reservoirs, the Jacksonburg-Stringtown field in West Virginia was selected because it met the depositional criterion and was still being actively produced. Completion-location analysis was undertaken on 214 producing oil wells from the field. Analysis indicated that drilling in the fields is clustered into eight time periods (1890-1903, 1904-1911, 1912-1916, 1917-1934, 1935-1953, 1954-1975, 1975-1985, and 1986-1995). Mapping of the locations of wells for each time period indicated that from 1890-1903 approximately 50% of the current geographic extent of the field was defined. Drilling in the periods 1935-1953, 1954-1975, 1975-1985, and 1986-1995 added significantly to the extent of the field; these episodes, especially 1986-1995, represent the discovery of new production. On this basis, a preliminary prediction was made that Jacksonburg-Stringtown field should exhibit a relatively high degree of reservoir heterogeneity. Subsequent discussions with the producer revealed that the reservoir varies considerably in pay thickness and quality across the field, has localized areas with high water injection rates and early water breakthrough, and has areas of anomalously high production. This suggests significant reservoir heterogeneity and appears to verify the utility of completion-location analysis.

  10. Verification of the use of completion-location analysis for initial assessment of reservoir heterogeneity

    SciTech Connect

    McDowell, R.R.; Avary, K.L.; Hohn, M.E.; Matchen, D.L.

    1996-12-31

    In 1991, a technique (completion-location analysis) was developed for a U.S. DOE-funded study to give a preliminary assessment of field-scale reservoir heterogeneity in two West Virginia oil fields (Granny Creek and Rock Creek). The study's conclusions regarding heterogeneity agreed with initial predictions. However, as these fields were investigated specifically because they were thought to be heterogeneous, this test of the analysis was biased. In 1995, as part of a proposal to study siliciclastic strandplain reservoirs, the Jacksonburg-Stringtown field in West Virginia was selected because it met the depositional criterion and was still being actively produced. Completion-location analysis was undertaken on 214 producing oil wells from the field. Analysis indicated that drilling in the fields is clustered into eight time periods (1890-1903, 1904-1911, 1912-1916, 1917-1934, 1935-1953, 1954-1975, 1975-1985, and 1986-1995). Mapping of the locations of wells for each time period indicated that from 1890-1903 approximately 50% of the current geographic extent of the field was defined. Drilling in the periods 1935-1953, 1954-1975, 1975-1985, and 1986-1995 added significantly to the extent of the field; these episodes, especially 1986-1995, represent the discovery of new production. On this basis, a preliminary prediction was made that Jacksonburg-Stringtown field should exhibit a relatively high degree of reservoir heterogeneity. Subsequent discussions with the producer revealed that the reservoir varies considerably in pay thickness and quality across the field, has localized areas with high water injection rates and early water breakthrough, and has areas of anomalously high production. This suggests significant reservoir heterogeneity and appears to verify the utility of completion-location analysis.

  11. Evaluation of gafchromic EBT film for intensity modulated radiation therapy dose distribution verification

    PubMed Central

    Sankar, A.; Kurup, P. G. Goplakrishna; Murali, V.; Ayyangar, Komanduri M.; Nehru, R. Mothilal; Velmurugan, J.

    2006-01-01

    This work was undertaken to investigate the possibility of clinical use of a commercially available self-developing radiochromic film, Gafchromic EBT, for IMRT dose verification. Dose response curves were generated for the films using a VXR-16 film scanner, and the results obtained with EBT films were compared with those of Kodak EDR2 films. The EBT film was found to have a linear response over the dose range of 0-600 cGy. The dose-related characteristics of the EBT film, such as post-irradiation color growth with time, film uniformity and the effect of scanning orientation, were studied. The color density increased by up to 8.6% between 2 and 40 h after irradiation, and there was a considerable variation, up to 8.5%, in film uniformity over the sensitive region. The quantitative difference between calculated and measured dose distributions was analyzed using the gamma index with a tolerance of 3% dose difference and 3 mm distance to agreement. EDR2 films showed good and consistent agreement with the calculated dose distribution, whereas the results obtained using EBT were inconsistent. The variation in film uniformity limits the use of EBT film for conventional large-field IMRT verification. For IMRT with a smaller field size (4.5 × 4.5 cm), the results obtained with EBT were comparable with those of EDR2 films. PMID:21206669
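
The film-calibration step described above, reduced to a sketch: a scanned net optical density is converted to dose through a fitted dose-response curve. The (dose, net optical density) pairs below are illustrative toy values, not measured EBT data; a first-order fit is used because the abstract reports a linear response up to 600 cGy.

```python
# Hedged sketch of a film dose-response calibration: fit dose as a
# function of net optical density, then invert measured signal to dose.
import numpy as np

# Toy calibration pairs: delivered dose (cGy) vs. hypothetical net OD.
doses = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0, 600.0])
net_od = doses / 1000.0  # hypothetical linear film response

# First-order fit, per the linearity reported in the abstract.
coeffs = np.polyfit(net_od, doses, 1)

def od_to_dose(od):
    """Convert a measured net optical density to dose in cGy."""
    return float(np.polyval(coeffs, od))
```

Real EBT calibration would also correct for post-irradiation color growth and scan orientation, as the abstract notes.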

  12. Three-dimensional gamma analysis of dose distributions in individual structures for IMRT dose verification.

    PubMed

    Tomiyama, Yuuki; Araki, Fujio; Oono, Takeshi; Hioki, Kazunari

    2014-07-01

    Our purpose in this study was to implement three-dimensional (3D) gamma analysis for structures of interest, such as the planning target volume (PTV) or clinical target volume (CTV) and organs at risk (OARs), for intensity-modulated radiation therapy (IMRT) dose verification. IMRT dose distributions for prostate and head and neck (HN) cancer patients were calculated with an analytical anisotropic algorithm in an Eclipse (Varian Medical Systems) treatment planning system (TPS) and by Monte Carlo (MC) simulation. The MC dose distributions were calculated with the EGSnrc/BEAMnrc and DOSXYZnrc user codes under conditions identical to those for the TPS. The prescribed doses were 76 Gy/38 fractions with five-field IMRT for the prostate and 33 Gy/17 fractions with seven-field IMRT for the HN. TPS dose distributions were verified by the gamma passing rates for the whole calculated volume, PTV or CTV, and OARs by use of 3D gamma analysis with reference to MC dose distributions. The acceptance criteria for the 3D gamma analysis were 3 %/3 mm and 2 %/2 mm for the dose difference and distance to agreement. The gamma passing rates in the PTV and OARs for the prostate IMRT plan were close to 100 %. For the HN IMRT plan, the passing rates at 2 %/2 mm in the CTV and OARs were substantially lower because inhomogeneous tissues such as bone and air in the HN are included in the calculation area. 3D gamma analysis for individual structures is useful for IMRT dose verification. PMID:24796955
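
A minimal one-dimensional sketch of the gamma index used in analyses like the one above (global dose-difference normalization, exhaustive distance search). Real 3D implementations restrict the search radius and interpolate between grid points, but the index itself is the same; the profiles below are toy data.

```python
# 1-D gamma index sketch: gamma <= 1 means the evaluated dose agrees with
# the reference within the dose-difference / distance-to-agreement criteria.
import numpy as np

def gamma_1d(ref, ev, spacing_mm, dd_percent=3.0, dta_mm=3.0):
    """Per-point gamma index of an evaluated profile against a reference."""
    dd = dd_percent / 100.0 * ref.max()  # global dose criterion
    x = np.arange(len(ref)) * spacing_mm
    gammas = np.empty(len(ref))
    for i in range(len(ref)):
        dist2 = ((x - x[i]) / dta_mm) ** 2       # squared normalized distance
        dose2 = ((ev - ref[i]) / dd) ** 2        # squared normalized dose diff
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

ref = np.array([0.0, 10.0, 50.0, 100.0, 50.0, 10.0, 0.0])
pass_rate = float((gamma_1d(ref, ref, spacing_mm=1.0) <= 1.0).mean())
```

A gamma passing rate, as reported per structure above, is the fraction of points with gamma at or below 1; for identical profiles it is 1.0.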

  13. Distribution and Location of Genetic Effects for Dairy Traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetic effects for many dairy traits and for total economic merit are fairly evenly distributed across all chromosomes. A high-density scan using 38,416 SNP markers for 5,285 bulls confirmed two previously-known major genes on Bos taurus autosomes (BTA) 6 and 14 but revealed few other large effects...

  14. Distribution and Location of Genetic Effects for Dairy Traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetic effects for many dairy traits and for total economic merit are fairly evenly distributed across all chromosomes. A high-density scan using 39,314 SNP markers for 5,285 bulls confirmed two previously-known major genes on BTA 6 and 14 but revealed few other large effects. Markers on BTA 18 had...

  15. Direct and full-scale experimental verifications towards ground-satellite quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Yu; Yang, Bin; Liao, Sheng-Kai; Zhang, Liang; Shen, Qi; Hu, Xiao-Fang; Wu, Jin-Cai; Yang, Shi-Ji; Jiang, Hao; Tang, Yan-Lin; Zhong, Bo; Liang, Hao; Liu, Wei-Yue; Hu, Yi-Hua; Huang, Yong-Mei; Qi, Bo; Ren, Ji-Gang; Pan, Ge-Sheng; Yin, Juan; Jia, Jian-Jun; Chen, Yu-Ao; Chen, Kai; Peng, Cheng-Zhi; Pan, Jian-Wei

    2013-05-01

    Quantum key distribution (QKD) provides the only intrinsically unconditionally secure method of communication, based on the principles of quantum mechanics. Compared with fibre-based demonstrations, free-space links could provide the most appealing solution for communication over much larger distances. Despite significant efforts, all realizations to date rely on stationary sites. Experimental verifications are therefore crucial for applications to a typical low Earth orbit satellite. To achieve direct and full-scale verifications of our set-up, we have carried out three independent experiments with a decoy-state QKD system, covering all relevant conditions. The system is operated on a moving platform (using a turntable), on a floating platform (using a hot-air balloon), and with a high-loss channel to demonstrate performance under conditions of rapid motion, attitude change, vibration, random movement of satellites, and a high-loss regime. The experiments address wide ranges of all leading parameters relevant to low Earth orbit satellites. Our results pave the way towards ground-satellite QKD and a global quantum communication network.

  16. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    SciTech Connect

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; Bernstein, G. M.; Bridle, S.; Clampitt, J.; Honscheid, K.; Jain, B.; Jouvel, S.; Krause, E.; Lin, H.; MacCrann, N.; Patton, K.; Plazas, A.; Rowe, B.; Vikram, V.; Wilcox, H.; Young, J.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S. S.; Banerji, M.; Bernstein, J. P.; Bernstein, R. A.; Bertin, E.; Buckley-Geer, E.; Burke, D. L.; Castander, F. J.; da Costa, L. N.; Cunha, C. E.; Depoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A. E.; Neto, A. F.; Fernandez, E.; Finley, D. A.; Flaugher, B.; Frieman, J. A.; Gaztanaga, E.; Gerdes, D.; Gruendl, R. A.; Gutierrez, G. R.; Jarvis, M.; Karliner, I.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Maia, M. A. G.; Makler, M.; Marriner, J.; Marshall, J. L.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Mohr, J.; Neilsen, E.; Nichol, R. C.; Nord, B. D.; Reil, K.; Roe, N. A.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B. X.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, C.; Soares-Santos, M.; Swanson, M. E. C.; Sypniewski, A. J.; Tarle, G.; Thaler, J.; Thomas, D.; Tucker, D. L.; Walker, A.; Wechsler, R.; Weller, J.; Wester, W.

    2015-03-31

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. Additionally, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.
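
The NFW profile fitted above constrains the cluster mass through its closed-form enclosed-mass expression; a sketch in arbitrary units follows, where rho_s and r_s are the profile's characteristic density and scale radius.

```python
# Sketch of the NFW enclosed-mass profile used when converting fitted
# profile parameters into a cluster mass (arbitrary units).
import math

def nfw_mass(r, rho_s, r_s):
    """Mass enclosed within radius r for rho(r) = rho_s / ((r/r_s)(1+r/r_s)^2)."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s ** 3 * (math.log(1.0 + x) - x / (1.0 + x))
```

A mass definition such as M200 then follows by solving nfw_mass(r, rho_s, r_s) equal to 200 times the critical density times the sphere volume at r.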

  17. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    SciTech Connect

    Melchior, P.; et al.

    2015-05-21

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modeling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modeling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  18. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    DOE PAGESBeta

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; et al

    2015-03-31

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. Additionally, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  19. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    NASA Astrophysics Data System (ADS)

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; Bernstein, G. M.; Bridle, S.; Clampitt, J.; Honscheid, K.; Jain, B.; Jouvel, S.; Krause, E.; Lin, H.; MacCrann, N.; Patton, K.; Plazas, A.; Rowe, B.; Vikram, V.; Wilcox, H.; Young, J.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S. S.; Banerji, M.; Bernstein, J. P.; Bernstein, R. A.; Bertin, E.; Buckley-Geer, E.; Burke, D. L.; Castander, F. J.; da Costa, L. N.; Cunha, C. E.; Depoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A. E.; Neto, A. Fausti; Fernandez, E.; Finley, D. A.; Flaugher, B.; Frieman, J. A.; Gaztanaga, E.; Gerdes, D.; Gruendl, R. A.; Gutierrez, G. R.; Jarvis, M.; Karliner, I.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Maia, M. A. G.; Makler, M.; Marriner, J.; Marshall, J. L.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Mohr, J.; Neilsen, E.; Nichol, R. C.; Nord, B. D.; Reil, K.; Roe, N. A.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B. X.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, C.; Soares-Santos, M.; Swanson, M. E. C.; Sypniewski, A. J.; Tarle, G.; Thaler, J.; Thomas, D.; Tucker, D. L.; Walker, A.; Wechsler, R.; Weller, J.; Wester, W.

    2015-05-01

    We measure the weak lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey (DES). This pathfinder study is meant to (1) validate the Dark Energy Camera (DECam) imager for the task of measuring weak lensing shapes, and (2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, point spread function (PSF) modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting Navarro-Frenk-White profiles to the clusters in this study, we determine weak lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1°(approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  20. Verification of spatial and temporal pressure distributions in segmented solid rocket motors

    NASA Technical Reports Server (NTRS)

    Salita, Mark

    1989-01-01

    A wide variety of analytical tools are in use today to predict the history and spatial distributions of pressure in the combustion chambers of solid rocket motors (SRMs). Experimental and analytical methods are presented here that allow the verification of many of these predictions. These methods are applied to the redesigned space shuttle booster (RSRM). Girth strain-gage data is compared to the predictions of various one-dimensional quasisteady analyses in order to verify the axial drop in motor static pressure during ignition transients as well as quasisteady motor operation. The results of previous modeling of radial flows in the bore, slots, and around grain overhangs are supported by approximate analytical and empirical techniques presented here. The predictions of circumferential flows induced by inhibitor asymmetries, nozzle vectoring, and propellant slump are compared to each other and to subscale cold air and water tunnel measurements to ascertain their validity.

  1. Dosimetric verification of stereotactic radiosurgery/stereotactic radiotherapy dose distributions using Gafchromic EBT3

    SciTech Connect

    Cusumano, Davide; Fumagalli, Maria L.; Marchetti, Marcello; Fariselli, Laura; De Martin, Elena

    2015-10-01

    The aim of this study is to examine the feasibility of using the new Gafchromic EBT3 film in a high-dose stereotactic radiosurgery and radiotherapy quality assurance procedure. Owing to the reduced dimensions of the involved lesions, the feasibility of scanning plan verification films on the scanner plate area with the best uniformity rather than using a correction mask was evaluated. For this purpose, signal value dispersion and reproducibility of film scans were investigated. Uniformity was then quantified in the selected area and was found to be within 1.5% for doses up to 8 Gy. A high-dose threshold level for analyses using this procedure was established by evaluating the sensitivity of the irradiated films. Sensitivity was found to be of the order of centigray for doses up to 6.2 Gy, decreasing for higher doses. The obtained results were used to implement a procedure comparing dose distributions delivered with a CyberKnife system to planned ones. The procedure was validated through single beam irradiation on a Gafchromic film. The agreement between dose distributions was then evaluated for 13 patients (brain lesions, 5 Gy/day, prescription isodose ~80%) using gamma analysis. Results obtained using gamma test criteria of 5%/1 mm show a pass rate of 94.3%. Gamma frequency parameter calculations for EBT3 films were shown to depend strongly on subtraction of unexposed film pixel values from irradiated ones. In the framework of the described dosimetric procedure, EBT3 films proved to be effective in the verification of high doses delivered to lesions with complex shapes and adjacent to organs at risk.

  2. SLR data screening; location of peak of data distribution

    NASA Technical Reports Server (NTRS)

    Sinclair, Andrew T.

    1993-01-01

    At the 5th Laser Ranging Instrumentation Workshop held at Herstmonceux in 1984, consideration was given to the formation of on-site normal points by laser stations, and an algorithm was formulated. The algorithm included a recommendation that an iterated 3.0 x rms rejection criterion should be used to screen the data, and that arithmetic means should be formed within the normal point bins of the retained data. From Sept. 1990 onwards, this algorithm and screening criterion have been brought into effect by various laser stations for forming on-site normal points, and small variants of the algorithm are used by most analysis centers for forming normal points from full-rate data, although the data screening criterion they use ranges from about 2.5 to 3.0 x rms. At the CSTG Satellite Laser Ranging (SLR) Subcommission, a working group was set up in Mar. 1991 to review the recommended screening procedure. This paper has been influenced by the discussions of this working group, although the views expressed are primarily those of this author. The main thrust of this paper is that, particularly for single photon systems, a more important issue than data screening is the determination of the peak of a data distribution and hence, the determination of the bias of the peak from the mean. Several methods of determining the peak are discussed.
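
The iterated 3.0 x rms rejection criterion referenced above can be sketched as follows; this is a minimal version, since the actual normal-point algorithm also bins the retained data into normal-point intervals.

```python
# Sketch of iterated k x rms data screening: residuals farther than
# k times the rms from the mean are rejected and the statistics
# recomputed, until no further points are rejected.
import statistics

def iterated_rms_screen(residuals, k=3.0):
    """Return (kept residuals, final mean, final rms) after iterated rejection."""
    data = list(residuals)
    while True:
        mean = statistics.fmean(data)
        rms = (sum((r - mean) ** 2 for r in data) / len(data)) ** 0.5
        kept = [r for r in data if abs(r - mean) <= k * rms]
        if len(kept) == len(data):
            return kept, mean, rms
        data = kept
```

The paper's point is that for skewed single-photon return distributions the retained-data mean can still sit away from the distribution's peak, which is why estimating the peak is treated as the more important problem.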

  3. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.
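
The two-parameter Weibull likelihood fit with type I censoring described above can be sketched as follows. This is a generic illustration, not the NASA software: the shape is found from the standard profile-likelihood equation by bisection, and the data are synthetic (true shape 2.0, scale 100, time-censored at t = 150).

```python
# Censored two-parameter Weibull maximum-likelihood fit (sketch).
import numpy as np

def weibull_mle(failures, censored, k_lo=0.2, k_hi=10.0, tol=1e-8):
    """Shape/scale MLE with type I right censoring.

    For a given shape k the scale MLE is lam = (sum(t_i^k over all) / r)^(1/k),
    r = number of failures; k solves the profile score equation g(k) = 0,
    with g increasing in k (root assumed bracketed by [k_lo, k_hi]).
    """
    t_all = np.concatenate([np.asarray(failures, float),
                            np.asarray(censored, float)])
    logs_fail = np.log(np.asarray(failures, float))

    def g(k):
        tk = t_all ** k
        return (tk * np.log(t_all)).sum() / tk.sum() - 1.0 / k - logs_fail.mean()

    while k_hi - k_lo > tol:  # bisection on the monotone score function
        mid = 0.5 * (k_lo + k_hi)
        if g(mid) > 0.0:
            k_hi = mid
        else:
            k_lo = mid
    k = 0.5 * (k_lo + k_hi)
    lam = ((t_all ** k).sum() / len(logs_fail)) ** (1.0 / k)
    return k, lam

rng = np.random.default_rng(0)
t = rng.weibull(2.0, 500) * 100.0   # synthetic lives: shape 2.0, scale 100
limit = 150.0                       # type I censoring time
k_hat, lam_hat = weibull_mle(t[t <= limit],
                             np.full(int((t > limit).sum()), limit))
```

With 500 synthetic lives the fit recovers the true parameters to within a few percent, which is the kind of check the verification section of such a manual performs against published results.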

  4. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.

  5. Study On Burst Location Technology under Steady-state in Water Distribution System

    NASA Astrophysics Data System (ADS)

    Liu, Xianpin; Li, Shuping; Wang, Shaowei; He, Fang; He, Zhixun; Cao, Guodong

    2010-11-01

    Based on the characteristics of the hydraulic information observed during a pipe burst in a water distribution system, the correlation between monitoring values and burst location is established by mathematical fitting, allowing the burst position to be located promptly. The method makes effective use of SCADA information in the water distribution system to actively locate burst positions. This new approach to burst location in water distribution systems shortens the burst duration and reduces the impact on urban water supply, economic losses, and the waste of water resources.

  6. Pretreatment verification of IMRT absolute dose distributions using a commercial a-Si EPID

    SciTech Connect

    Talamonti, C.; Casati, M.; Bucciolini, M.

    2006-11-15

    A commercial amorphous silicon electronic portal imaging device (EPID) has been studied to investigate its potential in the field of pretreatment verification of step-and-shoot intensity-modulated radiation therapy (IMRT) 6 MV photon beams. The EPID was calibrated to measure absolute exit dose in a water-equivalent phantom at patient level, following an experimental approach which does not require sophisticated calculation algorithms. The procedure presented was specifically intended to replace the time-consuming in-phantom film dosimetry. The dosimetric response was characterized on the central axis in terms of stability, linearity, and pulse repetition frequency dependence. The a-Si EPID demonstrated good linearity with dose (within 2% from 1 monitor unit), which represents a prerequisite for application in IMRT. A series of measurements, in which phantom thickness, air gap between the phantom and the EPID, field size, and position of measurement of dose in the phantom (entrance or exit) varied, was performed to find the optimal calibration conditions, for which the field size dependence is minimized. In these conditions (20 cm phantom thickness, 56 cm air gap, exit dose measured at the isocenter), the introduction of a filter for the low-energy scattered radiation allowed us to define a universal calibration factor, independent of field size. For the acquisition of IMRT fields, it was necessary to employ home-made software and a specific procedure. This method was applied for the measurement of the dose distributions for 15 clinical IMRT fields. The agreement between the dose distributions, quantified by the gamma index, was found, on average, in 97.6% and 98.3% of the analyzed points for EPID versus TPS and for EPID versus FILM, respectively, thus suggesting a great

  7. Redshift Distributions of Galaxies in the DES Science Verification Shear Catalogue and Implications for Weak Lensing

    SciTech Connect

    Bonnett, C.

    2015-07-21

    We present photometric redshift estimates for galaxies used in the weak lensing analysis of the Dark Energy Survey Science Verification (DES SV) data. Four model- or machine learning-based photometric redshift methods (annz2, bpz calibrated against BCC-UFig simulations, skynet, and tpz) are analysed. For training, calibration, and testing of these methods, we also construct a catalogue of spectroscopically confirmed galaxies matched against DES SV data. The performance of the methods is evaluated against the matched spectroscopic catalogue, focusing on metrics relevant for weak lensing analyses, with additional validation against COSMOS photo-zs. From the galaxies in the DES SV shear catalogue, which have mean redshift 0.72 ± 0.01 over the range 0.3 < z < 1.3, we construct three tomographic bins with mean redshifts z = {0.45, 0.67, 1.00}. These bins each have systematic uncertainties δz ≲ 0.05 in the mean of the fiducial skynet photo-z n(z). We propagate the errors in the redshift distributions through to their impact on cosmological parameters estimated with cosmic shear, and find that they cause shifts in the value of σ8 of approximately 3%. This shift is within the one-sigma statistical errors on σ8 for the DES SV shear catalogue. We also found that further study of the potential impact of systematic differences on the critical surface density, Σcrit, contained levels of bias safely less than the statistical power of the DES SV data. We recommend a final Gaussian prior for the photo-z bias in the mean of n(z) of width 0.05 for each of the three tomographic bins, and show that this is a sufficient bias model for the corresponding cosmology analysis.

  8. Distribution and location of Daxx in cervical epithelial cells with high risk human papillomavirus positive

    PubMed Central

    2014-01-01

    Aims To provide the basis for further exploring the effect, and its mechanism, of Death domain associated protein (Daxx) on the progress of cervical carcinoma induced by human papillomavirus (HPV), the distribution and location of Daxx in cervical carcinoma with high-risk HPV (HR-HPV) positivity were analyzed. Methods Samples of normal cervical epithelial cells, cervical intraepithelial neoplasia grade I (CINI), CINII, CINIII and cervical cancers were collected. An immunohistochemistry assay was used to analyze the distribution and location of Daxx in the cervical tissue, and an indirect immunofluorescence test was utilized to observe the location of Daxx in HPV16-positive Caski cells. Results Under light microscopy, the brown signals of Daxx were distributed in the nuclei of normal cervical epithelial cells. In CINI, Daxx was mainly distributed in the nuclear membrane, with a small amount in the nuclei. Daxx was intensively distributed in the cytoplasm and cell membrane in CINII, CINIII and cervical cancer. Under fluorescence microscopy, the distribution and location of Daxx in Caski cells were similar to those in cervical cells of CINII, CINIII and cervical cancer. Conclusion In the progress of cervical cancer, Daxx gradually translocates from the nucleus into the nuclear membrane, cytoplasm and cell membrane; in CINII, CINIII and cervical cancer, Daxx is located in the cytoplasm and cell membrane. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/4671548951113870. PMID:24398161

  9. Fault location of underground distribution network based on RBF network optimized by improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhao, Min

    2013-03-01

    To solve the difficult problem of locating a single-phase ground fault in a coal mine underground distribution network, a fault location method is presented that uses an RBF network optimized by an improved PSO algorithm, based on the mapping relationship between the wavelet packet transform modulus maxima of the transient zero-sequence current in specific frequency bands of the faulted line and the fault point position. Simulation results for different transition resistances and fault distances show that the RBF network optimized by the improved PSO algorithm obtains accurate and reliable fault location results, and its fault location performance is better than that of a traditional RBF network.
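
A sketch of the regression core of such a method: a Gaussian RBF network mapping a scalar feature (standing in for a wavelet packet modulus maximum) to fault distance. The training data, centre placement, and width are hypothetical, and the output weights are fit by ordinary least squares; in the paper the PSO step would tune the network parameters instead.

```python
# Gaussian RBF network sketch: fixed centres/width, output weights by
# least squares. Training data are synthetic, not measured signals.
import numpy as np

def rbf_design(x, centers, width):
    """Design matrix of Gaussian basis functions evaluated at x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Synthetic training data: normalized feature value vs. fault distance (km).
x_train = np.linspace(0.0, 1.0, 20)
d_train = 10.0 * x_train ** 2  # toy monotone feature-to-distance relationship

centers = np.linspace(0.0, 1.0, 8)
width = 0.15
w, *_ = np.linalg.lstsq(rbf_design(x_train, centers, width), d_train, rcond=None)

def predict(x):
    """Predicted fault distance(s) for feature value(s) x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return rbf_design(x, centers, width) @ w
```

In the paper's setup, PSO would search over the centres and widths to minimize the training error before the weights are fit.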

  10. A Distribution-class Locational Marginal Price (DLMP) Index for Enhanced Distribution Systems

    NASA Astrophysics Data System (ADS)

    Akinbode, Oluwaseyi Wemimo

    The smart grid initiative is the impetus behind changes that are expected to culminate into an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability and that can optimally couple the transmission and the distribution systems. The DLMP is calculated from a two-stage optimization problem; a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price sensitive resources for the transmission system problem and vice versa is developed and its convergence problem is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis explores, with theoretical proofs, the breakdown of the loss approximation technique when non-positive DLMPs/LMPs occur and discusses a mixed integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems and its superiority to contemporary pricing mechanisms is demonstrated using price responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate. 
Individual load
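The piecewise-linear loss approximation mentioned in the abstract can be illustrated with a minimal sketch. This is an illustrative stand-in, not the thesis's actual DCOPF formulation: it approximates a single line's quadratic loss r·p² with equal-width segments whose slopes are taken at segment midpoints, so the approximation is convex and tight for positive prices.

```python
def pw_linear_loss(p, r, p_max, segments):
    """Approximate quadratic line losses r*p^2 with `segments`
    piecewise-linear pieces over [0, p_max] (a common DCOPF device)."""
    width = p_max / segments
    loss = 0.0
    remaining = abs(p)
    for k in range(segments):
        # slope of segment k = derivative of r*p^2 at the segment midpoint
        slope = r * (2 * k + 1) * width
        step = min(remaining, width)
        loss += slope * step
        remaining -= step
        if remaining <= 0:
            break
    return loss

r = 0.01  # per-unit line resistance (illustrative value)
exact = r * 0.75 ** 2
approx = pw_linear_loss(0.75, r, p_max=1.0, segments=10)
print(exact, approx)  # the two values agree to within the segment width
```

With increasing slopes, a cost-minimizing dispatch fills the cheap (low-slope) segments first, which is exactly the assumption that breaks down when non-positive LMPs reward extra flow, the failure mode the thesis corrects with integer variables.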

  11. Geometric verification

    NASA Technical Reports Server (NTRS)

    Grebowsky, G. J.

    1982-01-01

    Present LANDSAT data formats are reviewed to clarify how the geodetic location and registration capabilities were defined for P-tape products and RBV data. Since only one geometric model is used in the master data processor, the geometric location accuracy of P-tape products depends on the absolute accuracy of the model, and registration accuracy is determined by the stability of the model. Due primarily to inaccuracies in data provided by the LANDSAT attitude management system, the desired accuracies are obtained only by using ground control points and a correlation process. Verifying system performance with regard to geodetic location requires the capability to determine pixel positions of map points in a P-tape array. Verifying registration performance requires the capability to determine pixel positions of common points (not necessarily map points) in two or more P-tape arrays for a given world reference system scene. Techniques for registration verification can be more varied and automated since map data are not required. The verification of LACIE extractions is used as an example.

  12. A location-routing-inventory model for designing multisource distribution networks

    NASA Astrophysics Data System (ADS)

    Ahmadi-Javid, Amir; Seddighi, Amir Hossein

    2012-06-01

    This article studies a ternary-integration problem that incorporates location, inventory and routing decisions in designing a multisource distribution network. The objective of the problem is to minimize the total cost of location, routing and inventory. A mixed-integer programming formulation is first presented, and then a three-phase heuristic is developed to solve large-sized instances of the problem. The numerical study indicates that the proposed heuristic is both effective and efficient.
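The location phase of such a design problem can be sketched with a toy exhaustive search; this is an illustrative stand-in for small instances, not the article's three-phase heuristic, and the cost data below are invented for the example.

```python
from itertools import combinations

def best_depots(open_cost, assign_cost):
    """Exhaustively pick the depot set minimising opening cost plus
    nearest-open-depot assignment cost. assign_cost[i][j] is the cost
    of serving customer i from depot j."""
    n = len(open_cost)
    best = (float("inf"), ())
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            cost = sum(open_cost[j] for j in subset)
            cost += sum(min(row[j] for j in subset) for row in assign_cost)
            best = min(best, (cost, subset))
    return best

open_cost = [10, 12, 8]          # cost of opening each candidate depot
assign_cost = [[4, 9, 7],        # customer-to-depot service costs
               [8, 3, 6],
               [6, 5, 2]]
print(best_depots(open_cost, assign_cost))  # (23, (2,)): open depot 2 only
```

The exhaustive loop is exponential in the number of candidate depots, which is precisely why the article resorts to a heuristic for large instances that also fold in routing and inventory costs.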

  13. A novel multi-human location method for distributed binary pyroelectric infrared sensor tracking system: Region partition using PNN and bearing-crossing location

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Li, Xiaoshan; Luo, Jing

    2015-01-01

    This paper proposes a novel multi-human location method for a distributed binary pyroelectric infrared sensor tracking system based on region partition using a probabilistic neural network and bearing-crossing location. The detection space of the system is divided into many sub-regions, which are encoded uniformly. The human region is located by an integrated neural network classifier built from probabilistic neural network ensembles and the Bagging algorithm. The location of a human target is obtained by first determining a coarse location with this classifier and then a fine location using our previous bearing-crossing location method. Simulation and experimental results show that the human region can be identified rapidly and that false detection points in multi-human location can be eliminated effectively. Compared with the bearing-crossing location method alone, the novel method significantly improves the locating and tracking accuracy of multiple human targets in an infrared sensor tracking system.

  14. Development of Micro Discharge Locator for Distribution Line using Analogue Signal Processing

    NASA Astrophysics Data System (ADS)

    Kumazawa, Takao; Oka, Fujio

    Micro discharges (MDs) such as spark or partial discharges on distribution lines, which occur through degradation of insulators, insulated wires, bushings, etc., may cause television interference or ground faults. A technique for locating MDs using differences in the arrival times of the electromagnetic pulses they radiate has been investigated recently. However, the technique requires large and expensive apparatus, such as a digital storage oscilloscope fast enough to record the received pulse signals. We investigated a new technique to estimate the direction of arrival (DOA) of the electromagnetic pulses using analogue signal processing, and produced a prototype MD locator. To evaluate the DOA estimation error, we performed several experiments locating spark discharges of about 50 nC/pulse on a test distribution line with the MD locator. The average estimation error was about 5 degrees, and the azimuth error was several times larger than the elevation error in most cases. This is likely because the azimuth resolving power was lower than the elevation resolving power owing to the configuration of the receiving antennas. We also located MDs on real distribution lines, and confirmed that reflected waves and carrier waves had no significant influence on DOA estimation.
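The geometry behind DOA estimation from an inter-antenna delay can be sketched as follows; the paper implements this with analogue circuitry, while the snippet below only shows the underlying relation theta = asin(c·Δt/d) for a two-antenna baseline (the numbers are illustrative).

```python
import math

def doa_from_delay(dt_ns, baseline_m):
    """Direction of arrival from the arrival-time difference between two
    antennas separated by `baseline_m`: theta = asin(c * dt / d).
    Valid while the implied path difference does not exceed the baseline."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return math.degrees(math.asin(c * dt_ns * 1e-9 / baseline_m))

# a source 30 degrees off broadside over a 1 m baseline:
dt = 0.5 / 299_792_458.0 * 1e9   # 0.5 m path difference, expressed in ns
print(round(doa_from_delay(dt, 1.0), 3))  # 30.0
```

The sub-nanosecond delays involved explain why a direct digital implementation needs very fast sampling, and why an analogue delay-comparison front end is attractive.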

  15. Using fuzzy sets to model the uncertainty in the fault location process of distribution networks

    SciTech Connect

    Jaerventausta, P.; Verho, P.; Partanen, J. )

    1994-04-01

    In the computerized fault diagnosis of distribution networks the heuristic knowledge of the control center operators can be combined with the information obtained from the network data base and SCADA system. However, the nature of the heuristic knowledge is inexact and uncertain. Also the information obtained from the remote control system contains uncertainty and may be incorrect, conflicting or inadequate. This paper proposes a method based on fuzzy set theory to deal with the uncertainty involved in the process of locating faults in distribution networks. The method is implemented in a prototype version of the distribution network operation support system.

  16. A method to optimize sampling locations for measuring indoor air distributions

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan

    2015-02-01

    Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected for interpolation to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, labor costs, and accuracy of the field interpolation. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method has a 32.6% smaller interpolation error than the grid sampling method. We derived the relationship between interpolation error and sampling size (the number of sampling points); according to this relationship, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.

  17. Optimal Capacity and Location Assessment of Natural Gas Fired Distributed Generation in Residential Areas

    NASA Astrophysics Data System (ADS)

    Khalil, Sarah My

    With the ever increasing use of natural gas to generate electricity, natural gas fired microturbines are installed in residential areas to generate electricity locally. This research work discusses a generalized methodology for assessing the optimal capacity and locations for installing natural gas fired microturbines in a residential distribution network. The overall objective is to place microturbines so as to minimize the power loss occurring in the electrical distribution network, in such a way that the electric feeder does not need any upgrade. The IEEE 123 Node Test Feeder is selected as the test bed for validating the developed methodology. Three-phase unbalanced electric power flow is run in OpenDSS through a COM server, and the gas distribution network is analyzed using GASWorkS. A continual sensitivity analysis methodology is developed to select multiple DG locations, and an annual simulation is run to minimize annual average losses. The proposed placement of microturbines must be feasible in the gas distribution network and should not require gas pipeline reinforcement. The corresponding gas distribution network is developed in the GASWorkS software, and nodal pressures of the gas system are checked for various cases to investigate whether the existing gas distribution network can accommodate the penetration of the selected microturbines. The results indicate the optimal locations for placing microturbines and the capacity that the system can accommodate, based on overall minimum annual average losses as well as the nodal pressures that the gas distribution network can guarantee. The proposed method is generalized and can be used for any IEEE test feeder or an actual residential distribution network.

  18. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    PubMed Central

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient of a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to handle two different types of AE source for location. PMID:24141266

  19. A simple method to determine leakage location in water distribution based on pressure profiles

    NASA Astrophysics Data System (ADS)

    Prihtiadi, Hafizh; Azwar, Azrul; Djamal, Mitra

    2016-03-01

    Pipeline leaks are a serious problem for water distribution in big cities, one that demands action and effective solutions from governments. Several techniques have been developed to improve location accuracy, limit losses, and decrease environmental damage; however, these methods require costly and complex equipment. This paper presents a simple method to determine the leak location using gradient intersection calculations. A simple water distribution system was built on a PVC pipeline 4 m long and 15 mm in diameter, with 12 pressure sensors placed along the pipeline. Each sensor measured the pressure at its point and sent the data to a microcontroller. An artificial hole was made between the sixth and seventh sensors. With three holes, the system calculated and analyzed the leak location with an error of 3.67%.
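The gradient intersection idea can be sketched directly: fit a straight line to the pressure profile upstream and downstream of the suspected leak and intersect the two fits; the leak sits where the hydraulic gradient changes. This is an illustrative sketch with synthetic data, not the paper's firmware; the split index is assumed known here.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def leak_location(x, p, split):
    """Intersect the upstream and downstream pressure gradients:
    sensors [0:split] are upstream, [split:] downstream."""
    b1, a1 = fit_line(x[:split], p[:split])
    b2, a2 = fit_line(x[split:], p[split:])
    return (a2 - a1) / (b1 - b2)

# synthetic profile: steeper gradient before a leak at x = 2.0 m
x = [0.0, 0.5, 1.0, 1.5, 2.5, 3.0, 3.5, 4.0]
p = [100 - 6 * xi if xi < 2.0 else 94 - 3 * xi for xi in x]
print(leak_location(x, p, 4))  # 2.0
```

In practice the split would be chosen where the fitted residuals are smallest, and sensor noise is what produces the few-percent error the paper reports.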

  20. Location of lightning stroke on OPGW by use of distributed optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Lu, Lidong; Liang, Yun; Li, Binglin; Guo, Jinghong; Zhang, Hao; Zhang, Xuping

    2014-12-01

    A new method based on a distributed optical fiber sensor (DOFS) to locate the position of a lightning stroke on the optical fiber ground wire (OPGW) is proposed and experimentally demonstrated. In the method, the lightning stroke is treated as a heat release process at the stroke position, so Brillouin optical time domain reflectometry (BOTDR) with a spatial resolution of 2 m is used as the distributed temperature sensor. To simulate the lightning stroke, an electric anode with a high pulsed current and a negative electrode (the OPGW) form a lightning impulse system with a duration of 200 ms. In the experiment, lightning strokes with discharge quantities of 100 C and 200 C are generated, and the DOFS sensitively captures the temperature change at the lightning stroke position during the transient discharge. Experimental results show that the DOFS is a feasible instrument for locating lightning strokes on the OPGW and has excellent potential for the maintenance of electric power transmission lines. Additionally, as the extent of a lightning stroke is usually within 10 cm while the spatial resolution of a typical DOFS is beyond 1 m, the temperature characteristics of such a small area cannot be accurately represented by a DOFS with a large spatial resolution. Therefore, for further application of distributed optical fiber temperature sensors such as BOTDR and ROTDR to lightning stroke location on OPGW, it is important to enhance the spatial resolution.

  1. Estimation of Distributed Fermat-Point Location for Wireless Sensor Networking

    PubMed Central

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location from the triangle formed by the intersections of three neighboring beacon nodes. The Fermat point, the point with the shortest total distance to the three vertices of the triangle, is then used to refine the estimated location area and minimize the error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms and improves upon conventional bounding box strategies. PMID:22163851
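The Fermat point itself, the point minimizing the total distance to the three triangle vertices, can be computed numerically with the Weiszfeld iteration. This sketch illustrates the definition only; DFPLE's own geometric construction inside the WSN is not reproduced here.

```python
def fermat_point(pts, iters=200):
    """Weiszfeld iteration for the geometric median of three points;
    for a triangle with no angle >= 120 degrees this is the Fermat
    point. Starts from the centroid."""
    x = sum(p[0] for p in pts) / 3.0
    y = sum(p[1] for p in pts) / 3.0
    for _ in range(iters):
        wsum = xs = ys = 0.0
        for px, py in pts:
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5 or 1e-12
            w = 1.0 / d  # inverse-distance weight
            wsum += w
            xs += w * px
            ys += w * py
        x, y = xs / wsum, ys / wsum
    return x, y

# for an equilateral triangle the Fermat point is the centroid
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
print(fermat_point(pts))
```

Each sensor node only needs the three beacon intersection points to run this locally, which is consistent with the scheme's lightweight, distributed design goal.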

  2. Comparison of Kodak EDR2 and Gafchromic EBT film for intensity-modulated radiation therapy dose distribution verification

    SciTech Connect

    Sankar, A. . E-mail: asankar_phy@yahoo.co.in; Ayyangar, Komanduri M.; Nehru, R. Mothilal; Gopalakrishna Kurup, P.G.; Murali, V.; Enke, Charles A.; Velmurugan, J.

    2006-01-01

    Quantitative dose validation of intensity-modulated radiation therapy (IMRT) plans requires 2-dimensional (2D) high-resolution dosimetry systems with a uniform response over the sensitive region. The present work deals with the clinical use of a commercially available self-developing radiochromic film, Gafchromic EBT film, for IMRT dose verification. Dose response curves were generated for the films using a VXR-16 film scanner, and the results obtained with EBT films were compared with those of Kodak extended dose range 2 (EDR2) films. The EBT film had a linear response over the dose range of 0 to 600 cGy. Dose-related characteristics of the EBT film, such as post-irradiation color growth with time, film uniformity, and the effect of scanning orientation, were studied. There was up to an 8.6% increase in color density between 2 and 40 hours after irradiation, and a considerable variation, up to 8.5%, in film uniformity over the sensitive region. The quantitative differences between calculated and measured dose distributions were analyzed using the DTA and gamma index with a tolerance of 3% dose difference and 3-mm distance to agreement. The EDR2 films showed consistent results with the calculated dose distributions, whereas the results obtained using EBT were inconsistent. The variation in film uniformity limits the use of EBT film for conventional large-field IMRT verification. For IMRT with smaller field sizes (4.5 x 4.5 cm), the results obtained with EBT were comparable to those of EDR2 films.
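The 3%/3-mm gamma criterion used above combines a dose-difference tolerance and a distance-to-agreement tolerance into one pass/fail number per point. The 1-D sketch below illustrates the idea (dose difference normalized to the reference maximum); clinical tools evaluate it in 2-D or 3-D with interpolation, which is not reproduced here.

```python
def gamma_index(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """1-D gamma: for each reference point, the minimum over evaluated
    points of the combined dose-difference / distance-to-agreement
    metric. gamma <= 1 means the point passes."""
    d_max = max(d_ref)
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        g = min((((xe - xr) / dta) ** 2 +
                 ((de - dr) / (dd * d_max)) ** 2) ** 0.5
                for xe, de in zip(x_eval, d_eval))
        gammas.append(g)
    return gammas

x = [0.0, 1.0, 2.0]            # positions in mm
d = [100.0, 90.0, 80.0]        # reference doses in cGy
print(gamma_index(x, d, x, d))  # identical distributions: all zeros
```

A uniform 2% dose overestimate, for example, yields gamma = 2/3 everywhere under 3%/3 mm, i.e. a pass; the coarser 4%/4-mm criterion in other records simply loosens both denominators.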

  3. Distributed fiber sensing system with wide frequency response and accurate location

    NASA Astrophysics Data System (ADS)

    Shi, Yi; Feng, Hao; Zeng, Zhoumo

    2016-02-01

    A distributed fiber sensing system merging a Mach-Zehnder interferometer and a phase-sensitive optical time domain reflectometer (Φ-OTDR) is demonstrated for vibration measurement, which requires a wide frequency response and accurate location. Two narrow-linewidth lasers with slightly different wavelengths constitute the interferometer and the reflectometer, respectively, and a narrow-band fiber Bragg grating separates the two wavelengths. In addition, heterodyne detection is applied to maintain the signal-to-noise ratio of the locating signal. Experimental results show that the system has a wide frequency response from 1 Hz to 50 MHz, limited by the sample rate of the data acquisition card, and a spatial resolution of 20 m, corresponding to the 200 ns pulse width, along a 2.5 km fiber link.
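The quoted 20 m resolution follows from the standard OTDR relation: the spatial resolution is half the pulse length in the fiber, z = c·τ/(2n). A quick check, assuming a typical fiber group index of about 1.468 (the index is an assumption, not stated in the abstract):

```python
def otdr_spatial_resolution(pulse_ns, n_fiber=1.468):
    """OTDR spatial resolution: half the optical pulse length in the
    fiber, z = c * tau / (2 * n)."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return c * pulse_ns * 1e-9 / (2 * n_fiber)

print(round(otdr_spatial_resolution(200), 1))  # about 20 m for 200 ns
```

Shortening the pulse sharpens the location but spreads the backscatter power over less fiber, which is part of why heterodyne detection is used to preserve the signal-to-noise ratio.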

  4. Tomotherapy dose distribution verification using MAGIC-f polymer gel dosimetry

    SciTech Connect

    Pavoni, J. F.; Pike, T. L.; Snow, J.; DeWerd, L.; Baffa, O.

    2012-05-15

    Purpose: This paper presents the application of MAGIC-f gel to a three-dimensional dose distribution measurement and its ability to accurately measure the dose distribution from a tomotherapy unit. Methods: A prostate intensity-modulated radiation therapy (IMRT) irradiation was simulated in the gel phantom, and the treatment was delivered by a TomoTherapy unit. The dose distribution was evaluated from the R2 distribution measured by magnetic resonance imaging. Results: Overlaying isodoses showed high similarity between the dose distribution measured with the gel and that expected by the treatment planning system (TPS). Relative absorbed dose profiles extracted along indicated lines of the measured and expected dose distributions were also in agreement. The gamma index analysis applied to the data achieved a high pass rate (88.4% using 3%/3 mm criteria and 96.5% using 4%/4 mm). A fully three-dimensional analysis compared the dose-volume histograms measured for the planning volumes with those expected by the treatment planning system; the overlapping curves were also in good agreement. Conclusions: These results show that MAGIC-f gel is promising for three-dimensional dose distribution measurements.

  5. Experimental verification of reconstructed absorbers embedded in scattering media by optical power ratio distribution.

    PubMed

    Yamaoki, Toshihiko; Hamada, Hiroaki; Matoba, Osamu

    2016-09-01

    An experimental investigation showing the effectiveness of extracting absorber information in a scattering medium from the output power ratio distribution is presented. In the experiment, two metallic wires sandwiched between three homogeneous scattering media are used as absorbers in transmission geometry. The output power ratio distributions extract the influence of the absorbers and enhance the optical signal. The peak positions of the output power ratio distributions agree with the results of numerical simulation. From the reconstructed tomography of the scattering media, we confirmed that the tomographic image, reconstructed from 41×21 output power ratio distributions using continuous-wave light, successfully distinguishes the two wires. PMID:27607261

  6. Location, Location, Location!

    ERIC Educational Resources Information Center

    Ramsdell, Kristin

    2004-01-01

    Of prime importance in real estate, location is also a key element in the appeal of romances. Popular geographic settings and historical periods sell, unpopular ones do not--not always with a logical explanation, as the author discovered when she conducted a survey on this topic last year. (Why, for example, are the French Revolution and the…

  7. Data-base location problems in distributed data-base management systems

    SciTech Connect

    Chahande, A.I.

    1989-01-01

    Recent years have witnessed an increasing number of systems, usually heterogeneous, that are geographically distributed and connected by high-capacity communication channels, e.g., the ARPANET system, the CYCLADES network, TymNET, etc. In the design and management of such systems, a major portion of the planning is concerned with storing large quantities of information (data) at judiciously selected nodes in the network, in adherence to some optimality criterion. This necessitates analysis of information storage costs, transaction (update and query) costs, response times, processing locality, etc. There are essentially two definitions of optimality, cost measures and performance measures, and the two parallel each other. This research essentially considers the minimal-cost objective but incorporates the performance objectives as well, by considering cost penalties for sub-optimal performance. The distributed database design problem is fully characterized by two sub-problems: (a) design of the fragmentation schema, and (b) design of the allocation schema for these fragments. These problems have been addressed independently in the literature. This research, appreciating the mutual interdependence of the issues, attempts the distributed database location problem considering both aspects in unison (logical as well as physical criteria). The problem can be succinctly stated as follows: given the set of user nodes with their respective transaction (update and query) frequencies, and a set of application programs, the database location problem assigns copies of various database files (or fragments thereof) to candidate nodes such that the total cost is minimized. The decision must trade off the cost of accessing, which is reduced by additional copies, against the cost of updating and storing these additional copies.

  8. Generation and use of measurement-based 3-D dose distributions for 3-D dose calculation verification.

    PubMed

    Stern, R L; Fraass, B A; Gerhardsson, A; McShan, D L; Lam, K L

    1992-01-01

    A 3-D radiation therapy treatment planning system calculates dose to an entire volume of points and therefore requires a 3-D distribution of measured dose values for quality assurance and dose calculation verification. To measure such a volumetric distribution with a scanning ion chamber is prohibitively time consuming. A method is presented for the generation of a 3-D grid of dose values based on beam's-eye-view (BEV) film dosimetry. For each field configuration of interest, a set of BEV films at different depths is obtained and digitized, and the optical densities are converted to dose. To reduce inaccuracies associated with film measurement of megavoltage photon depth doses, doses on the different planes are normalized using an ion-chamber measurement of the depth dose. A 3-D grid of dose values is created by interpolation between BEV planes along divergent beam rays. This matrix of measurement-based dose values can then be compared to calculations over the entire volume of interest. This method is demonstrated for three different field configurations. Accuracy of the film-measured dose values is determined by 1-D and 2-D comparisons with ion chamber measurements. Film and ion chamber measurements agree within 2% in the central field regions and within 2.0 mm in the penumbral regions. PMID:1620042
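The interpolation step between measured film planes can be sketched in its simplest 1-D form: given (depth, dose) pairs from the BEV films, a dose at an intermediate depth is read off by linear interpolation. The actual method interpolates along divergent beam rays in 3-D; this sketch, with invented dose values, only shows the per-ray idea.

```python
def interp_between_planes(z, planes):
    """Linearly interpolate a dose at depth z from film planes given
    as (depth, dose) pairs sorted by increasing depth."""
    for (z0, d0), (z1, d1) in zip(planes, planes[1:]):
        if z0 <= z <= z1:
            t = (z - z0) / (z1 - z0)
            return d0 + t * (d1 - d0)
    raise ValueError("depth outside the measured planes")

# illustrative film-plane doses (depth in cm, dose in percent)
planes = [(5.0, 95.0), (10.0, 82.0), (15.0, 70.0)]
print(interp_between_planes(12.0, planes))  # 77.2
```

In the full method each ray's set of film values is first renormalized to the ion-chamber depth-dose curve, which removes the film's energy-response error before this interpolation is applied.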

  9. Approaches to verification of two-dimensional water quality models

    SciTech Connect

    Butkus, S.R. . Water Quality Dept.)

    1990-11-01

    The verification of a water quality model is the procedure most needed by decision makers evaluating model predictions, but it is often inadequate or not done at all. The results of a properly conducted verification provide decision makers with an estimate of the uncertainty associated with model predictions. Several statistical tests are available for quantifying the performance of a model. Six methods of verification were evaluated using an application of the BETTER two-dimensional water quality model for Chickamauga reservoir. Model predictions for ten state variables were compared to observed conditions from 1989. Spatial distributions of the verification measures showed the model predictions were generally adequate, except at a few specific locations in the reservoir. The most useful statistic was the mean standard error of the residuals. Quantifiable measures of model performance should be calculated during calibration and verification of future applications of the BETTER model. 25 refs., 5 figs., 7 tabs.
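Residual-based verification measures of the kind compared above are straightforward to compute; a minimal sketch of two of the standard ones (mean error, i.e. bias, and root-mean-square error) follows, with invented observed/predicted values. The report's specific six measures are not reproduced here.

```python
def verification_stats(observed, predicted):
    """Mean error (bias) and root-mean-square error of model residuals,
    with residual = predicted - observed."""
    resid = [p - o for o, p in zip(observed, predicted)]
    n = len(resid)
    me = sum(resid) / n
    rmse = (sum(r * r for r in resid) / n) ** 0.5
    return me, rmse

obs = [1.0, 2.0, 3.0, 4.0]    # observed state-variable values
pred = [1.1, 1.9, 3.2, 4.0]   # model predictions
me, rmse = verification_stats(obs, pred)
print(me, rmse)
```

Reporting both matters: a near-zero mean error can hide large compensating errors that the RMSE exposes, which is why a spatial map of such statistics locates the few problem spots in the reservoir.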

  10. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    SciTech Connect

    Dunn, D.; Rappaport, C.M.; Terzuoli, A.J. Jr.

    1996-10-01

    Although the use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model, based on summing spherical harmonic modes, is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a dissipated power pattern very similar to the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot spot at depth without overheating any other healthy tissue.

  11. Verification, Validation, and Accreditation Challenges of Distributed Simulation for Space Exploration Technology

    NASA Technical Reports Server (NTRS)

    Thomas, Danny; Hartway, Bobby; Hale, Joe

    2006-01-01

    Throughout its rich history, NASA has invested heavily in sophisticated simulation capabilities. These capabilities reside in NASA facilities across the country and with partners around the world. NASA's Exploration Systems Mission Directorate (ESMD) has the opportunity to leverage these considerable investments to resolve technical questions relating to its missions. The distributed nature of the assets, both geographic and organizational, presents challenges to their combined and coordinated use, but precedents of geographically distributed real-time simulations exist. This paper will show how technological advances in simulation can be employed to address the issues associated with netting NASA simulation assets.

  12. Gas Chromatographic Verification of a Mathematical Model: Product Distribution Following Methanolysis Reactions.

    ERIC Educational Resources Information Center

    Lam, R. B.; And Others

    1983-01-01

    Investigated application of binomial statistics to equilibrium distribution of ester systems by employing gas chromatography to verify the mathematical model used. Discusses model development and experimental techniques, indicating the model enables a straightforward extension to symmetrical polyfunctional esters and presents a mathematical basis…

  13. Monte Carlo verification of IMRT dose distributions from a commercial treatment planning optimization system

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Pawlicki, T.; Jiang, S. B.; Li, J. S.; Deng, J.; Mok, E.; Kapur, A.; Xing, L.; Ma, L.; Boyer, A. L.

    2000-09-01

    The purpose of this work was to use Monte Carlo simulations to verify the accuracy of the dose distributions from a commercial treatment planning optimization system (Corvus, Nomos Corp., Sewickley, PA) for intensity-modulated radiotherapy (IMRT). A Monte Carlo treatment planning system has been implemented clinically to improve and verify the accuracy of radiotherapy dose calculations. Further modifications to the system were made to compute the dose in a patient for multiple fixed-gantry IMRT fields. The dose distributions in the experimental phantoms and in the patients were calculated and used to verify the optimized treatment plans generated by the Corvus system. The Monte Carlo calculated IMRT dose distributions agreed with the measurements to within 2% of the maximum dose for all the beam energies and field sizes for both the homogeneous and heterogeneous phantoms. The dose distributions predicted by the Corvus system, which employs a finite-size pencil beam (FSPB) algorithm, agreed with the Monte Carlo simulations and measurements to within 4% in a cylindrical water phantom with various hypothetical target shapes. Discrepancies of more than 5% (relative to the prescribed target dose) in the target region and over 20% in the critical structures were found in some IMRT patient calculations. The FSPB algorithm as implemented in the Corvus system is adequate for homogeneous phantoms (such as prostate) but may result in significant under- or over-estimation of the dose in some cases involving heterogeneities such as the air-tissue, lung-tissue and tissue-bone interfaces.

  14. Commercial milk distribution profiles and production locations. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Deonigi, D.E.; Anderson, D.M.; Wilfert, G.L.

    1993-12-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project was established to estimate radiation doses that people could have received from nuclear operations at the Hanford Site since 1944. For this period iodine-131 is the most important offsite contributor to radiation doses from Hanford operations. Consumption of milk from cows that ate vegetation contaminated by iodine-131 is the dominant radiation pathway for individuals who drank milk. Information has been developed on commercial milk cow locations and commercial milk distribution during 1945 and 1951. The year 1945 was selected because during 1945 the largest amount of iodine-131 was released from Hanford facilities in a calendar year; therefore, 1945 was the year in which an individual was likely to have received the highest dose. The year 1951 was selected to provide data for comparing the changes that occurred in commercial milk flows (i.e., sources, processing locations, and market areas) between World War II and the post-war period. To estimate the doses people could have received from this milk flow, it is necessary to estimate the amount of milk people consumed, the source of the milk, the specific feeding regime used for milk cows, and the amount of iodine-131 contamination deposited on feed.

  15. Commercial milk distribution profiles and production locations. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Deonigi, D.E.; Anderson, D.M.; Wilfert, G.L.

    1994-04-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project was established to estimate radiation doses that people could have received from nuclear operations at the Hanford Site since 1944. For this period iodine-131 is the most important offsite contributor to radiation doses from Hanford operations. Consumption of milk from cows that ate vegetation contaminated by iodine-131 is the dominant radiation pathway for individuals who drank milk (Napier 1992). Information has been developed on commercial milk cow locations and commercial milk distribution during 1945 and 1951. The year 1945 was selected because during 1945 the largest amount of iodine-131 was released from Hanford facilities in a calendar year (Heeb 1993); therefore, 1945 was the year in which an individual was likely to have received the highest dose. The year 1951 was selected to provide data for comparing the changes that occurred in commercial milk flows (i.e., sources, processing locations, and market areas) between World War II and the post-war period. To estimate the doses people could have received from this milk flow, it is necessary to estimate the amount of milk people consumed, the source of the milk, the specific feeding regime used for milk cows, and the amount of iodine-131 contamination deposited on feed.

  16. Locating illicit connections in storm water sewers using fiber-optic distributed temperature sensing.

    PubMed

    Hoes, O A C; Schilperoort, R P S; Luxemburg, W M J; Clemens, F H L R; van de Giesen, N C

    2009-12-01

A technique using distributed temperature sensing (DTS) has been developed to find illicit household sewage connections to storm water systems in the Netherlands. DTS allows for the accurate measurement of temperature along a fiber-optic cable, with high spatial (2 m) and temporal (30 s) resolution. We inserted a fiber-optic cable of 1300 m in two storm water drains. At certain locations, significant temperature differences with an intermittent character were measured, indicating inflow of water that was not storm water. In all cases, we found that foul water from households or companies entered the storm water system through an illicit sewage connection. The method of using temperature differences to detect illicit connections in storm water networks is discussed. The technique of using fiber-optic cables for distributed temperature sensing is explained in detail. The DTS method is a reliable, inexpensive and practically feasible way to detect illicit connections to storm water systems, and it does not require access to private property. PMID:19735929
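The detection logic described in this abstract, flagging cable positions whose temperature deviates intermittently from the surrounding storm-drain baseline, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the 1 degree threshold and the 5-95% intermittency window are assumed values.

```python
import numpy as np

def flag_illicit_inflow(temps, threshold=1.0):
    """Flag cable positions showing an intermittent warm anomaly.

    temps: 2-D array (time steps x cable positions) of DTS readings (deg C).
    A position is flagged when its temperature exceeds the typical
    storm-drain temperature by `threshold` in some, but not all, time
    steps -- the intermittent signature of foul-water inflow.
    """
    spatial_ref = np.median(temps)            # typical drain temperature
    exceed = temps - spatial_ref > threshold  # per time step and position
    frac = exceed.mean(axis=0)                # fraction of time steps exceeding
    return (frac > 0.05) & (frac < 0.95)      # intermittent, not permanent

# synthetic trace: 100 time steps x 650 positions (2 m bins on a 1300 m cable)
rng = np.random.default_rng(0)
temps = 12.0 + 0.1 * rng.standard_normal((100, 650))
temps[40:60, 300] += 4.0                      # intermittent warm inflow at ~600 m
suspect = flag_illicit_inflow(temps)
print(np.flatnonzero(suspect))                # → [300]
```

A real deployment would additionally have to discount slow diurnal temperature drift along the cable before applying such a threshold.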

  17. Aerosol number size distributions over a coastal semi urban location: Seasonal changes and ultrafine particle bursts.

    PubMed

    Babu, S Suresh; Kompalli, Sobhan Kumar; Moorthy, K Krishna

    2016-09-01

Number-size distribution is one of the important microphysical properties of atmospheric aerosols that influence the aerosol life cycle, aerosol-radiation interactions and aerosol-cloud interactions. Making use of year-long measurements of aerosol particle number-size distributions (PNSD) over a broad size spectrum (~15-15,000 nm) from a tropical coastal semi-urban location, Trivandrum (Thiruvananthapuram), the size characteristics, their seasonality and their response to mesoscale and synoptic-scale meteorology are examined. While the accumulation mode contributed most of the annual mean concentration, ultrafine particles (diameter <100 nm) contributed as much as 45% of the total concentration, and thus constitute a strong reservoir that would add to the larger particles through size transformation. The size distributions were, in general, bimodal, with well-defined modes in the accumulation and coarse regimes, with mode diameters in the ranges 141 to 167 nm and 1150 to 1760 nm, respectively, in different seasons. Although the contribution of coarse particles to the total number concentration was meager, they contributed significantly to the surface area and volume, especially during transport of marine air mass, highlighting the role of synoptic air mass changes. Significant diurnal variation occurred in the number concentrations and geometric mean diameters, mostly attributed to the dynamics of the local coastal atmospheric boundary layer and the effect of the mesoscale land/sea breeze circulation. Bursts of ultrafine particles (UFP) occurred quite frequently, apparently during periods of land-sea breeze transitions, caused by strong mixing of precursor-rich urban air mass with the cleaner marine air mass, the resulting turbulence along with boundary layer dynamics aiding nucleation. These ex-situ particles were observed at the surface due to transport associated with boundary layer dynamics. The particle growth rates from
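A bimodal lognormal parameterisation of the kind reported here can be evaluated numerically as below. Only the mode diameters are taken from the ranges quoted in the abstract; the mode number concentrations and geometric standard deviations are illustrative assumptions.

```python
import numpy as np

def lognormal_mode(D, N, Dg, sigma_g):
    """dN/dlog10(D) of one lognormal mode: total number N (cm^-3),
    geometric mean diameter Dg (nm), geometric standard deviation sigma_g."""
    log_sig = np.log10(sigma_g)
    return (N / (np.sqrt(2 * np.pi) * log_sig)) * \
        np.exp(-np.log10(D / Dg) ** 2 / (2 * log_sig ** 2))

D = np.logspace(np.log10(15), np.log10(15000), 200)   # 15 nm - 15 um grid
# accumulation + coarse mode; Dg values lie in the quoted seasonal ranges,
# N and sigma_g are assumed for illustration
dNdlogD = lognormal_mode(D, N=800, Dg=150, sigma_g=1.8) \
        + lognormal_mode(D, N=5, Dg=1400, sigma_g=1.6)

dlogD = (np.log10(D[-1]) - np.log10(D[0])) / (len(D) - 1)
total = dNdlogD.sum() * dlogD                 # approx. total number concentration
ultrafine = dNdlogD[D < 100].sum() * dlogD    # D < 100 nm contribution
print(round(float(ultrafine / total), 2))     # ultrafine number fraction
```

With these assumed parameters the ultrafine fraction comes out near one quarter; reproducing the 45% reported in the abstract would require a distinct nucleation/Aitken mode.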

  18. Detecting surface runoff location in a small catchment using distributed and simple observation method

    NASA Astrophysics Data System (ADS)

    Dehotin, Judicaël; Breil, Pascal; Braud, Isabelle; de Lavenne, Alban; Lagouy, Mickaël; Sarrazin, Benoît

    2015-06-01

Surface runoff is one of the hydrological processes involved in floods, pollution transfer, soil erosion and mudslides. Many models allow the simulation and mapping of surface runoff and erosion hazards. Field observations of this hydrological process are not common, although they are crucial to evaluate surface runoff models and to investigate or assess the various hazards linked to this process. In this study, a simple field monitoring network is implemented to assess the relevance of a surface runoff susceptibility mapping method. The network is based on spatially distributed observations (nine different locations in the catchment) of soil water content and rainfall events. These data are analyzed to determine whether surface runoff occurs. Two surface runoff mechanisms are considered: surface runoff by saturation of the soil surface horizon and surface runoff by infiltration excess (also called Hortonian runoff). The monitoring strategy includes continuous records of soil surface water content and rainfall with a 5 min time step. Soil infiltration capacity time series are calculated from field soil water content and in situ measurements of soil hydraulic conductivity. Comparing the soil infiltration capacity and rainfall intensity time series allows detection of surface runoff by infiltration excess. Comparing surface soil water content with saturated water content values allows detection of surface runoff by saturation of the soil surface horizon. Automatic records were complemented with direct field observations of surface runoff in the experimental catchment after each significant rainfall event. The presented observation method allows the identification of fast and short-lived surface runoff processes at small spatial and temporal resolution under natural conditions. The results also highlight the relationship between surface runoff and factors usually integrated in surface runoff mapping such as topography, rainfall
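The two detection rules, saturation excess when the surface water content reaches the saturated value during rain, and infiltration (Hortonian) excess when rainfall intensity exceeds infiltration capacity, can be sketched per 5-min time step. The series and tolerance below are hypothetical, not the study's data.

```python
def detect_runoff(rain_mm_h, infil_cap_mm_h, theta, theta_sat, tol=0.01):
    """Classify each 5-min time step by runoff mechanism.

    rain_mm_h      : rainfall intensity series (mm/h)
    infil_cap_mm_h : soil infiltration capacity series (mm/h)
    theta          : surface soil volumetric water content series
    theta_sat      : saturated water content of the surface horizon
    Returns a list of 'horton', 'saturation', or None per step.
    """
    out = []
    for r, f, th in zip(rain_mm_h, infil_cap_mm_h, theta):
        if r > 0 and th >= theta_sat - tol:
            out.append('saturation')     # saturation-excess runoff
        elif r > f:
            out.append('horton')         # infiltration-excess runoff
        else:
            out.append(None)             # no surface runoff detected
    return out

flags = detect_runoff(
    rain_mm_h=[0, 10, 40, 5],
    infil_cap_mm_h=[30, 30, 25, 20],
    theta=[0.30, 0.32, 0.35, 0.42],
    theta_sat=0.42)
print(flags)   # → [None, None, 'horton', 'saturation']
```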

  19. Relation Between Sprite Distribution and Source Locations of VHF Pulses Derived From JEM- GLIMS Measurements

    NASA Astrophysics Data System (ADS)

    Sato, Mitsuteru; Mihara, Masahiro; Ushio, Tomoo; Morimoto, Takeshi; Kikuchi, Hiroshi; Adachi, Toru; Suzuki, Makoto; Yamazaki, Atsushi; Takahashi, Yukihiro

    2015-04-01

JEM-GLIMS has been conducting comprehensive nadir observations of lightning and TLEs using optical instruments and electromagnetic wave receivers since November 2012. In the period between November 20, 2012 and November 30, 2014, JEM-GLIMS detected 5,048 lightning events. Of these, 567 were TLEs, mostly elves. To identify sprite occurrences in the transient optical flash data, it is necessary to perform the following analysis: (1) subtraction of the appropriately scaled wideband camera data from the narrowband camera data; (2) calculation of the intensity ratio between different spectrophotometer channels; and (3) estimation of the polarity and CMC of the parent CG discharges using ground-based ELF measurement data. From a synthetic comparison of these results, it is confirmed that JEM-GLIMS succeeded in detecting sprite events. The VHF receiver (VITF) onboard JEM-GLIMS uses two patch-type antennas separated by a 1.6-m interval and can detect VHF pulses emitted by lightning discharges in the 70-100 MHz frequency range. Using both an interferometric technique and a group-delay technique, we can estimate the source locations of VHF pulses excited by lightning discharges. In the event detected at 06:41:15.68565 UT on June 12, 2014 over central North America, the sprite was displaced horizontally by 20 km from the peak location of the parent lightning emission. In this event, a total of 180 VHF pulses were simultaneously detected by VITF. From detailed analysis of these VHF pulse data, it is found that the majority of the source locations were placed near the area of the dim lightning emission, which may imply that the VHF pulses were associated with the in-cloud lightning current.
At the presentation, we will show detailed comparison between the spatiotemporal characteristics of sprite emission and source locations of VHF pulses excited by the parent lightning
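For a two-element interferometer such as VITF (1.6-m baseline, 70-100 MHz band), the angle of arrival follows from the measured inter-antenna phase difference via delta_phi = 2*pi*d*sin(theta)/lambda. A minimal sketch; the frequency and phase values below are illustrative, not from the reported event.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def arrival_angle(delta_phi_rad, freq_hz, baseline_m=1.6):
    """Angle of arrival (rad, measured from broadside) for a two-element
    interferometer, from the measured inter-antenna phase difference."""
    lam = C / freq_hz
    s = delta_phi_rad * lam / (2 * math.pi * baseline_m)
    if abs(s) > 1:
        raise ValueError("phase difference implies |sin(theta)| > 1 (ambiguous)")
    return math.asin(s)

# a 90-degree phase lag at 85 MHz on the 1.6 m baseline:
theta = arrival_angle(math.pi / 2, 85e6)
print(round(math.degrees(theta), 1))   # → 33.4
```

Note that at the top of the band the baseline exceeds half a wavelength, so large phase differences become ambiguous; this is one reason a group-delay technique is used alongside the interferometry.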

  20. Experimental verification of a model describing the intensity distribution from a single mode optical fiber

    SciTech Connect

    Moro, Erik A; Puckett, Anthony D; Todd, Michael D

    2011-01-24

The intensity distribution of a transmission from a single-mode optical fiber is often approximated using a Gaussian-shaped curve. While this approximation is useful for some applications, such as fiber alignment, it does not accurately describe transmission behavior off the axis of propagation. In this paper, another model is presented, which describes the intensity distribution of the transmission from a single-mode optical fiber. A simple experimental setup is used to verify the model's accuracy, and agreement between model and experiment is established both on and off the axis of propagation. Displacement sensor designs based on the extrinsic optical lever architecture are presented. The behavior of the transmission off the axis of propagation dictates the performance of sensor architectures in which large lateral offsets (25-1500 µm) exist between transmitting and receiving fibers. The practical implications of modeling accuracy over this lateral offset region are discussed as they relate to the development of high-performance intensity-modulated optical displacement sensors. In particular, the sensitivity, linearity, resolution, and displacement range of a sensor are functions of the relative positioning of the sensor's transmitting and receiving fibers. Sensor architectures with high combinations of sensitivity and displacement range are discussed. It is concluded that the utility of the accurate model is in its predictive capability and that this research could lead to an improved methodology for high-performance sensor design.
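The Gaussian approximation that the abstract says breaks down off-axis can be written down directly; the sketch below evaluates the normalised Gaussian-beam intensity versus lateral offset and axial distance. The mode-field radius and wavelength are assumed values typical of single-mode fiber near 1550 nm; the paper's improved model is not reproduced here.

```python
import math

def gaussian_intensity(r_um, z_um, w0_um=5.2, wavelength_um=1.55):
    """Gaussian-beam approximation of the intensity (normalised to the
    on-axis value at the fiber face) at lateral offset r and axial
    distance z from a single-mode fiber with mode-field radius w0."""
    z_R = math.pi * w0_um ** 2 / wavelength_um      # Rayleigh range
    w = w0_um * math.sqrt(1 + (z_um / z_R) ** 2)    # beam radius at z
    return (w0_um / w) ** 2 * math.exp(-2 * r_um ** 2 / w ** 2)

# intensity 500 um from the fiber face: on-axis vs 100 um lateral offset
print(gaussian_intensity(0, 500), gaussian_intensity(100, 500))
```

In this model the off-axis intensity falls off exponentially in the square of the offset, which is exactly the regime (25-1500 µm offsets) where the paper reports the Gaussian shape to be inaccurate.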

  1. Experimental study and verification of the residence time distribution using fluorescence spectroscopy and color measurement

    NASA Astrophysics Data System (ADS)

    Aigner, Michael; Lepschi, Alexander; Aigner, Jakob; Garmendia, Izaro; Miethlinger, Jürgen

    2015-05-01

    We report on the inline measurement of residence time (RT) and residence time distribution (RTD) by means of fluorescence spectroscopy [1] and optical color measurements [2]. Measurements of thermoplastics in a variety of single-screw extruders were conducted. To assess the influence of screw configurations, screw speeds and mass throughput on the RT and RTD, tracer particles were introduced into the feeding section and the RT was measured inline in the plasticization unit. Using special measurement probes that can be inserted into 1/2″ - 20 UNF (unified fine thread) bore holes, the mixing ability of either the whole plasticization unit or selected screw regions, e.g., mixing parts, can be validated during the extrusion process. The measurement setups complement each other well, and their combined use can provide further insights into the mixing behavior of single-screw plasticization units.
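Once an inline tracer signal has been recorded, the mean residence time and the RTD width follow from the usual moment integrals E(t) = c(t)/∫c dt and t_mean = ∫t E(t) dt. A minimal sketch with a synthetic tracer pulse, assuming the optical signal is proportional to tracer concentration:

```python
import numpy as np

def rtd_moments(t, c):
    """Mean residence time and variance of the RTD from a tracer
    response curve sampled at uniform intervals.

    t : sample times (s); c : tracer signal (e.g. fluorescence
    intensity), assumed proportional to tracer concentration.
    """
    dt = t[1] - t[0]                           # uniform sampling assumed
    E = c / (c.sum() * dt)                     # normalised RTD: integral E dt = 1
    t_mean = (t * E).sum() * dt                # first moment: mean residence time
    var = ((t - t_mean) ** 2 * E).sum() * dt   # second central moment
    return t_mean, var

t = np.linspace(0, 600, 601)                   # 10 min record, 1 s steps
c = np.exp(-(t - 120) ** 2 / (2 * 30 ** 2))    # synthetic tracer pulse at 120 s
t_mean, var = rtd_moments(t, c)
print(round(t_mean), round(var ** 0.5))        # → 120 30
```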

  2. Verification of dose distribution for volumetric modulated arc therapy total marrow irradiation in a humanlike phantom

    SciTech Connect

    Surucu, Murat; Yeginer, Mete; Kavak, Gulbin O.; Fan, John; Radosevich, James A.; Aydogan, Bulent

    2012-01-15

Purpose: Volumetric modulated arc therapy (VMAT) treatment planning studies have been reported to provide good target coverage and organs-at-risk (OAR) sparing in total marrow irradiation (TMI). A comprehensive dosimetric study simulating the clinical situation as closely as possible is the norm in radiotherapy before a technique can be used to treat a patient. Without such a study, it would be difficult to make a reliable and safe clinical transition, especially with a technique as complicated as VMAT-TMI. To this end, the dosimetric feasibility of the VMAT-TMI technique in terms of treatment planning, delivery efficiency, and, most importantly, three-dimensional dose distribution accuracy was investigated in this study. The VMAT-TMI dose distribution inside a humanlike Rando phantom was measured and compared to the dose calculated with RapidArc, especially in the field junctions and in inhomogeneous tissues including the lungs, the dose-limiting organ in TMI. Methods: Three subplans with a total of nine arcs were used to treat the planning target volume (PTV), defined as all the bones plus a 3 mm margin. Thermoluminescent detectors (TLDs) were placed at 39 positions throughout the phantom. The measured TLD doses were compared to the calculated plan doses. The planar dose for each arc was verified using MapCHECK. Results: TLD readings demonstrated accurate dose delivery, with a median dose difference of 0.5% (range: -4.3% to 6.6%) from the calculated dose in the junctions and in inhomogeneous media including the lungs. Conclusions: The results from this study suggest that the RapidArc VMAT technique is dosimetrically accurate, safe, and efficient in delivering TMI within a clinically acceptable time frame.
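The reported comparison reduces to percent differences between measured TLD doses and the treatment-planning-system calculation; a sketch with hypothetical readings (not the study's data):

```python
import numpy as np

def dose_diff_stats(measured, calculated):
    """Percent difference of measured TLD doses from calculated doses;
    returns (median, min, max) in percent."""
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calculated, dtype=float)
    diff = 100.0 * (m - c) / c
    return float(np.median(diff)), float(diff.min()), float(diff.max())

# illustrative readings (Gy) at a few of the 39 TLD positions
measured   = [11.8, 12.3, 12.1, 11.5, 12.6]
calculated = [12.0, 12.0, 12.0, 12.0, 12.0]
med, lo, hi = dose_diff_stats(measured, calculated)
print(round(med, 1), round(lo, 1), round(hi, 1))   # → 0.8 -4.2 5.0
```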

  3. System performance and performance enhancement relative to element position location errors for distributed linear antenna arrays

    NASA Astrophysics Data System (ADS)

    Adrian, Andrew

For the most part, antenna phased arrays have traditionally been composed of antenna elements that are very carefully and precisely placed in periodic grid structures. Additionally, the relative positions of the elements with respect to each other are typically mechanically fixed as well as possible. There is never an assumption that the relative positions of the elements are a function of time or of some random behavior. In fact, every array design is typically analyzed for the element position tolerances necessary to meet performance requirements such as directivity, beamwidth, sidelobe level, and beam scanning capability. Consider instead an antenna array composed of several radiating elements in which the position of each element is not rigidly, mechanically fixed as in a traditional array. This is not to say that the element placement structure is ignored or irrelevant, but each element is not always in its desired relative location. Relative element positioning would be analogous to a flock of birds in flight or a swarm of insects: they tend to maintain a nearly fixed position within the group, but not always. In the antenna array analog, it would be desirable to maintain a fixed formation, but due to other random processes it is not always possible to maintain perfect formation. This type of antenna array is referred to as a distributed antenna array. A distributed antenna array's inability to maintain perfect formation causes degradations in the array factor pattern. Directivity, beamwidth, sidelobe level and beam pointing error are all adversely affected by relative element position error. This impact is studied as a function of relative element position error for linear antenna arrays. The study is performed over several nominal array element spacings, from lambda to lambda, several sidelobe levels (20 to 50 dB) and multiple array illumination tapers.
Knowing the variation in performance, work is also performed to utilize a minimum
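The degradation mechanism studied here can be reproduced by evaluating the array factor with perturbed element positions. A sketch for a 16-element, half-wavelength-spaced, uniformly weighted linear array with an assumed 0.05-wavelength RMS position error (element count, taper, and error level are illustrative):

```python
import numpy as np

def array_factor_db(positions, weights, angles_rad, wavelength=1.0):
    """Normalised array factor magnitude (dB) of a linear array whose
    elements sit at `positions` (in units of the wavelength here)."""
    k = 2 * np.pi / wavelength
    phase = np.outer(np.sin(angles_rad), positions) * k
    af = np.abs(np.exp(1j * phase) @ weights)
    return 20 * np.log10(af / af.max())

def peak_sidelobe_db(af_db, angles_rad, mainlobe_halfwidth_deg=10.0):
    """Highest lobe outside the (assumed) main-beam region."""
    mask = np.abs(np.degrees(angles_rad)) > mainlobe_halfwidth_deg
    return af_db[mask].max()

rng = np.random.default_rng(1)
n, d = 16, 0.5                                  # 16 elements, lambda/2 nominal spacing
nominal = np.arange(n) * d
angles = np.linspace(-np.pi / 2, np.pi / 2, 2001)
w = np.ones(n)                                  # uniform illumination taper

ideal = array_factor_db(nominal, w, angles)
jitter = 0.05 * rng.standard_normal(n)          # assumed 0.05-lambda RMS position error
perturbed = array_factor_db(nominal + jitter, w, angles)

print(round(float(peak_sidelobe_db(ideal, angles)), 1),
      round(float(peak_sidelobe_db(perturbed, angles)), 1))
```

For the unperturbed uniform array the peak sidelobe sits near -13 dB; with random position error the sidelobe floor rises, which is the directivity/sidelobe degradation the dissertation quantifies.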

  4. Genomic distribution of AFLP markers relative to gene locations for different eukaryotic species

    PubMed Central

    2013-01-01

    Background Amplified fragment length polymorphism (AFLP) markers are frequently used for a wide range of studies, such as genome-wide mapping, population genetic diversity estimation, hybridization and introgression studies, phylogenetic analyses, and detection of signatures of selection. An important issue to be addressed for some of these fields is the distribution of the markers across the genome, particularly in relation to gene sequences. Results Using in-silico restriction fragment analysis of the genomes of nine eukaryotic species we characterise the distribution of AFLP fragments across the genome and, particularly, in relation to gene locations. First, we identify the physical position of markers across the chromosomes of all species. An observed accumulation of fragments around (peri) centromeric regions in some species is produced by repeated sequences, and this accumulation disappears when AFLP bands rather than fragments are considered. Second, we calculate the percentage of AFLP markers positioned within gene sequences. For the typical EcoRI/MseI enzyme pair, this ranges between 28 and 87% and is usually larger than that expected by chance because of the higher GC content of gene sequences relative to intergenic ones. In agreement with this, the use of enzyme pairs with GC-rich restriction sites substantially increases the above percentages. For example, using the enzyme system SacI/HpaII, 86% of AFLP markers are located within gene sequences in A. thaliana, and 100% of markers in Plasmodium falciparun. We further find that for a typical trait controlled by 50 genes of average size, if 1000 AFLPs are used in a study, the number of those within 1 kb distance from any of the genes would be only about 1–2, and only about 50% of the genes would have markers within that distance. Conclusions The high coverage of AFLP markers across the genomes and the high proportion of markers within or close to gene sequences make them suitable for genome scans and

  5. Verification of the efficiency of chemical disinfection and sanitation measures in in-building distribution systems.

    PubMed

    Lenz, J; Linke, S; Gemein, S; Exner, M; Gebel, J

    2010-06-01

Previous investigations of biofilms generated in a silicone tube model have shown that the number of colony forming units (CFU) can reach 10^7/cm^2 and the total cell count (TCC) of microorganisms can be up to 10^8 cells/cm^2. The present study focuses on the situation in in-building distribution systems. Different chemical disinfectants were tested for their efficacy against drinking water biofilms in silicone tubes: free chlorine (electrochemically activated), chlorine dioxide, hydrogen peroxide (H2O2), silver, and fruit acids. With regard to the widely differing manufacturers' instructions for the use of their disinfectants, three variations of the silicone tube model were developed to simulate practical use conditions: first, continuous treatment; second, intermittent treatment; third, external disinfection treatment together with monitoring for possible biofilm formation with the Hygiene-Monitor. Working experience showed that it is important to know how to handle the individual disinfectants. Every active ingredient has its own optimal application with respect to concentration, exposure time and physical parameters such as pH, temperature or redox potential. When used correctly, all products tested were able to reduce the CFU to a value below the detection limit. Most of the active ingredients could not significantly reduce the TCC/cm^2, which means that viable microorganisms may still be present in the system. Thus the question arises: what happened to these cells? In some cases SEM images of the biofilm matrix after a successful disinfection still showed biofilm residues. According to these results, no general correlation between CFU/cm^2, TCC/cm^2 and the visualised biofilm matrix on the silicone tube surface (SEM) could be demonstrated after treatment with disinfectants. PMID:20472500
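A reduction "below the detection limit", as reported for the CFU counts here, is conventionally expressed as a minimum log10 reduction. A small helper; the detection limit value is an assumption for illustration:

```python
import math

def log_reduction(cfu_before, cfu_after, detection_limit=1.0):
    """Log10 reduction achieved by a disinfection step; counts below
    the detection limit are reported as a minimum ('>=') reduction."""
    if cfu_after < detection_limit:
        return f">= {math.log10(cfu_before / detection_limit):.1f} log"
    return f"{math.log10(cfu_before / cfu_after):.1f} log"

# a biofilm at 10^7 CFU/cm^2 reduced below a 1 CFU/cm^2 detection limit:
print(log_reduction(1e7, 0.0))   # → >= 7.0 log
```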

  6. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099
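The optimization formulated in the abstract, choosing sensor locations that minimize the number of non-isolable leaks (leaks whose binarized sensitivity signatures on the chosen sensors coincide), can be sketched with a toy sensitivity matrix and a bare-bones GA. This is a minimal illustration (elitism plus mutation only); the study's encoding, operators, and robustness extensions are richer.

```python
import itertools
import random

def non_isolable_pairs(S, sensors):
    """Count leak pairs with identical binary signatures on the chosen
    sensors (S[i][j] = 1 if sensor j is sensitive to leak i)."""
    sig = [tuple(S[i][j] for j in sensors) for i in range(len(S))]
    return sum(1 for a, b in itertools.combinations(sig, 2) if a == b)

def ga_place(S, n_sensors, pop=30, gens=60, seed=2):
    rng = random.Random(seed)
    m = len(S[0])                      # number of candidate sensor nodes
    def rand_ind():
        return tuple(sorted(rng.sample(range(m), n_sensors)))
    def mutate(ind):
        new = set(ind)
        new.discard(rng.choice(ind))   # drop one sensor ...
        while len(new) < n_sensors:    # ... and add random replacements
            new.add(rng.randrange(m))
        return tuple(sorted(new))
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: non_isolable_pairs(S, ind))
        elite = population[:pop // 2]
        population = elite + [mutate(rng.choice(elite)) for _ in elite]
    return population[0], non_isolable_pairs(S, population[0])

# toy problem: 5 leak scenarios x 6 candidate sensor nodes (hypothetical)
S = [[1, 0, 0, 1, 0, 1],
     [1, 1, 0, 0, 0, 1],
     [0, 1, 1, 0, 0, 1],
     [0, 0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1, 1]]
best, cost = ga_place(S, n_sensors=2)
print(best, cost)
```

With two sensors only four distinct binary signatures exist, so by pigeonhole at least one of the five leaks must collide with another; the GA should find a placement achieving that minimum of one non-isolable pair.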

  7. Impact detection, location, and characterization using spatially weighted distributed fiber optic sensors

    NASA Astrophysics Data System (ADS)

    Spillman, William B., Jr.; Huston, Dryver R.

    1996-11-01

    The ability to detect, localize and characterize impacts in real time is of critical importance for the safe operation of aircraft, spacecraft and other vehicles, particularly in light of the increasing use of high performance composite materials with unconventional and often catastrophic failure modes. Although a number of systems based on fiber optic sensors have been proposed or demonstrated, they have generally proved not to be useful due to difficulty of implementation, limited accuracy or high cost. In this paper, we present the results of an investigation using two spatially weighted distributed fiber optic sensors to detect, localize and characterize impacts along an extended linear region. By having the sensors co-located with one having sensitivity to impacts ranging from low to high along its length while the other sensor has sensitivity ranging from high to low along the same path, impacts can be localized and their magnitudes determined using a very simple algorithm. A theoretical description of the techniques is given and compared with experimental results.
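The "very simple algorithm" enabled by opposed linear weightings can be sketched directly: if one sensor's output scales as A·x/L and the co-located sensor's as A·(1-x/L), their ratio gives the impact position and their sum the magnitude. This assumes that idealized linear response model, not the authors' calibration.

```python
def locate_impact(v_rise, v_fall, length_m=1.0):
    """Recover impact position and relative magnitude from two co-located,
    oppositely weighted distributed sensors.

    Assumed responses: v_rise = A * (x / L), v_fall = A * (1 - x / L),
    so x = L * v_rise / (v_rise + v_fall) and A = v_rise + v_fall.
    """
    total = v_rise + v_fall
    if total <= 0:
        raise ValueError("no impact signal")
    return length_m * v_rise / total, total

# impact of relative magnitude 2.0 at 0.3 m along a 1 m gauge length:
x, A = locate_impact(v_rise=2.0 * 0.3, v_fall=2.0 * 0.7)
print(round(x, 3), round(A, 3))   # → 0.3 2.0
```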

  8. Benign epithelial gastric polyps--frequency, location, and age and sex distribution.

    PubMed

    Ljubicić, N; Kujundzić, M; Roić, G; Banić, M; Cupić, H; Doko, M; Zovak, M

    2002-06-01

A prospective investigation was undertaken to study the frequency, location, and age and sex distribution of various histological types of benign gastric epithelial polyps. The histological type--adenomatous, hyperplastic or fundic gland polyp--was diagnosed on the basis of at least three histological samples taken from the polyp. Biopsy samples were also taken from the antrum and the body of the stomach so that gastritis could be graded and classified, and the presence of H. pylori could be determined by histology. All 6,700 patients who had undergone upper gastrointestinal endoscopy in a one-year period participated in this study. Among them, 42 benign gastric epithelial polyps were found in 31 patients: adenomatous gastric polyps in 7 patients, hyperplastic gastric polyps in 21 and fundic gland polyps in 3 patients. All patients with hyperplastic polyps had chronic active superficial gastritis, whereas most of the patients with adenomatous polyps had chronic atrophic gastritis with a high prevalence of intestinal metaplasia. Among the 21 patients with hyperplastic gastric polyps, 16 (76%) were positive for H. pylori infection, in contrast to only 2 patients (29%) with adenomatous gastric polyps and 1 patient (33%) with a fundic gland polyp. The presented data indicate that hyperplastic gastric polyps are the most common and are associated with the presence of chronic active superficial gastritis and concomitant H. pylori infection. Adenomatous polyps are rarer and tend to be associated with chronic atrophic gastritis and intestinal metaplasia. Fundic gland polyps are the rarest type of gastric polyp. PMID:12137323

  9. Lexical distributional cues, but not situational cues, are readily used to learn abstract locative verb-structure associations.

    PubMed

    Twomey, Katherine E; Chang, Franklin; Ambridge, Ben

    2016-08-01

    Children must learn the structural biases of locative verbs in order to avoid making overgeneralisation errors (e.g., (∗)I filled water into the glass). It is thought that they use linguistic and situational information to learn verb classes that encode structural biases. In addition to situational cues, we examined whether children and adults could use the lexical distribution of nouns in the post-verbal noun phrase of transitive utterances to assign novel verbs to locative classes. In Experiment 1, children and adults used lexical distributional cues to assign verb classes, but were unable to use situational cues appropriately. In Experiment 2, adults generalised distributionally-learned classes to novel verb arguments, demonstrating that distributional information can cue abstract verb classes. Taken together, these studies show that human language learners can use a lexical distributional mechanism that is similar to that used by computational linguistic systems that use large unlabelled corpora to learn verb meaning. PMID:27183399

  10. Spatial distribution of soil water repellency in a grassland located in Lithuania

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Novara, Agata

    2014-05-01

Soil water repellency (SWR) is recognized to be very heterogeneous in time and space and depends on soil type, climate, land use, vegetation and season (Doerr et al., 2002). It prevents or reduces water infiltration, with important impacts on soil hydrology, influencing the mobilization and transport of substances into the soil profile. The reduced infiltration increases surface runoff and soil erosion. SWR also reduces seed emergence and plant growth due to the reduced amount of water in the root zone. Positive aspects of SWR are the increase of soil aggregate stability, organic carbon sequestration and the reduction of water evaporation (Mataix-Solera and Doerr, 2004; Diehl, 2013). SWR depends on the soil aggregate size. In fire-affected areas it was found that SWR was more persistent in small-size aggregates (Mataix-Solera and Doerr, 2004; Jordan et al., 2011). However, little information is available about the spatial distribution of SWR according to soil aggregate size. The aim of this work is to study the spatial distribution of SWR in fine earth (<2 mm) and in different aggregate sizes: 2-1 mm, 1-0.5 mm, 0.5-0.25 mm and <0.25 mm. The studied area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08' E, 158 masl. A 400 m2 plot (20 x 20 m, with 5 m spacing between sampling points) was established, and 25 soil samples were collected from the topsoil (0-5 cm) and taken to the laboratory. Prior to SWR assessment, the samples were air dried. The persistence of SWR was analysed according to the Water Drop Penetration Time method, which involves placing three drops of distilled water onto the soil surface and registering the time in seconds (s) required for complete drop penetration (Wessel, 1988). The data did not follow a Gaussian distribution; thus, in order to meet normality requirements, they were log-transformed. Spatial interpolations were carried out using ordinary kriging. The results showed that SWR in fine earth was on average 2.88 s (coefficient of variation (CV%)=44.62), 2

  11. Responses of European precipitation distributions and regimes to different blocking locations

    NASA Astrophysics Data System (ADS)

    Sousa, Pedro M.; Trigo, Ricardo M.; Barriopedro, David; Soares, Pedro M. M.; Ramos, Alexandre M.; Liberato, Margarida L. R.

    2016-04-01

In this work we analysed the impacts of blocking episodes on seasonal and annual European precipitation and the associated physical mechanisms. Distinct domains were considered in detail, taking into account different blocking center positions spanning the region between the Atlantic and western Russia. Significant positive precipitation anomalies are found for the southernmost areas, while generalized negative anomalies (up to 75% in some areas) occur in large parts of central and northern Europe. This dipole of anomalies is reversed when compared to that observed during episodes of strong zonal flow conditions. We illustrate that the location of the maximum precipitation anomalies follows quite well the longitudinal positioning of the blocking centers, and we discuss regional and seasonal differences in the precipitation responses. To better understand the precipitation anomalies, we explore the blocking influence on cyclonic activity. The results indicate a split of the storm tracks north and south of blocking systems, leading to an almost complete reduction of cyclonic centers in northern and central Europe and increases in southern areas, where cyclone frequency doubles during blocking episodes. However, the underlying processes conducive to the precipitation anomalies differ between northern and southern European regions, with a significant role for atmospheric instability in southern Europe and moisture availability as the major driver at higher latitudes. This distinction is consistent with the characteristic patterns of latent heat release from the ocean associated with blocked and strong zonal flow patterns. We also analyzed changes in the full range of the precipitation distribution of several regional sectors during blocked and zonal days. Results show that precipitation reductions in the areas under direct blocking influence are driven by a substantial drop in the frequency of moderate rainfall classes. Contrarily, southwards of

  12. Where exactly am I? Self-location judgements distribute between head and torso.

    PubMed

    Alsmith, Adrian J T; Longo, Matthew R

    2014-02-01

    I am clearly located where my body is located. But is there one particular place inside my body where I am? Recent results have provided apparently contradictory findings about this question. Here, we addressed this issue using a more direct approach than has been used in previous studies. Using a simple pointing task, we asked participants to point directly at themselves, either by manual manipulation of the pointer whilst blindfolded or by visually discerning when the pointer was in the correct position. Self-location judgements in haptic and visual modalities were highly similar, and were clearly modulated by the starting location of the pointer. Participants most frequently chose to point to one of two likely regions, the upper face or the upper torso, according to which they reached first. These results suggest that while the experienced self is not spread out homogeneously across the entire body, nor is it localised in any single point. Rather, two distinct regions, the upper face and upper torso, appear to be judged as where "I" am. PMID:24457520

  13. On intra-supply chain system with an improved distribution plan, multiple sales locations and quality assurance.

    PubMed

    Chiu, Singa Wang; Huang, Chao-Chih; Chiang, Kuo-Wei; Wu, Mei-Fang

    2015-01-01

Transnational companies, operating in extremely competitive global markets, always seek to lower various operating costs, such as inventory holding costs in their intra-supply chain systems. This paper incorporates a cost-reducing product distribution policy into an intra-supply chain system with multiple sales locations and quality assurance studied by [Chiu et al., Expert Syst Appl, 40:2669-2676, (2013)]. Under the proposed cost-reducing distribution policy, an added initial delivery of end items is distributed to multiple sales locations to meet their demand during the production unit's uptime and rework time. After rework, when the remaining production lot has passed quality assurance, n fixed-quantity installments of finished items are transported to the sales locations at fixed time intervals. Mathematical modeling and optimization techniques are used to derive closed-form optimal operating policies for the proposed system. Furthermore, the study demonstrates significant savings in stock holding costs for both the production unit and the sales locations. The alternative of outsourcing the product delivery task to an external distributor is analyzed to assist managerial decision-making on potential outsourcing issues and to facilitate further reduction of operating costs. PMID:26576330

  14. The hemodynamic effects of the LVAD outflow cannula location on the thrombi distribution in the aorta: A primary numerical study.

    PubMed

    Zhang, Yage; Gao, Bin; Yu, Chang

    2016-09-01

    Although a growing number of patients undergo LVAD implantation for heart failure treatment, thrombi remain a devastating complication for patients with an LVAD. The LVAD outflow cannula location and the thrombi generation sources were hypothesized to affect the thrombi distribution in the aorta. To test this hypothesis, numerical studies were conducted using computational fluid dynamics (CFD). Two anastomotic configurations, in which the LVAD outflow cannula is anastomosed to the anterior or the lateral ascending aortic wall (named the anterior and lateral configurations, respectively), are designed. Particles whose sizes are the same as those of thrombi are released at the LVAD outflow cannula and at the aortic valve (named thrombiP and thrombiL, respectively) to calculate the distribution of thrombi. The simulation results demonstrate that the thrombi distribution in the aorta is significantly affected by the LVAD outflow cannula location. In the anterior configuration, the probability of thrombi entering the three branches is 23.60%, while in the lateral configuration it is 36.68%. Similarly, in the anterior configuration, the probabilities of thrombi entering the brachiocephalic artery, left common carotid artery, and left subclavian artery are 8.51%, 9.64%, and 5.45%, respectively, while in the lateral configuration they are 11.39%, 3.09%, and 22.20%, respectively. Moreover, the origins of thrombi affect their distributions in the aorta. In the anterior configuration, thrombiP have a lower probability of entering the three branches than thrombiL (12% vs. 25%). In contrast, in the lateral configuration, thrombiP have a higher probability of entering the three branches than thrombiL (47% vs. 35%). In brief, the LVAD outflow cannula location significantly affects the distribution of thrombi in the aorta. Thus, in clinical practice, the selection of the LVAD outflow location and the risk of thrombi formed in the left ventricle should be paid more

  15. Condensation of earthquake location distributions: Optimal spatial information encoding and application to multifractal analysis of south Californian seismicity.

    PubMed

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2015-08-01

    We present the "condensation" method that exploits the heterogeneity of the probability distribution functions (PDFs) of event locations to improve the spatial information content of seismic catalogs. As its name indicates, the condensation method reduces the size of seismic catalogs while improving the access to the spatial information content of seismic catalogs. The PDFs of events are first ranked by decreasing location errors and then successively condensed onto better located and lower variance event PDFs. The obtained condensed catalog differs from the initial catalog by attributing different weights to each event, the set of weights providing an optimal spatial representation with respect to the spatially varying location capability of the seismic network. Synthetic tests on fractal distributions perturbed with realistic location errors show that condensation improves spatial information content of the original catalog, which is quantified by the likelihood gain per event. Applied to Southern California seismicity, the new condensed catalog highlights major mapped fault traces and reveals possible additional structures while reducing the catalog length by ∼25%. The condensation method allows us to account for location error information within a point based spatial analysis. We demonstrate this by comparing the multifractal properties of the condensed catalog locations with those of the original catalog. We evidence different spatial scaling regimes characterized by distinct multifractal spectra and separated by transition scales. We interpret the upper scale as agreeing with the thickness of the brittle crust, while the lower scale (2.5 km) might depend on the relocation procedure. Accounting for these new results, the epidemic type aftershock model formulation suggests that, contrary to previous studies, large earthquakes dominate the earthquake triggering process. This implies that the limited capability of detecting small magnitude events cannot be used
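    The core condensation step, ranking events by location error and transferring each poorly located event's weight onto the best-overlapping better-located event, can be sketched in a few lines. This is a 1-D toy with Gaussian location PDFs and a simplified Gaussian-overlap measure, not the authors' implementation; the function name and overlap choice are illustrative.

```python
import numpy as np

def condense(locs, sigmas):
    """Toy 1-D condensation: every event starts with weight 1; events are
    visited in order of decreasing location error (sigma), and each transfers
    to its best-overlapping better-located neighbor the fraction of its
    weight given by a Gaussian overlap measure.  Total weight is conserved."""
    locs, sigmas = np.asarray(locs, float), np.asarray(sigmas, float)
    w = np.ones(len(locs))
    order = np.argsort(sigmas)[::-1]              # worst-located first
    for k, i in enumerate(order):
        better = order[k + 1:]                    # events with smaller sigma
        if len(better) == 0:
            break
        overlap = np.exp(-(locs[i] - locs[better]) ** 2
                         / (4 * (sigmas[i] ** 2 + sigmas[better] ** 2)))
        j = better[np.argmax(overlap)]
        t = w[i] * overlap.max()                  # weight to transfer
        w[j] += t
        w[i] -= t
    return w
```

    Events left with near-zero weight can then be dropped, which is how the catalog shrinks while the total weight (number of events) is preserved.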

  16. Condensation of earthquake location distributions: Optimal spatial information encoding and application to multifractal analysis of south Californian seismicity

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2015-08-01

    We present the "condensation" method that exploits the heterogeneity of the probability distribution functions (PDFs) of event locations to improve the spatial information content of seismic catalogs. As its name indicates, the condensation method reduces the size of seismic catalogs while improving the access to the spatial information content of seismic catalogs. The PDFs of events are first ranked by decreasing location errors and then successively condensed onto better located and lower variance event PDFs. The obtained condensed catalog differs from the initial catalog by attributing different weights to each event, the set of weights providing an optimal spatial representation with respect to the spatially varying location capability of the seismic network. Synthetic tests on fractal distributions perturbed with realistic location errors show that condensation improves spatial information content of the original catalog, which is quantified by the likelihood gain per event. Applied to Southern California seismicity, the new condensed catalog highlights major mapped fault traces and reveals possible additional structures while reducing the catalog length by ∼25%. The condensation method allows us to account for location error information within a point based spatial analysis. We demonstrate this by comparing the multifractal properties of the condensed catalog locations with those of the original catalog. We evidence different spatial scaling regimes characterized by distinct multifractal spectra and separated by transition scales. We interpret the upper scale as agreeing with the thickness of the brittle crust, while the lower scale (2.5 km) might depend on the relocation procedure. Accounting for these new results, the epidemic type aftershock model formulation suggests that, contrary to previous studies, large earthquakes dominate the earthquake triggering process. This implies that the limited capability of detecting small magnitude events cannot be

  17. Distributed fiber optic sensor employing phase generate carrier for disturbance detection and location

    NASA Astrophysics Data System (ADS)

    Xu, Haiyan; Wu, Hongyan; Zhang, Xuewu; Zhang, Zhuo; Li, Min

    2015-05-01

    The distributed optical fiber sensor is a new type of system that can be used for monitoring and inspection over long distances and in strong-EMI conditions. A method of external modulation with a phase modulator is proposed in this paper to improve the positioning accuracy of the disturbance in a distributed optical fiber sensor. We construct a distributed disturbance detection system based on a Michelson interferometer, with a phase modulator attached to the sensing fiber in front of the Faraday rotation mirror (FRM), to shift the signal produced by the interference of the two light beams reflected by the FRM to a high frequency, while other signals remain at low frequency. Through a high-pass filter and a phase retrieval circuit, a signal proportional to the external disturbance is acquired. The accuracy of disturbance positioning with this signal can be greatly improved. The method is quite simple and easy to implement. Theoretical analysis and experimental results show that this method can effectively improve the positioning accuracy.

  18. Syringe filtration methods for examining dissolved and colloidal trace element distributions in remote field locations

    NASA Technical Reports Server (NTRS)

    Shiller, Alan M.

    2003-01-01

    It is well-established that sampling and sample processing can easily introduce contamination into dissolved trace element samples if precautions are not taken. However, work in remote locations sometimes precludes bringing bulky clean lab equipment into the field and likewise may make timely transport of samples to the lab for processing impossible. Straightforward syringe filtration methods are described here for collecting small quantities (15 mL) of 0.45- and 0.02-microm filtered river water in an uncontaminated manner. These filtration methods take advantage of recent advances in analytical capabilities that require only small amounts of water for analysis of a suite of dissolved trace elements. Filter clogging and solute rejection artifacts appear to be minimal, although some adsorption of metals and organics does affect the first approximately 10 mL of water passing through the filters. Overall the methods are clean, easy to use, and provide reproducible representations of the dissolved and colloidal fractions of trace elements in river waters. Furthermore, sample processing materials can be prepared well in advance in a clean lab and transported cleanly and compactly to the field. Application of these methods is illustrated with data from remote locations in the Rocky Mountains and along the Yukon River. Evidence from field flow fractionation suggests that the 0.02-microm filters may provide a practical cutoff to distinguish metals associated with small inorganic and organic complexes from those associated with silicate and oxide colloids.

  19. Power approximation for the van Elteren test based on location-scale family of distributions.

    PubMed

    Zhao, Yan D; Qu, Yongming; Rahardja, Dewi

    2006-01-01

    The van Elteren test, as a type of stratified Wilcoxon-Mann-Whitney test for comparing two treatments accounting for stratum effects, has been used to replace the analysis of variance when the normality assumption was seriously violated. The sample size estimation methods for the van Elteren test have been proposed and evaluated previously. However, in designing an active-comparator trial where a sample of responses from the new treatment is available but the patient response data to the comparator are limited to summary statistics, the existing methods are either inapplicable or poorly behaved. In this paper we develop a new method for active-comparator trials assuming the responses from both treatments are from the same location-scale family. Theories and simulations have shown that the new method performs well when the location-scale assumption holds and works reasonably when the assumption does not hold. Thus, the new method is preferred when computing sample sizes for the van Elteren test in active-comparator trials. PMID:17146980
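    For reference, the van Elteren statistic itself is a weighted sum of per-stratum Wilcoxon rank-sum statistics with weights 1/(N_s + 1), standardized by its null variance. A minimal sketch of the test (not the paper's sample-size method), assuming continuous responses with no ties:

```python
import math
import numpy as np

def van_elteren(strata):
    """strata: list of (treatment_values, control_values) pairs, one per
    stratum.  Returns the standardized statistic z and a two-sided p-value
    from the normal approximation.  No tie correction is applied."""
    num, var = 0.0, 0.0
    for x, y in strata:
        x, y = np.asarray(x, float), np.asarray(y, float)
        n1, n2 = len(x), len(y)
        N = n1 + n2
        ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
        W = ranks[:n1].sum()                    # treatment-arm rank sum
        wt = 1.0 / (N + 1)                      # van Elteren weight
        num += wt * (W - n1 * (N + 1) / 2.0)    # centered statistic
        var += wt ** 2 * n1 * n2 * (N + 1) / 12.0
    z = num / math.sqrt(var)
    return z, math.erfc(abs(z) / math.sqrt(2.0))
```

    With the rank sums centered per stratum, stratum effects cancel, which is what makes the test suitable when normality fails but stratification matters.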

  20. MPL-Net Measurements of Aerosol and Cloud Vertical Distributions at Co-Located AERONET Sites

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Berkoff, Timothy A.; Spinhirne, James D.; Tsay, Si-Chee; Holben, Brent; Starr, David OC. (Technical Monitor)

    2002-01-01

    In the early 1990s, the first small, eye-safe, and autonomous lidar system was developed, the Micropulse Lidar (MPL). The MPL acquires signal profiles of backscattered laser light from aerosols and clouds. The signals are analyzed to yield multiple layer heights, optical depths of each layer, average extinction-to-backscatter ratios for each layer, and profiles of extinction in each layer. In 2000, several MPL sites were organized into a coordinated network, called MPL-Net, by the Cloud and Aerosol Lidar Group at NASA Goddard Space Flight Center (GSFC) using funding provided by the NASA Earth Observing System. In addition to the funding provided by NASA EOS, the NASA CERES Ground Validation Group supplied four MPL systems to the project, and the NASA TOMS group contributed their MPL for work at GSFC. The Atmospheric Radiation Measurement Program (ARM) also agreed to make their data available to the MPL-Net project for processing. In addition to the initial NASA and ARM operated sites, several other independent research groups have also expressed interest in joining the network using their own instruments. Finally, a limited amount of EOS funding was set aside to participate in various field experiments each year. The NASA Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) project also provides funds to deploy their MPL during ocean research cruises. Altogether, the MPL-Net project has participated in four major field experiments since 2000. Most MPL-Net sites and field experiment locations are also co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Therefore, at these locations data are collected on both aerosol and cloud vertical structure as well as column optical depth and sky radiance. Real-time data products are now available from most MPL-Net sites. Our real-time products are generated at times of AERONET aerosol optical depth (AOD) measurements. The AERONET AOD is used as input to our

  1. Distribution of deciduous stands in villages located in coniferous forest landscapes in Sweden.

    PubMed

    Mikusiński, Grzegorz; Angelstam, Per; Sporrong, Ulf

    2003-12-01

    Fire suppression, the active removal of deciduous trees in favor of conifers, and the anthropogenic transformation of productive forest into agricultural land have transformed northern European coniferous forests and reduced their deciduous component. Locally, however, in the villages, deciduous trees and stands were maintained, and have more recently regenerated on abandoned agricultural land. We hypothesize that the present distribution of the deciduous component is related to the village in-field/out-field zonation in different regions, which emerges from physical conditions and recent economic development expressed as land-use change. We analyzed the spatial distribution of deciduous stands in the in-field and out-field zones of villages in 6 boreal/hemiboreal Swedish regions (Norrbotten, Angermanland, Jämtland, Dalarna, Bergslagen, Småland). In each region, 6 individual 5 x 5 km quadrats centered on village areas were selected. We found significant regional differences in the deciduous component (DEC) in different village zones. At the village scale, Angermanland had the highest mean proportion of DEC (17%) and Jämtland the lowest (2%). However, the amount of DEC varied systematically between the in-field and out-field zones. DEC was highest in the in-field in the south (Småland), but generally low further north. By contrast, the amount of DEC in the out-field was highest in the north. The relative amount of DEC in the forest edge peaked in landscapes with the strongest decline in active agriculture (Angermanland, Dalarna, Bergslagen). Because former and present local villages are vital for biodiversity linked to the deciduous component, our results indicate a need for integrated management of deciduous forest within entire landscapes. This study shows that simplified satellite data are useful for estimating the spatial distribution of deciduous trees and stands at the landscape scale. However, for detailed studies better thematic resolution is

  2. Circumferential distribution and location of Mallory-Weiss tears: recent trends

    PubMed Central

    Okada, Mayumi; Ishimura, Norihisa; Shimura, Shino; Mikami, Hironobu; Okimoto, Eiko; Aimi, Masahito; Uno, Goichi; Oshima, Naoki; Yuki, Takafumi; Ishihara, Shunji; Kinoshita, Yoshikazu

    2015-01-01

    Background and study aims: Mallory-Weiss tears (MWTs) are not only a common cause of acute nonvariceal gastrointestinal bleeding but also an iatrogenic adverse event related to endoscopic procedures. However, changes in the clinical characteristics and endoscopic features of MWTs over the past decade have not been reported. The aim of this study was to investigate recent trends in the etiology and endoscopic features of MWTs. Patients and methods: We retrospectively reviewed the medical records of patients with a diagnosis of MWT at our university hospital between August 2003 and September 2013. The information regarding etiology, clinical parameters, endoscopic findings, therapeutic interventions, and outcome was reviewed. Results: A total of 190 patients with MWTs were evaluated. More than half (n = 100) of the cases occurred during endoscopic procedures; cases related to alcohol consumption were less frequent (n = 13). MWTs were most frequently located in the lesser curvature of the stomach and right lateral wall (2- to 4-o’clock position) of the esophagus, irrespective of the cause. The condition of more than 90% of the patients (n = 179) was improved by conservative or endoscopic treatment, whereas 11 patients (5.8%) required blood transfusion. Risk factors for blood transfusion were a longer laceration (odds ratio [OR] 2.3) and a location extending from the esophagus to the stomach (OR 5.3). Conclusions: MWTs were frequently found on the right lateral wall (2- to 4-o’clock position) of the esophagus aligned with the lesser curvature of the stomach, irrespective of etiology. Longer lacerations extending from the esophagus to the gastric cardia were associated with an elevated risk for bleeding and requirement for blood transfusion. PMID:26528495

  3. Optimal location through distributed algorithm to avoid energy hole in mobile sink WSNs.

    PubMed

    Qing-hua, Li; Wei-hua, Gui; Zhi-gang, Chen

    2014-01-01

    In a multihop data collection sensor network, nodes near the sink must relay data from remote nodes and thus dissipate energy much faster, suffering premature death. This phenomenon causes an energy hole near the sink, seriously damaging network performance. In this paper, we first compute, through theoretical analysis, the energy consumption of each node when the sink is placed at any point in the network; we then propose an online distributed algorithm that adaptively adjusts the sink position based on the actual energy consumption of each node to achieve the actual maximum lifetime. Theoretical analysis and experimental results show that the proposed algorithm significantly improves the lifetime of the wireless sensor network: the energy left unused in the network at the time of network death is reduced by more than 30%. Moreover, the cost of moving the sink is relatively small. PMID:24895668
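    The benefit of relocating the sink to balance relay load can be illustrated with a toy energy model in which traffic flows inward through annuli around the sink and inner-annulus nodes share the relay burden of everything farther out. This is not the paper's algorithm; the annulus model and all parameters are illustrative assumptions.

```python
import numpy as np

def lifetime_for_sink(nodes, sink, battery=1.0, hop=0.2):
    """Toy model: nodes in annulus k (width `hop`) around the sink share
    equally the relay traffic of all nodes farther out; per-packet cost is 1.
    Network lifetime = time until the most-loaded node drains its battery."""
    d = np.linalg.norm(nodes - sink, axis=1)
    k = (d / hop).astype(int)                 # annulus index of each node
    rates = np.empty(len(nodes))
    for ann in np.unique(k):
        members = k == ann
        farther = (k > ann).sum()             # packets relayed per round
        rates[members] = 1 + farther / members.sum()
    return battery / rates.max()

def best_sink(nodes, candidates):
    """Grid search over candidate sink positions for maximum lifetime."""
    lifetimes = [lifetime_for_sink(nodes, c) for c in candidates]
    return candidates[int(np.argmax(lifetimes))]
```

    Even this crude model reproduces the qualitative result: a centrally placed sink spreads the relay load over more inner-annulus nodes and postpones the first node death.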

  4. Arsenic distribution in soils and rye plants of a cropland located in an abandoned mining area.

    PubMed

    Álvarez-Ayuso, Esther; Abad-Valle, Patricia; Murciego, Ascensión; Villar-Alonso, Pedro

    2016-01-15

    A mining impacted cropland was studied in order to assess its As pollution level and the derived environmental and health risks. Profile soil samples (0-50 cm) and rye plant samples were collected at different distances (0-150 m) from the nearby mine dump and analyzed for their As content and distribution. These cropland soils were sandy, acidic and poor in organic matter and Fe/Al oxides. The soil total As concentrations (38-177 mg kg(-1)) and, especially, the soil soluble As concentrations (0.48-4.1 mg kg(-1)) greatly exceeded their safe limits for agricultural use of soils. Moreover, the soil As contents more prone to be mobilized could rise up to 25-69% of total As levels as determined using (NH4)2SO4, NH4H2PO4 and (NH4)2C2O4·H2O as sequential extractants. Arsenic in rye plants was primarily distributed in roots (3.4-18.8 mg kg(-1)), with restricted translocation to shoots (TF=0.05-0.26) and grains (TF=<0.02-0.14). The mechanism for this excluder behavior is likely arsenate reduction to arsenite in roots, followed by its complexation with thiols, as suggested by the high arsenite level in rye roots (up to 95% of the total As content) and the negative correlation between thiol concentrations in rye roots and As concentrations in rye shoots (|R|=0.770; p<0.01). Accordingly, in spite of the high mobile and mobilizable As contents in soils, As concentrations in rye above-ground tissues comply with the European regulation on undesirable substances in animal feed. Likewise, rye grain As concentrations were below the maximum tolerable concentration in cereals established by international legislation. PMID:26519583

  5. Noninvasive determination of the location and distribution of DNAPL using advanced seismic reflection techniques.

    PubMed

    Temples, T J; Waddell, M G; Domoracki, W J; Eyer, J

    2001-01-01

    Recent advances in seismic reflection amplitude analysis (e.g., amplitude versus offset-AVO, bright spot mapping) technology to directly detect the presence of subsurface DNAPL (e.g., CCl4) were applied to 216-Z-9 crib, 200 West Area, DOE Hanford Site, Washington. Modeling to determine what type of anomaly might be present was performed. Model results were incorporated in the interpretation of the seismic data to determine the location of any seismic amplitude anomalies associated with the presence of high concentrations of CCl4. Seismic reflection profiles were collected and analyzed for the presence of DNAPL. Structure contour maps of the contact between the Hanford fine unit and the Plio/Pleistocene unit and between the Plio/Pleistocene unit and the caliche layer were interpreted to determine potential DNAPL flow direction. Models indicate that the contact between the Plio/Pleistocene unit and the caliche should have a positive reflection coefficient. When high concentrations of CCl4 are present, the reflection coefficient of this interface displays a noticeable positive increase in the seismic amplitude (i.e., bright spot). Amplitude data contoured on the Plio/Pleistocene-caliche boundary display high values indicating the presence of DNAPL to the north and east of the crib area. The seismic data agree well with the well control in areas of high concentrations of CCl4. PMID:11341013
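    The bright-spot reasoning rests on the normal-incidence reflection coefficient, R = (Z2 - Z1)/(Z2 + Z1) with acoustic impedance Z = density x velocity: CCl4 filling the pore space lowers the velocity (and impedance) of the unit above the caliche, which raises R at that interface. A sketch; the velocity and density values below are hypothetical, not Hanford site data:

```python
def reflection_coefficient(v1, rho1, v2, rho2):
    """Normal-incidence reflection coefficient at the interface between an
    upper layer (velocity v1, density rho1) and a lower layer (v2, rho2)."""
    z1, z2 = rho1 * v1, rho2 * v2              # acoustic impedances
    return (z2 - z1) / (z2 + z1)

# Hypothetical values: dry vs. CCl4-saturated sediment over a faster,
# denser caliche layer; saturation lowers the overlying velocity.
r_clean = reflection_coefficient(600.0, 1.7, 1800.0, 2.0)
r_dnapl = reflection_coefficient(450.0, 1.7, 1800.0, 2.0)
```

    Lowering the overlying velocity increases the already-positive reflection coefficient, i.e., a brighter reflection where high CCl4 concentrations are present.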

  6. Impacts to the chest of PMHSs - Influence of impact location and load distribution on chest response.

    PubMed

    Holmqvist, Kristian; Svensson, Mats Y; Davidsson, Johan; Gutsche, Andreas; Tomasch, Ernst; Darok, Mario; Ravnik, Dean

    2016-02-01

    The chest response of the human body has been studied for several load conditions, but is not well known in the case of steering wheel rim-to-chest impact in heavy goods vehicle frontal collisions. The aim of this study was to determine the response of the human chest in a set of simulated steering wheel impacts. PMHS tests were carried out and analysed. The steering wheel load pattern was represented by a rigid pendulum with a straight bar-shaped front. A crash test dummy chest calibration pendulum was utilised for comparison. In this study, a set of rigid bar impacts were directed at various heights of the chest, spanning approximately 120 mm around the fourth intercostal space. The impact energy was set below a level estimated to cause rib fracture. The analysed results consist of responses evaluated, with respect to differences in impacting shape and impact height, on the compression and viscous criteria chest injury responses. The results showed that the bar impacts consistently produced smaller scaled chest compressions than the hub; the Middle bar responses were around 90% of the hub responses. A superior bar impact produced less chest compression; the average response was 86% of the Middle bar response. For inferior bar impacts, the chest compression response was 116% of the chest compression in the middle. The damping properties of the chest caused the compression in the high-speed bar impacts to decrease to 88% of that in low-speed impacts. From the analysis it could be concluded that the bar impact shape produces lower chest criteria responses compared to the hub. Further, the bar responses depend on the impact location on the chest. Inertial and viscous effects of the upper body affect the responses. The results can be used to assess the responses of human substitutes such as anthropomorphic test devices and finite element human body models, which will benefit the development process of heavy goods vehicle safety systems. PMID:26687541
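    The compression and viscous criteria referred to above have standard definitions computable from a chest deflection time history: C(t) is deflection normalized by chest depth, and VC(t) = V(t) * C(t) with V the deflection rate. A sketch; the default chest depth and the half-sine test trace are illustrative assumptions, not data from this study:

```python
import numpy as np

def chest_criteria(deflection_mm, dt, chest_depth_mm=230.0):
    """Return (Cmax, VCmax) from a deflection history sampled every dt
    seconds.  C(t) = deflection / chest depth; VC(t) = dD/dt [m/s] * C(t)."""
    d = np.asarray(deflection_mm, float)
    c = d / chest_depth_mm                     # compression ratio C(t)
    v = np.gradient(d, dt) / 1000.0            # deflection rate in m/s
    return c.max(), (v * c).max()
```

    Because VC weights deflection by its rate, it captures the speed dependence noted above (high-speed impacts load the chest's damping, not just its stiffness).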

  7. Drop size distributions and related properties of fog for five locations measured from aircraft

    NASA Technical Reports Server (NTRS)

    Zak, J. Allen

    1994-01-01

    Fog drop size distributions were collected from aircraft as part of the Synthetic Vision Technology Demonstration Program. Three west coast marine advection fogs, one frontal fog, and a radiation fog were sampled from the top of the cloud to the bottom as the aircraft descended on a 3-degree glideslope. Drop size versus altitude versus concentration are shown in three-dimensional plots for each 10-meter altitude interval from 1-minute samples. Also shown are median volume radius and liquid water content. Advection fogs contained the largest drops with median volume radius of 5-8 micrometers, although the drop sizes in the radiation fog were also large just above the runway surface. Liquid water content increased with height, and the total number of drops generally increased with time. Multimodal variations in number density and particle size were noted in most samples, where there was a peak concentration of small drops (2-5 micrometers) at low altitudes, a midaltitude peak of drops 5-11 micrometers, and a high-altitude peak of the larger drops (11-15 micrometers and above). These observations are compared with others and corroborate previous results in fog gross properties, although there is considerable variation with time and altitude even in the same type of fog.
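    Median volume radius and liquid water content follow directly from a binned drop size distribution; a minimal sketch, with bin radii and concentrations chosen for illustration (not data from this program):

```python
import numpy as np

RHO_W = 1.0e6   # density of liquid water, g m^-3

def fog_bulk_properties(radii_um, conc_cm3):
    """Liquid water content (g m^-3) and median volume radius (um) from
    binned drop radii (micrometers) and number concentrations (cm^-3)."""
    r_m = np.asarray(radii_um, float) * 1e-6       # radii in meters
    n_m3 = np.asarray(conc_cm3, float) * 1e6       # drops per m^3 of air
    vol = (4.0 / 3.0) * np.pi * r_m ** 3 * n_m3    # water volume per bin
    lwc = RHO_W * vol.sum()
    cum = np.cumsum(vol) / vol.sum()               # cumulative volume fraction
    mvr = np.interp(0.5, cum, np.asarray(radii_um, float))
    return lwc, mvr
```

    The median volume radius is the radius that splits the liquid water volume in half, which is why a few large drops pull it well above the most numerous small-drop mode.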

  8. Estimation of hydrothermal deposits location from magnetization distribution and magnetic properties in the North Fiji Basin

    NASA Astrophysics Data System (ADS)

    Choi, S.; Kim, C.; Park, C.; Kim, H.

    2013-12-01

    The North Fiji Basin is one of the youngest back-arc basins in the southwest Pacific (opened from 12 Ma ago). We performed a marine magnetic and bathymetry survey in the North Fiji Basin in April 2012 to search for submarine hydrothermal deposits. We acquired magnetic and bathymetry datasets using a Multi-Beam Echo Sounder EM120 (Kongsberg Co.) and an Overhauser Proton Magnetometer SeaSPY (Marine Magnetics Co.). Data processing yielded detailed seabed topography, magnetic anomaly, reduction to the pole (RTP), analytic signal, and magnetization. The study area comprises two areas in the Central Spreading Ridge (CSR), KF-1 (longitude 173.5-173.7, latitude -16.2 to -16.5) and KF-3 (longitude 173.4-173.6, latitude -18.7 to -19.1), and one area at the Triple Junction (TJ), KF-2 (longitude 173.7-174, latitude -16.8 to -17.2). The seabed topography of KF-1 shows a thin horst between two grabens trending in the NW-SE direction. The magnetic properties of KF-1 show high magnetic anomalies in the central part and a magnetic lineament structure trending E-W. In the magnetization distribution of KF-1, the low magnetization zone matches well with a strong analytic signal in the northeastern part. The KF-2 area contains the TJ; its seabed topography is Y-shaped, with a topographic high in the center of the TJ. The magnetic properties of KF-2 show high magnetic anomalies in the N-S spreading ridge center and the northwestern part. In the magnetization distribution of KF-2, the low magnetization zone matches well with a strong analytic signal in the northeastern part. The seabed topography of KF-3 presents a flat, high, dome-like structure at the center axis, with some seamounts scattered around the axis. The magnetic properties of KF-3 show high magnetic anomalies in the central part of the N-S spreading ridge. In the magnetization distribution of KF-3, the low magnetization zone does not match the strong analytic signal in this area. The difference of KF-3

  9. Characteristics of size distributions at urban and rural locations in New York

    NASA Astrophysics Data System (ADS)

    Bae, M.-S.; Schwab, J. J.; Hogrefe, O.; Frank, B. P.; Lala, G. G.; Demerjian, K. L.

    2010-01-01

    Paired nano- and long-tube Scanning Mobility Particle Sizer (SMPS) systems were operated for four different intensive field campaigns in New York State. Two of these campaigns were at Queens College in New York City, during the summer of 2001 and the winter of 2004. The other field campaigns were at rural sites in New York State. The data with the computed diffusion loss corrections for the sampling lines and the SMPS instruments were examined and the combined SMPS data sets for each campaign were obtained. The diffusion corrections significantly affect total number concentrations, and in New York City, affect the mode structure of the size distributions. The relationship between merged and integrated SMPS total number concentrations with the diffusion loss corrections and the CPC number concentrations yield statistically significant increases (closer to 1) in the slope and correlation coefficient compared to the uncorrected values. The measurements are compared to PM2.5 mass concentrations and ion balance indications of aerosol acidity. Periods of low observed PM2.5 mass, high number concentration, and low median diameter due to small fresh particles are associated with primary emissions for the urban sites; and with particle nucleation and growth for the rural sites. The observations of high PM2.5 mass, lower number concentrations, and higher median diameter are mainly due to an enhancement of coagulation and/or condensation processes in relatively aged air. There are statistically different values for the condensation sink (CS) between urban and rural areas. While there is good association (r²>0.5) between the condensation sink (CS) in the range of 8.35-283.9 nm and PM2.5 mass in the urban areas, there is no discernable association in the rural areas. The average (±standard deviation) of CS lies in the range 6.5(±3.3)×10⁻³ to 2.4(±0.9)×10⁻².

  10. Characteristics of size distributions at urban and rural locations in New York

    NASA Astrophysics Data System (ADS)

    Bae, M.-S.; Schwab, J. J.; Hogrefe, O.; Frank, B. P.; Lala, G. G.; Demerjian, K. L.

    2010-05-01

    Paired nano- and long-tube Scanning Mobility Particle Sizer (SMPS) systems were operated during four intensive field campaigns in New York State. Two of the campaigns were at Queens College in New York City, during the summer of 2001 and the winter of 2004; the other two were at rural sites in New York State. The data were examined with computed diffusion loss corrections for the sampling lines and the SMPS instruments, and combined SMPS data sets were obtained for each campaign. The diffusion corrections significantly affect total number concentrations and, in New York City, the mode structure of the size distributions. Comparing merged and integrated SMPS total number concentrations against CPC number concentrations, the diffusion loss corrections yield statistically significant increases (closer to 1) in the slope and correlation coefficient relative to the uncorrected values. The measurements are compared to PM2.5 mass concentrations and ion-balance indications of aerosol acidity. Analysis of particle growth rates in comparison with other observations can classify the events and illustrates that urban and rural new particle formation and growth have different causes. Periods of low observed PM2.5 mass, high number concentration, and low median diameter due to small fresh particles are associated with primary emissions at the urban sites, and with particle nucleation and growth at the rural sites. Observations of high PM2.5 mass, lower number concentrations, and higher median diameter are mainly due to enhanced photochemical reactions leading to condensation processes in relatively aged air. The condensation sink (CS) values differ statistically between urban and rural areas: while there is a good association (r² > 0.5) between the CS over the 8.35-283.9 nm range and PM2.5 mass in the urban areas, there is no discernible association in the rural areas.
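
    The condensation sink referred to above can be computed directly from a measured size distribution. A minimal sketch, assuming the standard Fuchs-Sutugin transition-regime correction, an H2SO4-like vapour diffusion coefficient and mean free path, and an invented two-bin distribution (none of these values come from the study):

```python
import numpy as np

# Condensation sink (CS) of a condensing vapour onto an aerosol size
# distribution: CS = 2*pi*D * sum(beta_i * d_i * N_i), in s^-1.
# D_VAPOUR and MFP are approximate H2SO4-in-air values; the two size bins
# below are illustrative, not data from the campaigns.

D_VAPOUR = 7.7e-6      # vapour diffusion coefficient, m^2 s^-1 (approx.)
MFP = 1.0e-7           # vapour mean free path in air, m (approx.)

def condensation_sink(diameters_m, number_conc_m3, alpha=1.0):
    """Condensation sink in s^-1 with a Fuchs-Sutugin correction."""
    d = np.asarray(diameters_m, dtype=float)
    n = np.asarray(number_conc_m3, dtype=float)
    kn = 2.0 * MFP / d                      # Knudsen number per size bin
    beta = (1.0 + kn) / (1.0 + (4.0 / (3.0 * alpha) + 0.377) * kn
                         + 4.0 / (3.0 * alpha) * kn ** 2)
    return 2.0 * np.pi * D_VAPOUR * np.sum(beta * d * n)

# Example: a crude two-bin urban distribution (100 nm and 500 nm particles)
cs = condensation_sink([100e-9, 500e-9], [1.0e10, 1.0e9])
print(f"CS = {cs:.2e} s^-1")
```

    Doubling the number concentrations doubles the sink, which is the sense in which CS tracks the aerosol loading discussed in the abstract.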

  11. Using Distributed Temperature Sensing to Locate and Quantify Thermal Refugia: Insights Into Radiative & Hydrologic Processes

    NASA Astrophysics Data System (ADS)

    Bond, R. M.; Stubblefield, A. P.

    2012-12-01

    Stream temperature plays a critical role in determining the overall structure and function of stream ecosystems. Aquatic fauna are particularly vulnerable to projected increases in the magnitude and duration of elevated stream temperatures from global climate change. Northern California cold-water salmon and trout fisheries have been declared thermally impacted by the California State Water Resources Control Board. This study employed Distributed Temperature Sensing (DTS) to detect stream heating and cooling at one-meter resolution along a one-kilometer section of the North Fork of the Salmon River, a tributary of the Klamath River in northern California, USA. The Salmon River has an extensive legacy of hydraulic gold-mining tailings, which have been reworked into large gravel bars, creating shallow, wide runs, possibly filling in pools, and disrupting riparian vegetation recruitment. Eight days of temperature data were collected at 15-minute intervals during July 2012, and three remote weather stations were deployed during the study period. The main objectives of this research were: (1) to quantify the thermal inputs that create and maintain thermal refugia for cold-water fishes; (2) to investigate the role of riparian and topographic shading in buffering peak summer temperatures; and (3) to create and validate a physically based stream heating model to predict the effects of riparian management, drought, and climate change on stream temperature. DTS was used to spatially identify cold-water seeps and quantify their contribution to the stream's thermal regime; along the one-kilometer reach, hyporheic flow was also identified using DTS. The spring was between 16 and 18 °C, while the peak mainstem temperature above the spring reached a maximum of 23 °C. The study found a diel heating cycle of 5 °C with a Maximum Weekly Average Temperature (MWAT) of over 22 °C, exceeding the salmon and trout protective temperature standards set by USEPA Region 10. Twenty intensive fish counts over five days were

  12. What influences national and foreign physicians’ geographic distribution? An analysis of medical doctors’ residence location in Portugal

    PubMed Central

    2012-01-01

    Background The debate over physicians' geographical distribution has attracted the attention of the economic and public health literature over the last forty years. Nonetheless, it is still unclear what influences physicians' location, and whether foreign physicians contribute to filling the geographical gaps left by national doctors in any given country. The present research investigates the current distribution of national and international physicians in Portugal, with the objective of understanding its determinants and providing an evidence base for policy-makers seeking to influence it. Methods A cross-sectional study of physicians currently registered in Portugal was conducted to describe the population and explore the association of physician residence patterns with relevant personal and municipality characteristics. Data from the Portuguese Medical Council on physicians' residence and characteristics were analysed, as well as data from the National Institute of Statistics on municipalities' population, living standards and health care network. Descriptive statistics, chi-square tests, and negative binomial and logistic regression modelling were applied to determine: (a) the municipality characteristics predicting Portuguese and international physicians' geographical distribution, and (b) the doctors' characteristics that could increase the odds of residing outside the country's metropolitan areas. Results There were 39,473 physicians in Portugal in 2008, 51.1% of whom were male, and 40.2% between 41 and 55 years of age. They were predominantly Portuguese (90.5%), with Spanish, Brazilian and African nationalities also represented. Population, the population's purchasing power, nurses per capita and the Municipality Development Index (MDI) were the municipality characteristics displaying the strongest association with national physicians' location. For foreign physicians, the MDI was not statistically significant, while municipalities

  13. Dip distribution of Oita-Kumamoto Tectonic Line located in central Kyushu, Japan, estimated by eigenvectors of gravity gradient tensor

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2016-09-01

    We estimated the dip distribution of the Oita-Kumamoto Tectonic Line, located in central Kyushu, Japan, using the dip of the maximum eigenvector of the gravity gradient tensor. A series of earthquakes in Kumamoto and Oita beginning on 14 April 2016 occurred along this tectonic line, the largest of which was M = 7.3. Because a gravity gradiometry survey has not been conducted in the study area, we calculated the gravity gradient tensor from the Bouguer gravity anomaly and used it in the analysis. The general dip of the Oita-Kumamoto Tectonic Line was found to be about 65° and tends to increase towards its eastern end. In addition, we estimated the dip around the largest earthquake to be about 60° from the gravity gradient tensor. This result agrees with the dip of the earthquake source fault obtained by Global Navigation Satellite System data analysis.
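
    The core of the method is an eigendecomposition of a symmetric 3x3 tensor followed by a dip angle read off the dominant eigenvector. A minimal sketch, using a synthetic tensor rather than one derived from the Bouguer anomaly:

```python
import numpy as np

# Dip of the eigenvector belonging to the largest eigenvalue of a symmetric
# gravity gradient tensor. The tensor below is constructed so that its
# largest principal axis dips 60 degrees; real tensors would come from
# differentiating the Bouguer gravity anomaly.

def dip_of_max_eigenvector(tensor):
    """Dip (degrees from horizontal) of the maximum eigenvector."""
    _, vecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
    v = vecs[:, -1]                    # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(abs(v[2]), np.hypot(v[0], v[1])))

# Synthetic tensor: dominant axis at 60 degrees in the x-z plane
c, s = np.cos(np.radians(60.0)), np.sin(np.radians(60.0))
axis = np.array([c, 0.0, s])           # x, y horizontal; z vertical
T = 10.0 * np.outer(axis, axis) + np.eye(3)
print(f"dip = {dip_of_max_eigenvector(T):.1f} deg")   # -> 60.0 deg
```

    The absolute value on the vertical component makes the dip insensitive to the arbitrary sign of the eigenvector.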

  14. Levels and spatial distribution of airborne chemical elements in a heavy industrial area located in the north of Spain.

    PubMed

    Lage, J; Almeida, S M; Reis, M A; Chaves, P C; Ribeiro, T; Garcia, S; Faria, J P; Fernández, B G; Wolterbeek, H T

    2014-01-01

    The adverse health effects of airborne particles have been subjected to intense investigation in recent years; however, more studies on the chemical characterization of particles from pollution emissions are needed to (1) identify emission sources, (2) better understand the relative toxicity of particles, and (3) pinpoint more targeted emission control strategies and regulations. The main objective of this study was to assess the levels and spatial distribution of airborne chemical elements in a heavy industrial area located in the north of Spain. Instrumental and biomonitoring techniques were integrated and analytical methods for k0 instrumental neutron activation analysis and particle-induced x-ray emission were used to determine element content in aerosol filters and lichens. Results indicated that in general local industry contributed to the emissions of As, Sb, Cu, V, and Ni, which are associated with combustion processes. In addition, the steelwork emitted significant quantities of Fe and Mn and the cement factory was associated with Ca emissions. The spatial distribution of Zn and Al also indicated an important contribution of two industries located outside the studied area. PMID:25072718

  15. Mechanics of the Compression Wood Response: II. On the Location, Action, and Distribution of Compression Wood Formation.

    PubMed

    Archer, R R; Wilson, B F

    1973-04-01

    A new method for simulation of cross-sectional growth provided detailed information on the location of normal wood and compression wood increments in two tilted white pine (Pinus strobus L.) leaders. These data were combined with data on stiffness, slope, and curvature changes over a 16-week period to make the mechanical analysis. The location of compression wood changed from the under side to a flank side and then to the upper side of the leader as the geotropic stimulus decreased, owing to compression wood action. Its location shifted back to a flank side when the direction of movement of the leader reversed. A model for this action, based on elongation strains, was developed and predicted the observed curvature changes with elongation strains of 0.3 to 0.5%, or a maximal compressive stress of 60 to 300 kilograms per square centimeter. After tilting, new wood formation was distributed so as to maintain consistent strain levels along the leaders in bending under gravitational loads. The computed effective elastic moduli were about the same for the two leaders throughout the season. PMID:16658408

  16. Verification of patient-specific dose distributions in proton therapy using a commercial two-dimensional ion chamber array

    SciTech Connect

    Arjomandy, Bijan; Sahoo, Narayan; Ciangaru, George; Zhu, Ronald; Song Xiaofei; Gillin, Michael

    2010-11-15

    Purpose: The purpose of this study was to determine whether a two-dimensional (2D) ion chamber array detector quickly and accurately measures patient-specific dose distributions in treatment with passively scattered and spot scanning proton beams. Methods: The 2D ion chamber array detector MatriXX was used to measure the dose distributions in a plastic water phantom from passively scattered and spot scanning proton beam fields planned for patient treatment. Planar dose distributions were measured using MatriXX, and the distributions were compared to those calculated using a treatment-planning system. The dose distributions generated by the treatment-planning system and a film dosimetry system were similarly compared. Results: For passively scattered proton beams, the gamma index for the dose-distribution comparison for treatment fields for three patients with prostate cancer and one patient with lung cancer was less than 1.0 for 99% and 100% of pixels, respectively, using a 3% dose tolerance and 3 mm distance-to-agreement. For spot scanning beams, the mean (± standard deviation) percentages of pixels with gamma indices meeting the passing criteria were 97.1% ± 1.4% and 98.8% ± 1.4% for MatriXX and film dosimetry, respectively, for 20 fields used to treat patients with prostate cancer. Conclusions: Unlike film dosimetry, MatriXX provides not only 2D dose-distribution information but also absolute dosimetry in fractions of minutes with acceptable accuracy. The results of this study indicate that MatriXX can be used to verify patient-field-specific dose distributions in proton therapy.
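
    The 3%/3 mm gamma comparison quoted above can be sketched in one dimension. The profiles below are synthetic, and the brute-force search over all comparison points is a simplification of clinical gamma implementations:

```python
import numpy as np

# Minimal 1D gamma index (global normalisation, 3% dose / 3 mm distance).
# gamma(i) = min over comparison points of
#            sqrt((dose difference / 3% of max)^2 + (distance / 3 mm)^2)

def gamma_1d(x_mm, measured, calculated, dose_tol=0.03, dist_tol_mm=3.0):
    """Gamma value at each measurement point."""
    d_max = calculated.max()
    gammas = np.empty_like(measured)
    for i, (xm, dm) in enumerate(zip(x_mm, measured)):
        dose_diff = (dm - calculated) / (dose_tol * d_max)
        dist = (xm - x_mm) / dist_tol_mm
        gammas[i] = np.sqrt(dose_diff ** 2 + dist ** 2).min()
    return gammas

x = np.linspace(0.0, 100.0, 101)           # positions in mm
calc = np.exp(-((x - 50.0) / 20.0) ** 2)   # synthetic calculated profile
meas = calc * 1.01                         # measured: 1% global offset
g = gamma_1d(x, meas, calc)
pass_rate = 100.0 * np.mean(g <= 1.0)
print(f"pass rate = {pass_rate:.1f}%")
```

    A uniform 1% dose offset stays well inside the 3% tolerance, so every point passes (gamma <= 1).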

  17. Experimental Verification of Application of Looped System and Centralized Voltage Control in a Distribution System with Renewable Energy Sources

    NASA Astrophysics Data System (ADS)

    Hanai, Yuji; Hayashi, Yasuhiro; Matsuki, Junya

    Line voltage control in a distribution network is one of the most important issues for the penetration of Renewable Energy Sources (RES). A loop distribution network configuration is an effective solution to the voltage and distribution-loss issues associated with RES penetration. In this paper, the authors propose a voltage control method for a loop distribution network based on tap change control of the LRT and active/reactive power control of the RES. Tap change control of the LRT plays the major role in the proposed scheme, while active/reactive power control of the RES supports it when deviation beyond the upper or lower voltage limit is otherwise unavoidable. The proposed method adopts a SCADA system based on data measured by IT switches, which are sectionalizing switches with sensors installed in the distribution feeder. To check the validity of the proposed voltage control method, experimental simulations are carried out using the distribution system analog simulator “ANSWER”. The simulations evaluate the voltage maintenance capability under both normal and emergency conditions.

  18. ECOLOGICAL STUDIES AND MATHEMATICAL MODELING OF 'CLADOPHORA' IN LAKE HURON: 7. MODEL VERIFICATION AND SYSTEM RESPONSE

    EPA Science Inventory

    This manuscript describes the verification of a calibrated mathematical model designed to predict the spatial and temporal distribution of Cladophora about a point source of nutrients. The study site was located at Harbor Beach, Michigan, on Lake Huron. The model is intended to h...

  19. Modes in the size distributions and neutralization extent of fog-processed ammonium salt aerosols observed at Canadian rural locations

    NASA Astrophysics Data System (ADS)

    Yao, X. H.; Zhang, L.

    2012-02-01

    Among the 192 samples of size-segregated water-soluble inorganic ions collected using a Micro-Orifice Uniform Deposit Impactor (MOUDI) at eight rural locations in Canada, ten samples were identified as having gone through fog processing. Supermicron particle modes of ammonium salt aerosols were found to be the fingerprint of fog-processed aerosols. However, the patterns and sizes of the supermicron modes varied with ambient temperature (T) and particle acidity, and also differed between inland and coastal locations. Under T > 0 °C conditions, fog-processed ammonium salt aerosols were completely neutralized, with a dominant mode at 1-2 μm and a minor mode at 5-10 μm, when particles were neutral; when particles were acidic, ammonium sulfate was incompletely neutralized and showed only a 1-2 μm mode. Under T < 0 °C at the coastal site, fog-processed aerosols exhibited a bimodal size distribution with a dominant mode of incompletely neutralized ammonium sulfate at about 3 μm and a minor mode of completely neutralized ammonium sulfate at 8-9 μm. Under T < 0 °C at the inland sites, fog-processed ammonium salt aerosols were sometimes completely and sometimes incompletely neutralized, and the supermicron mode size ranged from 1 to 5 μm. Overall, fog-processed ammonium salt aerosols under T < 0 °C conditions were generally distributed at larger sizes (e.g., 2-5 μm) than those under T > 0 °C conditions (e.g., 1-2 μm).

  20. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
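
    The extrapolation rests on combining conditional fade statistics with a long-term rain-rate distribution: P(fade > A) is assembled as a sum of P(fade > A | rain rate) weighted by the probability of each rain-rate class. A toy sketch, in which the rain classes, their probabilities, and the exponential conditional model are all invented for illustration:

```python
import numpy as np

# Absolute fade exceedance from conditional fade statistics:
#   P(fade > A) = sum over rain classes R of P(fade > A | R) * P(R)
# All numbers below are illustrative, not from the radar data base.

rain_rates = np.array([5.0, 20.0, 50.0])   # rain-rate classes, mm/h
p_rain = np.array([0.05, 0.01, 0.002])     # long-term P(rain-rate class)

def p_fade_given_rain(fade_db, rain_mmh):
    """Hypothetical conditional exceedance: heavier rain, deeper fades."""
    scale = 0.2 * rain_mmh                 # assumed fade scale, dB
    return np.exp(-fade_db / scale)

def p_fade_exceeded(fade_db):
    """Total (absolute) probability that the fade exceeds fade_db."""
    return sum(p_r * p_fade_given_rain(fade_db, r)
               for r, p_r in zip(rain_rates, p_rain))

for a in (3.0, 10.0):
    print(f"P(fade > {a:4.1f} dB) = {p_fade_exceeded(a):.5f}")
```

    Extrapolating to another site then amounts to keeping the conditional term and swapping in that site's long-term rain-rate probabilities.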

  1. Implementation of a novel double-side technique for partial discharge detection and location in covered conductor overhead distribution networks

    NASA Astrophysics Data System (ADS)

    He, Weisheng; Li, Hongjie; Liang, Deliang; Sun, Haojie; Yang, Chenbo; Wei, Jinqu; Yuan, Zhijian

    2015-12-01

    Partial discharge (PD) detection has proven to be one of the most acceptable techniques for on-line condition monitoring and predictive maintenance of power apparatus. A powerful tool for detecting PD in covered-conductor (CC) lines is urgently needed to improve the asset management of CC overhead distribution lines. In this paper, an appropriate, portable and simple system designed to detect PD activity in CC lines and ultimately pinpoint the PD source is developed and tested. The system is based on a novel double-side synchronised PD measurement technique driven by pulse injection. Emphasis is placed on the proposed PD-location mechanism and hardware structure, with descriptions of the pulse-injection process, detection device, synchronisation principle and PD-location algorithm. The system is simulated using ATP-EMTP, and the simulated results are found to be consistent with the actual simulation layout. For further validation, the capability of the system is tested in a high-voltage laboratory experiment using a 10-kV CC line with cross-linked polyethylene insulation.
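
    The double-side synchronised location principle reduces, in its simplest form, to a time-difference-of-arrival calculation between the two line ends. A minimal sketch, with propagation velocity and line length as assumed illustrative values rather than parameters from the paper:

```python
# Double-side PD location: synchronised detectors at ends A and B record the
# arrival times of the same discharge pulse; the difference fixes the source
# position along the line.

V = 1.8e8     # assumed pulse propagation velocity on the CC line, m/s
L = 1000.0    # assumed line length between the two detectors, m

def locate_pd(delta_t):
    """Distance (m) of the PD source from end A, given dt = t_A - t_B (s)."""
    # t_A = x / V and t_B = (L - x) / V  =>  x = (L + V * dt) / 2
    return (L + V * delta_t) / 2.0

# Synthetic check: a discharge 300 m from end A
x_true = 300.0
t_a, t_b = x_true / V, (L - x_true) / V
print(f"located at {locate_pd(t_a - t_b):.1f} m")   # -> 300.0 m
```

    The pulse-injection step described in the paper serves, among other things, to synchronise the two ends so that delta_t is meaningful.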

  2. SU-D-BRF-02: In Situ Verification of Radiation Therapy Dose Distributions From High-Energy X-Rays Using PET Imaging

    SciTech Connect

    Zhang, Q; Kai, L; Wang, X; Hua, B; Chui, L; Wang, Q; Ma, C

    2014-06-01

    Purpose: To study the possibility of in situ verification of radiation therapy dose distributions using PET imaging, based on the activity distribution of 11C and 15O produced via photonuclear reactions in patients irradiated by 45 MV x-rays. Methods: The method is based on photonuclear reactions in 12C and 16O, the most abundant elemental constituents of body tissues, irradiated by bremsstrahlung photons with energies up to 45 MeV, yielding primarily 11C and 15O, which are positron-emitting nuclei. The induced positron activity distributions were obtained with a PET scanner in the same room as a LA45 accelerator (Top Grade Medical, Beijing, China). The experiments were performed with a brain phantom using realistic treatment plans. The phantom was scanned at 20 min and at 2-5 min after irradiation for 11C and 15O, respectively, with an interval of 20 minutes between the two scans. The activity distributions of 11C and 15O within the irradiated volume can be separated from each other because their half-lives are 20 min and 2 min, respectively. Three x-ray energies were used: 10 MV, 25 MV and 45 MV. The radiation dose ranged from 1.0 Gy to 10.0 Gy per treatment. Results: It was confirmed that no activity was detected at 10 MV beam energy, which is far below the energy threshold for photonuclear reactions. At 25 MV, activity distribution images were observed on PET, but a much higher radiation dose was needed to obtain good image quality. For 45 MV photon beams, good-quality activation images were obtained with a 2-3 Gy radiation dose, which is a typical daily dose for radiation therapy. Conclusion: The activity distributions of 15O and 11C could be used to derive the dose distribution of 45 MV x-rays at the regular daily dose level. This method can potentially be used to verify in situ dose distributions for patients treated on the LA45 accelerator.
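
    The half-life separation amounts to fitting a two-exponential decay model to time-separated scans; with one measurement per isotope time scale, it reduces to a 2x2 linear solve. A minimal sketch with approximate half-lives and a synthetic voxel (scan times and activity values are illustrative):

```python
import numpy as np

# Separate 11C and 15O contributions from two time-separated activity
# measurements of the same voxel:
#   A(t) = C * exp(-lam_C * t) + O * exp(-lam_O * t)
# Half-lives are approximate literature values (minutes).

HALF_LIFE_C11 = 20.4
HALF_LIFE_O15 = 2.04
LAM_C = np.log(2.0) / HALF_LIFE_C11
LAM_O = np.log(2.0) / HALF_LIFE_O15

def separate_activities(t1, a1, t2, a2):
    """Solve for the end-of-irradiation activities (C, O) of 11C and 15O."""
    m = np.array([[np.exp(-LAM_C * t1), np.exp(-LAM_O * t1)],
                  [np.exp(-LAM_C * t2), np.exp(-LAM_O * t2)]])
    return np.linalg.solve(m, np.array([a1, a2]))

# Synthetic voxel: 100 units of 11C and 300 units of 15O at t = 0
c0, o0 = 100.0, 300.0
def total(t):
    return c0 * np.exp(-LAM_C * t) + o0 * np.exp(-LAM_O * t)

# Early scan (15O still alive) and late scan (essentially pure 11C)
c_est, o_est = separate_activities(3.0, total(3.0), 25.0, total(25.0))
print(f"11C = {c_est:.1f}, 15O = {o_est:.1f}")   # recovers 100.0 and 300.0
```

    By 25 minutes the 15O term has decayed through more than ten half-lives, which is why the late scan isolates 11C so cleanly.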

  3. Frequency Distribution of Second Solid Cancer Locations in Relation to the Irradiated Volume Among 115 Patients Treated for Childhood Cancer

    SciTech Connect

    Diallo, Ibrahima; Haddy, Nadia; Adjadj, Elisabeth; Samand, Akhtar; Quiniou, Eric; Chavaudra, Jean; Alziar, Iannis; Perret, Nathalie; Guerin, Sylvie; Lefkopoulos, Dimitri; Vathaire, Florent de

    2009-07-01

    Purpose: To provide better estimates of the frequency distribution of second malignant neoplasm (SMN) sites in relation to previously irradiated volumes, and better estimates of the doses delivered to these sites during radiotherapy (RT) of the first malignant neoplasm (FMN). Methods and Materials: The study focused on 115 patients who developed a solid SMN among a cohort of 4581 individuals. The homemade software package Dos_EG was used to estimate the radiation doses delivered to SMN sites during RT of the FMN. Three-dimensional geometry was used to evaluate the distances between the irradiated volume, for RT delivered to each FMN, and the site of the subsequent SMN. Results: The spatial distribution of SMN relative to the irradiated volumes in our cohort was as follows: 12% in the central area of the irradiated volume, which corresponds to the planning target volume (PTV), 66% in the beam-bordering region (i.e., the area surrounding the PTV), and 22% in regions located more than 5 cm from the irradiated volume. At the SMN site, all dose levels ranging from almost zero to >75 Gy were represented. A peak SMN frequency of approximately 31% was identified in volumes that received <2.5 Gy. Conclusion: A greater volume of tissues receives low or intermediate doses in regions bordering the irradiated volume with modern multiple-beam RT arrangements. These results should be considered for risk-benefit evaluations of RT.

  4. Exomars Mission Verification Approach

    NASA Astrophysics Data System (ADS)

    Cassi, Carlo; Gilardi, Franco; Bethge, Boris

    According to the long-term cooperation plan established by ESA and NASA in June 2009, the ExoMars project now consists of two missions. The first mission will be launched in 2016 under ESA lead, with the objectives to demonstrate the European capability to safely land a surface package on Mars, to perform Mars atmosphere investigation, and to provide communication capability for present and future ESA/NASA missions. For this mission ESA provides a spacecraft composite, made up of an Entry Descent & Landing Demonstrator Module (EDM) and a Mars Orbiter Module (OM); NASA provides the launch vehicle and the scientific instruments located on the Orbiter for Mars atmosphere characterisation. The second mission, with its launch foreseen in 2018, is led by NASA, who provides the spacecraft, the launcher, the EDL system, and a rover. ESA contributes the ExoMars Rover Module (RM) to provide surface mobility. It includes a drill system allowing drilling down to 2 metres, collecting samples and investigating them for signs of past and present life with exobiological experiments, and investigating the Mars water/geochemical environment. In this scenario Thales Alenia Space Italia, as ESA prime industrial contractor, is in charge of the design, manufacturing, integration and verification of the ESA ExoMars modules, i.e. the spacecraft composite (OM + EDM) for the 2016 mission, the RM for the 2018 mission and the Rover Operations Control Centre, which will be located at Altec-Turin (Italy). The verification process of the above products is quite complex and will include some peculiarities with limited or no heritage in Europe. Furthermore, the verification approach has to be optimised to allow full verification despite significant schedule and budget constraints. The paper presents the verification philosophy tailored for the ExoMars mission in line with the above considerations, starting from the model philosophy, showing the verification activities flow and the sharing of tests

  5. Spatially distributed energy balance snowmelt modeling in a mountainous river basin: estimation of meteorological inputs and verification of model results

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A spatially distributed energy balance snowmelt model has been applied to a 2150 km2 drainage basin in the Boise River, ID, USA, to simulate the accumulation and melt of the snowpack for the years 1998–2000. The simulation was run at a 3 h time step and a spatial resolution of 250 m. Spatial field t...

  6. TESTING AND VERIFICATION OF REAL-TIME WATER QUALITY MONITORING SENSORS IN A DISTRIBUTION SYSTEM AGAINST INTRODUCED CONTAMINATION

    EPA Science Inventory

    Drinking water distribution systems reach the majority of American homes, business and civic areas, and are therefore an attractive target for terrorist attack via direct contamination, or backflow events. Instrumental monitoring of such systems may be used to signal the prese...

  7. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Koch, Nicholas C.; Newhauser, Wayne D.

    2010-02-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.

  8. Atmospheric aerosols size distribution properties in winter and pre-monsoon over western Indian Thar Desert location

    NASA Astrophysics Data System (ADS)

    Panwar, Chhagan; Vyas, B. M.

    2016-05-01

    The first experimental results over the Indian Thar Desert region concerning the height-integrated aerosol size distribution for particle sizes from 0.09 to 2 µm, namely the aerosol columnar size distribution (CSD), effective radius (Reff), integrated total aerosol content (Nt), and columnar contents of accumulation-mode (Na, size < 0.5 µm) and coarse-mode (Nc, size 0.5 to 2 µm) particles, are described for winter (a period of stable weather and intense anthropogenic pollution) and the pre-monsoon (a period of intense dust storms of natural mineral aerosols and unstable atmospheric weather) at Jaisalmer (26.90°N, 69.90°E, 220 m asl), located in the central Thar Desert of western India. The CSD and the other derived aerosol size parameters are retrieved from the average spectral characteristics of the Aerosol Optical Thickness (AOT), measured from the UV to the infrared with a Multi-Wavelength solar Radiometer (MWR). The CSDs are, in general, bimodal in character rather than uniform or power-law distributions. The observed primary peaks in the CSD plots are about 10¹³ m² µm⁻¹ in the 0.09-0.20 µm radius range during both seasons. In the winter months, secondary peaks with relatively lower CSD values of 10¹⁰ to 10¹¹ m² µm⁻¹ occur within the 0.4 to 0.6 µm radius range. In contrast, in the dust-dominated hot season, a dominant secondary maximum with a higher CSD of about 10¹² m² µm⁻¹ is found for larger particles in the 0.6 to 1.0 µm range, clearly demonstrating the higher loading of larger aerosol particles in the summer months relative to the lower loading of smaller particles (0.4 to 0.6 µm) in the cold months. Several other interesting features of the changing nature of monthly spectral AOT
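
    The bimodal CSD character described above can be sketched as a sum of two lognormal modes. The mode radii echo the abstract (a fine mode near 0.1-0.2 µm and a coarse mode near 0.6-1.0 µm), while the amplitudes and geometric standard deviations are assumed purely for illustration:

```python
import numpy as np

# Bimodal lognormal columnar size distribution (CSD) sketch.
# dN/dln r = sum over modes of
#   N_i / (sqrt(2*pi) * ln(sigma_i)) * exp(-ln(r/r_i)^2 / (2 * ln(sigma_i)^2))

def lognormal_mode(r_um, n_col, r_mode_um, sigma_g):
    """dN/dln r for a single lognormal mode (column units)."""
    return (n_col / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-np.log(r_um / r_mode_um) ** 2
                     / (2.0 * np.log(sigma_g) ** 2)))

def bimodal_csd(r_um):
    fine = lognormal_mode(r_um, 1.0e13, 0.15, 1.6)    # accumulation mode
    coarse = lognormal_mode(r_um, 1.0e12, 0.8, 1.8)   # coarse (dust) mode
    return fine + coarse

# Integrating dN/dln r over ln r recovers (almost all of) the column content
ln_r = np.linspace(np.log(0.05), np.log(3.0), 2000)
total = np.sum(bimodal_csd(np.exp(ln_r))) * (ln_r[1] - ln_r[0])
print(f"total column content = {total:.3e}")
```

    Shifting mass from the fine mode to the coarse mode, as between winter and the pre-monsoon, changes the secondary peak while leaving the total column content comparable.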

  9. Sub-micron particle number size distributions characteristics at an urban location, Kanpur, in the Indo-Gangetic Plain

    NASA Astrophysics Data System (ADS)

    Kanawade, V. P.; Tripathi, S. N.; Bhattu, Deepika; Shamjad, P. M.

    2014-10-01

    We present long-term measurements of sub-micron particle number size distributions (PNSDs) conducted at an urban location, Kanpur, in India, from September 2007 to July 2011. The mean Aitken mode (NAIT), accumulation mode (NACCU), total particle (NTOT), and black carbon (BC) mass concentrations were 12.4 × 10³ cm⁻³, 18.9 × 10³ cm⁻³, 31.9 × 10³ cm⁻³, and 7.96 µg m⁻³, respectively, within the range observed at other urban locations worldwide, but much higher than values reported at urban sites in developed nations. The total particle volume concentration appears to be dominated mainly by accumulation mode particles, except during the monsoon months, perhaps due to efficient wet deposition of accumulation mode particles by precipitation. At Kanpur, the diurnal variation of particle number concentrations was very distinct, with the highest concentrations during morning and late evening hours and the lowest during afternoon hours. This behavior can be attributed to large primary emissions of aerosol particles and the temporal evolution of the planetary boundary layer. A distinct seasonal variation in the total particle number and BC mass concentrations was observed, with a maximum in winter and a minimum during the rainy season; the Aitken mode particles, however, did not show a clear seasonal fluctuation. The ratio of Aitken to accumulation mode particles, NAIT/NACCU, varied from 0.1 to 14.2, with maxima during the April to September months, probably reflecting the importance of new particle formation processes and subsequent particle growth. This finding suggests that dedicated long-term measurements of PNSDs (from a few nanometers to one micron) are required to systematically characterize new particle formation over the Indian subcontinent, which has been largely unstudied so far. Conversely, the low NAIT/NACCU during post-monsoon and winter indicated the dominance of biomass/biofuel-burning aerosol emissions at this site.

  10. Spatial patterns of Transit-Time Distributions using δ18O-isotope tracer simulations at ungauged river locations

    NASA Astrophysics Data System (ADS)

    Stockinger, Michael; Bogena, Heye; Lücke, Andreas; Diekkrüger, Bernd; Weiler, Markus; Vereecken, Harry

    2013-04-01

    Knowledge of catchment response times to precipitation forcing and of isotope tracer transit times can be used to characterize a catchment's hydrological behavior. The aim of this study was to use one gauging station together with multiple δ18O-isotope monitoring locations along the main stream to characterize the spatial heterogeneity of a catchment's hydrological behavior in terms of transit times. We present a method suitable for small catchments to estimate the Transit-Time Distribution (TTD) of precipitation to any stream point using δ18O tracer data, whether the stream point is gauged or ungauged. Hourly runoff and precipitation data were used to determine the effective precipitation under base flow conditions at Wüstebach (Eifel, Germany), a small, forested TERENO/TR32 test site. Modeling focused on base flow because of the weekly measurement intervals of δ18O. The modeling period of 2.5 years was split into six hydrological seasons, based on average soil water content, to ensure a good model fit. Given the small size of the Wüstebach catchment (27 ha), we assumed the derived effective precipitation to apply to the whole catchment. For subsequent modeling of stream water δ18O we used effective precipitation as the input variable, corrected in a two-step process for canopy evaporation and soil evaporation, and thus derived base flow TTDs for the ungauged stream and tributary locations. Results show different catchment response times under different wetness conditions with respect to base flow formation. The winter seasons show similar response times, as do the summer seasons, with the exception of one summer with a considerably higher response time. The transit times of water across the isotope observation points show that some points are more influenced by shallow source waters, while at other points a higher contribution of groundwater is observable.
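
    The tracer-modeling idea behind TTDs is a convolution: the stream δ18O signal is the effective-precipitation δ18O convolved with the transit-time distribution. A minimal sketch in which the gamma-shaped TTD and the synthetic weekly input are assumptions for illustration, not the study's calibrated model:

```python
import numpy as np

# Stream tracer signal as input signal convolved with a Transit-Time
# Distribution (TTD). A gamma-shaped TTD is a common parametric choice.

weeks = np.arange(104, dtype=float)        # two years of weekly steps

# Gamma-shaped TTD (shape a=2, scale 8 weeks, mean transit ~16 weeks)
a, scale = 2.0, 8.0
ttd = weeks ** (a - 1.0) * np.exp(-weeks / scale)
ttd /= ttd.sum()                           # normalise to unit mass

# Synthetic effective-precipitation delta-18O: seasonal cycle plus noise
rng = np.random.default_rng(0)
precip_d18o = (-9.0 + 3.0 * np.sin(2.0 * np.pi * weeks / 52.0)
               + rng.normal(0.0, 1.0, weeks.size))

# Stream signal = convolution of input with the TTD
stream_d18o = np.convolve(precip_d18o, ttd)[:weeks.size]

# After a spin-up period, the TTD damps the seasonal amplitude of the input
spin_up = 40
print(f"input std  = {precip_d18o[spin_up:].std():.2f}")
print(f"stream std = {stream_d18o[spin_up:].std():.2f}")
```

    The damping of the seasonal δ18O amplitude from precipitation to stream is exactly the signal that constrains the TTD parameters when the model is fitted to observed stream samples.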

  11. Simple Syringe Filtration Methods for Reliably Examining Dissolved and Colloidal Trace Element Distributions in Remote Field Locations

    NASA Astrophysics Data System (ADS)

    Shiller, A. M.

    2002-12-01

Methods for obtaining reliable dissolved trace element samples frequently utilize clean labs, portable laminar flow benches, or other equipment not readily transportable to remote locations. In some cases unfiltered samples can be obtained in a remote location and transported back to a lab for filtration, but this may not always be possible or desirable. Likewise, methods for obtaining information on colloidal composition are frequently too cumbersome and time-consuming for remote locations. For these reasons I have examined clean methods for collecting samples filtered through 0.45 and 0.02 micron syringe filters. With this methodology, only small samples are collected (typically 15 mL). However, with the introduction of the latest generation of ICP-MS instruments and microflow nebulizers, sample requirements for elemental analysis are much lower than just a few years ago; a suite of first-row transition elements can frequently be determined in less than 1 mL of sample. To examine the "traditional" (<0.45 micron) dissolved phase, 25 mm diameter polypropylene syringe filters and all-polyethylene/polypropylene syringes are utilized. Filters are pre-cleaned in the lab with 40 mL of approx. 1 M HCl followed by a clean water rinse. Syringes are pre-cleaned by leaching with hot 1 M HCl followed by a clean water rinse. Sample kits are packed in polyethylene bags for transport to the field. Results are similar to those obtained using 0.4 micron polycarbonate screen filters, though concentrations may differ somewhat depending on the extent of sample pre-rinsing of the filter. Using this method, a multi-year time series of dissolved metals in a remote Rocky Mountain stream has been obtained. To examine the effect of colloidal material on dissolved metal concentrations, 0.02 micron alumina syringe filters have been utilized. Other workers have previously used these filters for examining colloidal Fe distributions in lake

  12. TPS verification with UUT simulation

    NASA Astrophysics Data System (ADS)

    Wang, Guohua; Meng, Xiaofeng; Zhao, Ruixian

    2006-11-01

Verification of a TPS (Test Program Set), or first-article acceptance testing, commonly depends on fault-insertion experiments on the UUT (Unit Under Test). However, the failure modes that can be injected on a UUT are limited, and the approach is almost infeasible when the UUT is still in development or is distributed. To resolve this problem, a TPS verification method based on UUT interface-signal simulation is put forward. Interoperability between the ATS (automatic test system) and the UUT simulation platform is essential for automatic TPS verification. After analyzing the ATS software architecture, an approach to realizing interoperability between the ATS software and the UUT simulation platform is proposed, and the UUT simulation platform software architecture is then derived from the ATS software architecture. The hardware composition and software architecture of the UUT simulation platform are described in detail. The platform has been applied in avionics-equipment TPS development, debugging, and verification.

  13. KAT-7 SCIENCE VERIFICATION: USING H I OBSERVATIONS OF NGC 3109 TO UNDERSTAND ITS KINEMATICS AND MASS DISTRIBUTION

    SciTech Connect

    Carignan, C.; Frank, B. S.; Hess, K. M.; Lucero, D. M.; Randriamampandry, T. H.; Goedhart, S.; Passmoor, S. S.

    2013-09-15

H I observations of the Magellanic-type spiral NGC 3109, obtained with the seven-dish Karoo Array Telescope (KAT-7), are used to analyze its mass distribution. Our results are compared to those obtained using Very Large Array (VLA) data. KAT-7 is a pathfinder of the Square Kilometre Array precursor MeerKAT, which is under construction. The short baselines and low system temperature of the telescope make it sensitive to large-scale, low surface brightness emission. The new observations with KAT-7 allow the measurement of the rotation curve (RC) of NGC 3109 out to 32', doubling the angular extent of existing measurements. A total H I mass of 4.6 × 10⁸ M☉ is derived, 40% more than what is detected by the VLA observations. The observationally motivated pseudo-isothermal dark matter (DM) halo model can reproduce the observed RC very well, but the cosmologically motivated Navarro-Frenk-White DM model gives a much poorer fit to the data. While having a more accurate gas distribution has reduced the discrepancy between the observed RC and the MOdified Newtonian Dynamics (MOND) models, this is done at the expense of having to use unrealistic mass-to-light ratios for the stellar disk and/or very large values for the MOND universal constant a₀. Different distances or H I contents cannot reconcile MOND with the observed kinematics, in view of the small errors on these two quantities. As with many slowly rotating gas-rich galaxies studied recently, the present result for NGC 3109 continues to pose a serious challenge to the MOND theory.
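For reference, the pseudo-isothermal halo used in such mass models has an analytic rotation curve, V(r)² = 4πGρ₀Rc²[1 − (Rc/r)·arctan(r/Rc)], which rises in the core and flattens at large radii. The sketch below evaluates it with illustrative parameter values (not the fitted NGC 3109 values):

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def v_iso(r_kpc, rho0, r_c):
    """Rotation velocity (km/s) of a pseudo-isothermal halo with
    density profile rho(r) = rho0 / (1 + (r/r_c)^2)."""
    x = r_kpc / r_c
    v_sq = 4.0 * np.pi * G * rho0 * r_c**2 * (1.0 - np.arctan(x) / x)
    return np.sqrt(v_sq)

# Illustrative parameters (hypothetical, for demonstration only)
rho0 = 0.05e9   # central density, M_sun / kpc^3
r_c = 2.0       # core radius, kpc

r = np.linspace(0.1, 30.0, 300)
v = v_iso(r, rho0, r_c)
# v rises roughly linearly inside the core and flattens toward the
# asymptotic value v_inf = sqrt(4 * pi * G * rho0 * r_c^2).
```

Fitting `rho0` and `r_c` to an observed RC (after subtracting the stellar and gas contributions) is the standard procedure such papers follow; the NFW profile is fit analogously with its own two parameters.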

  14. Verification of Anderson superexchange in MnO via magnetic pair distribution function analysis and ab initio theory

    DOE PAGESBeta

    Benjamin A. Frandsen; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.

    2016-05-11

Here, we present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ~1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experiment and confirmed by ab initio theory.

  15. Verification of Anderson Superexchange in MnO via Magnetic Pair Distribution Function Analysis and ab initio Theory

    NASA Astrophysics Data System (ADS)

    Frandsen, Benjamin A.; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.

    2016-05-01

    We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ˜1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  16. Verification of Anderson Superexchange in MnO via Magnetic Pair Distribution Function Analysis and ab initio Theory.

    PubMed

    Frandsen, Benjamin A; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J; Staunton, Julie B; Billinge, Simon J L

    2016-05-13

We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ∼1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory. PMID:27232042

  17. The prevalence and distribution of gastrointestinal parasites of stray and refuge dogs in four locations in India.

    PubMed

    Traub, Rebecca J; Pednekar, Riddhi P; Cuttell, Leigh; Porter, Ronald B; Abd Megat Rani, Puteri Azaziah; Gatne, Mukulesh L

    2014-09-15

A gastrointestinal parasite survey of 411 stray and refuge dogs sampled from four geographically and climatically distinct locations in India revealed these animals to represent a significant source of environmental contamination for parasites that pose a zoonotic risk to the public. Hookworms were the most commonly identified parasite in dogs in Sikkim (71.3%), Mumbai (48.8%) and Delhi (39.1%). In Ladakh, which experiences harsh extremes in climate, a competitive advantage was observed for parasites such as Sarcocystis spp. (44.2%), Taenia hydatigena (30.3%) and Echinococcus granulosus (2.3%) that utilise intermediate hosts for the completion of their life cycle. PCR identified Ancylostoma ceylanicum and Ancylostoma caninum to occur sympatrically, either as single or mixed infections, in Sikkim (Northeast) and Mumbai (West). In Delhi, A. caninum was the only species identified in dogs, probably owing to its ability to evade unfavourable climatic conditions by undergoing arrested development in host tissue. The expansion of the known distribution of A. ceylanicum to the west, as far as Mumbai, justifies the renewed interest in this emerging zoonosis and advocates for its surveillance in future human parasite surveys. Of interest was the absence of Trichuris vulpis in dogs, in support of previous canine surveys in India. This study advocates the continuation of birth control programmes in stray dogs, which will undoubtedly have spill-over effects on reducing the levels of environmental contamination with parasite stages. In particular, owners of pet animals exposed to these environments must be extra vigilant in ensuring their animals are regularly dewormed and in maintaining strict standards of household and personal hygiene. PMID:25139393

  18. ETV - VERIFICATION TESTING (ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM)

    EPA Science Inventory

    Verification testing is a major component of the Environmental Technology Verification (ETV) program. The ETV Program was instituted to verify the performance of innovative technical solutions to problems that threaten human health or the environment and was created to substantia...

  19. Swarm Verification

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex

    2008-01-01

Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial needle-in-a-haystack type. Such problems can often be parallelized easily. Alas, none of the usual divide-and-conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks, it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.

  20. Development of Distributed System for Informational Location and Control on the Corporate Web Portal "Analytical Chemistry in Russia"

    NASA Astrophysics Data System (ADS)

    Shirokova, V. I.; Kolotov, V. P.; Alenina, M. V.

A new Internet portal developed by the community of Russian analysts, the corporate web portal "Analytical Chemistry in Russia", was launched in 2001 (http://www.geokhi.ru/~rusanalytchem, http://www.rusanalytchem.org). The portal now contains a large amount of information, much of it stored in an SQL database (MS SQL). Information retrieval is performed through ASP pages containing VB Scripts. Experience with operating such a topical portal has revealed some weak points related to its centralized administration and updating: promptly supporting all requests from different persons and organizations to place information on the portal's server takes considerable effort and time. Further development of the portal is therefore aimed at a distributed system for information allocation and control, while preserving centralized administration to ensure the security and stable operation of the portal. Analysis and testing of the available technologies led us to adopt MS SharePoint technologies. MS SharePoint Team Services (SPTS) was selected as a technology supporting relatively small groups, with MS SQL used to store data and metadata. The latter feature was considered decisive for the selection of SPTS, as it allows easy integration with the database of the whole portal. SPTS was launched as an independent Internet site accessible from the home page of the portal. It serves as a root site leading to dozens of subsites serving different bodies of the Russian Scientific Council on Analytical Chemistry and external organizations located across the whole of Russia. The secure functioning of such a hierarchical system, which includes many remote information suppliers, is based on the use of roles to manage user rights.

  1. Generic Verification Protocol for Verification of Online Turbidimeters

    EPA Science Inventory

    This protocol provides generic procedures for implementing a verification test for the performance of online turbidimeters. The verification tests described in this document will be conducted under the Environmental Technology Verification (ETV) Program. Verification tests will...

  2. VIABLE BACTERIAL AEROSOL PARTICLE SIZE DISTRIBUTIONS IN THE MIDSUMMER ATMOSPHERE AT AN ISOLATED LOCATION IN THE HIGH DESERT CHAPARRAL

    EPA Science Inventory

    The viable bacterial particle size distribution in the atmosphere at the Hanford Nuclear Reservation, Richland, WA during two 1-week periods in June 1992, was observed at three intervals during the day (morning, midday and evening) and at three heights (2, 4, and 8 m) above groun...

  3. Distribution of Foraminifera in the Core Samples of Kollidam and Marakanam Mangrove Locations, Tamil Nadu, Southeast Coast of India

    NASA Astrophysics Data System (ADS)

    Nowshath, M.

    2013-05-01

In order to study the distribution of Foraminifera in the subsurface sediments of a mangrove environment, two core samples were collected with the help of a PVC corer: i) near the boating house, Pitchavaram, from the Kollidam estuary (C1) and ii) in the backwaters of Marakanam (C2). The length of the cores varies. A total of 25 samples from both cores were obtained and subjected to standard micropaleontological and sedimentological analyses for the evaluation of different sediment characteristics. Core No. C1 (Pitchavaram) yielded only foraminifera; for core No. C2 (Marakanam), only the down-core distribution of foraminifera is discussed. The widely utilized classification proposed by Loeblich and Tappan (1987) has been followed for foraminiferal taxonomy, and accordingly 23 foraminiferal species belonging to 18 genera, 10 families, 8 superfamilies and 4 suborders have been reported and illustrated. The foraminiferal species recorded are characteristic of shallow inner-shelf to marginal marine settings and are tropical in nature. Sedimentological parameters such as CaCO3, organic matter and the sand-silt-clay ratio were estimated and their down-core distribution is discussed. An attempt has been made to evaluate the substrate most favourable for foraminiferal abundance in the study area. From the overall distribution of foraminifera in the samples of the Kollidam estuary (Pitchavaram area) and the Marakanam estuary, it is observed that silty sand and sandy silt, respectively, are the substrates most accommodative of foraminiferal populations. The distribution of foraminifera in the core samples indicates that the sediments were deposited under normal, oxygenated environmental conditions.

  4. Verification of VENTSAR

    SciTech Connect

    Simpkins, A.A.

    1995-01-01

The VENTSAR code is an upgraded and improved version of the VENTX code, which estimates concentrations on or near a building from a release at a nearby location. The code calculates concentrations either for a given meteorological exceedance probability or for a given combination of stability and wind speed. A single building lying in the path of the plume can be modeled, and a penthouse can be added to the top of the building. Plume rise may also be considered. Releases can be either chemical or radioactive. Downwind concentrations are determined at user-specified incremental distances. This verification report was prepared to demonstrate that VENTSAR properly executes all algorithms and transfers data correctly. Hand calculations were also performed to ensure proper application of the methodologies.

  5. Verification of VLSI designs

    NASA Technical Reports Server (NTRS)

    Windley, P. J.

    1991-01-01

In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic, as opposed to low-level boolean equivalence verification such as that done using BDDs and model checking. Specification and verification, sometimes called formal methods, is one tool for increasing computer dependability in the face of an exponentially increasing testing effort.

  6. Measuring location, size, distribution, and loading of NiO crystallites in individual SBA-15 pores by electron tomography.

    PubMed

    Friedrich, Heiner; Sietsma, Jelle R A; de Jongh, Petra E; Verkleij, Arie J; de Jong, Krijn P

    2007-08-22

    By the combination of electron tomography with image segmentation, the properties of 299 NiO crystallites contained in 6 SBA-15 pores were studied. A statistical analysis of the particle size showed that crystallites between 2 and 6 nm were present with a distribution maximum at 3 and 4 nm, for the number-weighted and volume-weighted curves, respectively. Interparticle distances between nearest neighbors were 1-3 nm with very few isolated crystallites. In the examined pores, a local loading twice the applied average of 24 wt % NiO was found. This suggests that a very high local loading combined with a high dispersion is achievable. PMID:17655305
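The distinction the abstract draws between the number-weighted and volume-weighted size distributions comes from weighting each crystallite by d³ when histogramming. A minimal sketch with synthetic diameters (not the tomography data; the function name and size model are illustrative):

```python
import numpy as np

def weighted_modes(diameters_nm, bins):
    """Return the modal bin centers of the number-weighted and
    volume-weighted size distributions of a set of particle diameters."""
    num_hist, edges = np.histogram(diameters_nm, bins=bins)
    vol_hist, _ = np.histogram(diameters_nm, bins=bins,
                               weights=diameters_nm**3)  # volume ~ d^3
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(num_hist)], centers[np.argmax(vol_hist)]

# Synthetic crystallite diameters between 2 and 6 nm, skewed to small sizes
rng = np.random.default_rng(0)
d = np.clip(rng.gamma(shape=6.0, scale=0.55, size=299), 2.0, 6.0)

num_mode, vol_mode = weighted_modes(d, bins=np.arange(2.0, 6.5, 0.5))
# Because large particles carry far more volume, the d^3 weighting tends
# to shift the distribution maximum toward larger sizes, mirroring the
# 3 nm (number) vs 4 nm (volume) maxima reported above.
```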

  7. Dependence of the continuum energy distribution of T Tauri stars on the location of the temperature minimum

    NASA Astrophysics Data System (ADS)

    Calvet, N.

    1981-12-01

The influence of the position of the temperature minimum on the continuum flux produced by theoretical models of T Tauri stars is investigated. In particular, continuum fluxes are calculated for models with similar temperature profiles which differ in the position of the temperature minimum. The assumed temperature profiles are presented, and the transfer and equilibrium equations for a 5-level-plus-continuum representation of the hydrogen atom are solved using the complete linearization scheme of Auer and Mihalas (1969). This calculation gives the electron density and departure coefficients for the first five levels of the hydrogen atom, which are then used to calculate non-LTE source functions for the continuum produced by these levels. The resulting continuum fluxes are shown and discussed. It is concluded that the discrepancy between theoretical models and observations in the blue and UV regions of the spectrum found in Calvet (1981) cannot be diminished by changing the location of the temperature minimum.

  8. Requirements, Verification, and Compliance (RVC) Database Tool

    NASA Technical Reports Server (NTRS)

    Rainwater, Neil E., II; McDuffee, Patrick B.; Thomas, L. Dale

    2001-01-01

    This paper describes the development, design, and implementation of the Requirements, Verification, and Compliance (RVC) database used on the International Space Welding Experiment (ISWE) project managed at Marshall Space Flight Center. The RVC is a systems engineer's tool for automating and managing the following information: requirements; requirements traceability; verification requirements; verification planning; verification success criteria; and compliance status. This information normally contained within documents (e.g. specifications, plans) is contained in an electronic database that allows the project team members to access, query, and status the requirements, verification, and compliance information from their individual desktop computers. Using commercial-off-the-shelf (COTS) database software that contains networking capabilities, the RVC was developed not only with cost savings in mind but primarily for the purpose of providing a more efficient and effective automated method of maintaining and distributing the systems engineering information. In addition, the RVC approach provides the systems engineer the capability to develop and tailor various reports containing the requirements, verification, and compliance information that meets the needs of the project team members. The automated approach of the RVC for capturing and distributing the information improves the productivity of the systems engineer by allowing that person to concentrate more on the job of developing good requirements and verification programs and not on the effort of being a "document developer".

  9. Verification of Adaptive Systems

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui; Vassev, Emil; Hinchey, Mike; Rouff, Christopher; Buskens, Richard

    2012-01-01

    Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance for them. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach.

  10. Lunar Pickup Ions Observed by ARTEMIS: Spatial and Temporal Distribution and Constraints on Species and Source Locations

    NASA Technical Reports Server (NTRS)

    Halekas, Jasper S.; Poppe, A. R.; Delory, G. T.; Sarantos, M.; Farrell, W. M.; Angelopoulos, V.; McFadden, J. P.

    2012-01-01

    ARTEMIS observes pickup ions around the Moon, at distances of up to 20,000 km from the surface. The observed ions form a plume with a narrow spatial and angular extent, generally seen in a single energy/angle bin of the ESA instrument. Though ARTEMIS has no mass resolution capability, we can utilize the analytically describable characteristics of pickup ion trajectories to constrain the possible ion masses that can reach the spacecraft at the observation location in the correct energy/angle bin. We find that most of the observations are consistent with a mass range of approx. 20-45 amu, with a smaller fraction consistent with higher masses, and very few consistent with masses below 15 amu. With the assumption that the highest fluxes of pickup ions come from near the surface, the observations favor mass ranges of approx. 20-24 and approx. 36-40 amu. Although many of the observations have properties consistent with a surface or near-surface release of ions, some do not, suggesting that at least some of the observed ions have an exospheric source. Of all the proposed sources for ions and neutrals about the Moon, the pickup ion flux measured by ARTEMIS correlates best with the solar wind proton flux, indicating that sputtering plays a key role in either directly producing ions from the surface, or producing neutrals that subsequently become ionized.
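The mass constraint described above rests on the analytic form of pickup-ion cycloid trajectories: for a fixed observation geometry, the gyroradius scales linearly with mass, so only certain masses can connect a near-surface source to a spacecraft thousands of kilometers away. A toy sketch of that scaling (illustrative solar wind values, not the ARTEMIS analysis):

```python
import numpy as np

AMU = 1.660539e-27   # atomic mass unit, kg
Q_E = 1.602177e-19   # elementary charge, C

def gyroradius_km(mass_amu, v_perp_km_s, b_nt):
    """Larmor radius (km) of a singly charged pickup ion moving at
    v_perp (km/s) perpendicular to a magnetic field of b_nt (nT)."""
    m = mass_amu * AMU
    v = v_perp_km_s * 1e3     # km/s -> m/s
    b = b_nt * 1e-9           # nT -> T
    return m * v / (Q_E * b) / 1e3  # m -> km

# A freshly ionized atom is picked up by the solar wind motional electric
# field and traces a cycloid whose scale is set by the gyroradius.
v_sw = 400.0   # typical solar wind speed, km/s
b_imf = 5.0    # typical interplanetary field strength, nT

for m in (1, 16, 23, 40):
    r_g = gyroradius_km(m, v_sw, b_imf)
    print(f"mass {m:>3} amu: gyroradius ~ {r_g:,.0f} km")
# Heavier ions have proportionally larger gyroradii, so the distance at
# which a plume is observed (up to ~20,000 km here) constrains the mass.
```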

  11. Investigation of Reflectance Distribution and Trend for the Double Ray Located in the Northwest of Tycho Crater

    NASA Astrophysics Data System (ADS)

    Yi, Eung Seok; Kim, Kyeong Ja; Choi, Yi Re; Kim, Yong Ha; Lee, Sung Soon; Lee, Seung Ryeol

    2015-06-01

Analysis of lunar samples returned by the US Apollo missions revealed that the lunar highlands consist of anorthosite, plagioclase, pyroxene, and olivine, while the lunar maria are composed of materials such as basalt and ilmenite. More recently, remote sensing has reduced the time required to investigate the entire lunar surface compared with the approach of returning samples, and has made it possible to determine the existence of specific minerals and to examine wide areas. In this paper, the reflectance distribution and its trend were investigated, and the results were applied to the example of the double ray stretching in parallel lines from the Tycho crater to the third quadrant of Mare Nubium. Basic research and background information for the investigation of lunar surface characteristics are also presented. For this research, instruments aboard the SELenological and ENgineering Explorer (SELENE), a Japanese lunar probe, were used, including the Multiband Imager (MI) of the Lunar Imager / Spectrometer (LISM). The data from these instruments were processed with the image editing and analysis tool ENVI (Exelis Visual Information Solution).

  12. A statistical study of the spatial distribution of Co-operative UK Twin Located Auroral Sounding System (CUTLASS) backscatter power during EISCAT heater beam-sweeping experiments

    NASA Astrophysics Data System (ADS)

    Shergill, H.; Robinson, T. R.; Dhillon, R. S.; Lester, M.; Milan, S. E.; Yeoman, T. K.

    2010-05-01

    High-power electromagnetic waves can excite a variety of plasma instabilities in Earth's ionosphere. These lead to the growth of plasma waves and plasma density irregularities within the heated volume, including patches of small-scale field-aligned electron density irregularities. This paper reports a statistical study of intensity distributions in patches of these irregularities excited by the European Incoherent Scatter (EISCAT) heater during beam-sweeping experiments. The irregularities were detected by the Co-operative UK Twin Located Auroral Sounding System (CUTLASS) coherent scatter radar located in Finland. During these experiments the heater beam direction is steadily changed from northward to southward pointing. Comparisons are made between statistical parameters of CUTLASS backscatter power distributions and modeled heater beam power distributions provided by the EZNEC version 4 software. In general, good agreement between the statistical parameters and the modeled beam is observed, clearly indicating the direct causal connection between the heater beam and the irregularities, despite the sometimes seemingly unpredictable nature of unaveraged results. The results also give compelling evidence in support of the upper hybrid theory of irregularity excitation.

  13. Study of scattering from a sphere with an eccentrically located spherical inclusion by generalized Lorenz-Mie theory: internal and external field distribution.

    PubMed

    Wang, J J; Gouesbet, G; Han, Y P; Gréhan, G

    2011-01-01

Based on recent results in the generalized Lorenz-Mie theory, solutions are obtained for the scattering problem of a sphere with an eccentrically located spherical inclusion illuminated by an arbitrarily shaped electromagnetic beam in an arbitrary orientation. Particular attention is paid to the description and application of such a beam to the scattering problem under study. The theoretical formalism is implemented in a homemade computer program written in FORTRAN. Numerical results concerning the spatial distributions of both internal and external fields are presented in several formats to display exemplifying results properly. More specifically, as an example, we consider the case of a focused fundamental Gaussian beam (TEM(00) mode) illuminating a glass sphere (with a real refractive index of 1.50) containing an eccentrically located spherical water inclusion (with a real refractive index of 1.33). Results are displayed for various parameters of the incident electromagnetic beam (incident orientation, beam waist radius, location of the beam waist center) and of the scatterer system (location of the inclusion inside the host sphere and diameter of the inclusion relative to the host sphere). PMID:21200408

  14. Light dose verification for pleural PDT

    NASA Astrophysics Data System (ADS)

    Sandell, Julia L.; Liang, Xing; Zhu, Timothy

    2012-02-01

The ability to deliver a uniform light dose in photodynamic therapy (PDT) is critical to treatment efficacy. The current protocol in pleural PDT uses 7 isotropic detectors placed at discrete locations within the pleural cavity to monitor light dose throughout treatment. While effort is made to place the detectors uniformly through the cavity, these measurements do not provide an overall uniform measure of the delivered dose. A real-time infrared (IR) tracking camera is in development to better deliver and monitor a more uniform light distribution during treatment. It has been shown previously that there is good agreement between the fluence calculated using IR tracking data and isotropic detector measurements for direct-light phantom experiments. This study presents the results of an extensive phantom study using variable, patient-like geometries and optical properties (both absorption and scattering). Position data are collected from the IR navigation system while light distribution measurements are made concurrently with the aforementioned isotropic detectors. These measurements are compared to fluence calculations made using data from the IR navigation system to verify that our light distribution theory is correct and applicable in patient-like settings. The verification of this treatment planning technique is an important step in bringing real-time fluence monitoring into the clinic for more effective treatment.
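The fluence calculation that such IR tracking data feeds can be illustrated with the simplest direct-light model: an isotropic point source of power P contributes P/(4πr²) to the fluence rate at distance r, accumulated over the tracked source path. A minimal sketch under that assumption (the positions, power, and geometry are hypothetical, and scattered light is ignored):

```python
import numpy as np

def direct_fluence(source_positions_cm, dt_s, power_w, detector_cm):
    """Accumulate direct-light fluence (J/cm^2) at a detector from a moving
    isotropic point source whose position is sampled every dt_s seconds."""
    det = np.asarray(detector_cm, dtype=float)
    total = 0.0
    for pos in source_positions_cm:
        r2 = np.sum((np.asarray(pos, dtype=float) - det) ** 2)  # cm^2
        total += power_w * dt_s / (4.0 * np.pi * r2)  # (W/cm^2) * s
    return total

# Hypothetical treatment: the source circles 5 cm from a detector for 60 s,
# sampled at 10 Hz, delivering 2 W of optical power.
t = np.arange(0, 60.0, 0.1)
path = np.c_[5.0 * np.cos(t), 5.0 * np.sin(t), np.zeros_like(t)]
phi = direct_fluence(path, dt_s=0.1, power_w=2.0, detector_cm=(0, 0, 0))
# Since every point on this path is 5 cm from the origin, the result equals
# the fixed-source value 2.0 * 60 / (4 * pi * 5**2) ~ 0.38 J/cm^2.
```

In practice the tracked positions come from the IR camera and the calculation is repeated for every detector location, which is what allows a measured-vs-calculated comparison like the one described above.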

  15. Environmental Technology Verification Report - Electric Power and Heat Production Using Renewable Biogas at Patterson Farms

    EPA Science Inventory

    The U.S. EPA operates the Environmental Technology Verification program to facilitate the deployment of innovative technologies through performance verification and information dissemination. A technology area of interest is distributed electrical power generation, particularly w...

  16. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  17. Software verification and testing

    NASA Technical Reports Server (NTRS)

    1985-01-01

    General procedures for software verification and validation are provided as a guide for managers, programmers, and analysts involved in software development. The verification and validation procedures described are based primarily on testing techniques. Testing refers to the execution of all or part of a software system for the purpose of detecting errors. Planning, execution, and analysis of tests are outlined in this document. Code reading and static analysis techniques for software verification are also described.

  18. How do wetland type and location affect their hydrological services? - A distributed hydrological modelling study of the contribution of isolated and riparian wetlands

    NASA Astrophysics Data System (ADS)

    Fossey, Maxime; Rousseau, Alain N.; Savary, Stéphane; Royer, Alain

    2015-04-01

    Wetlands play a significant role in the hydrological cycle, reducing peak flows through water storage and sustaining low flows through the slow release of water. However, their impacts on water resource availability and flood control are mainly driven by wetland type and location within a watershed. Despite the general agreement about these major hydrological functions, little is known about their spatial and typological influences. Consequently, assessing the quantitative impact of wetlands on hydrological regimes has become a relevant issue for both the scientific and decision-making communities. Mathematical modelling is a well-accepted framework for investigating the hydrologic response at the watershed scale. Specific isolated and riparian wetland modules were implemented in the PHYSITEL/HYDROTEL distributed hydrological modelling platform to assess the impact of the spatial distribution of isolated and riparian wetlands on the stream flows of the Becancour River watershed, Quebec, Canada. More specifically, the focus was on assessing whether stream flow parameters, including peak flow and low flow, were related to: (i) geographic location of wetlands, (ii) typology of wetlands, and (iii) season of the year. Preliminary results suggest that isolated and riparian wetlands have individual space- and time-dependent impacts on the hydrologic response of the study watershed and provide relevant information for the design of wetland protection and restoration programs.

  19. The discrimination filters to increase the reliability of EEW association on the location using geometric distribution of triggered stations with upgrading a travel time model.

    NASA Astrophysics Data System (ADS)

    Chi, H. C.; Park, J. H.; Lim, I. S.; Seong, Y. J.

    2015-12-01

    In the operation of an Earthquake Early Warning System (EEWS), the alerting criteria are among the most important parameters in optimizing an acceptable warning system. During the early testing stage of EEW systems from 2011 to 2013, we adapted ElarmS, by UC Berkeley BSL, to the Korean seismic network and applied very simple event-alerting criteria combining the number of stations and the magnitude. The real-time test results of the Earthquake Early Warning (EEW) system in Korea showed that all events located within the seismic network with magnitude greater than 3.0 were well detected. However, two events located at sea between the mainland and an island gave false results with magnitudes over 4.0, and one event located on land gave false results with magnitude over 3.0, all related to teleseismic waves. These teleseismic false events were caused by spurious correlation during the association procedure, and the corresponding geometric distribution of associated stations is crescent-shaped. Seismic stations are not deployed uniformly, so the expected bias ratio varies with the evaluated epicentral location. This ratio is calculated in advance and stored in a database, called TrigDB, for discriminating teleseismic-origin false alarms. We developed a method, called 'TrigDB back filling', which updates the location through supplementary association of stations, comparing trigger times between sandwiched stations not previously associated, based on predefined criteria such as travel time. Because EEW programs assume that all events are local, teleseismic events yield more triggered stations through back filling of the unassociated stations than through normal association. We also developed a travel time curve (K-SEIS-1DTT2015) to reduce split events for the EEWS. After applying the K-SEIS-1DTT2015 model, these teleseismic false events are reduced. As a result of these methods we could get more

  20. Proton Therapy Verification with PET Imaging

    PubMed Central

    Zhu, Xuping; Fakhri, Georges El

    2013-01-01

    Proton therapy is very sensitive to uncertainties introduced during treatment planning and dose delivery. PET imaging of proton induced positron emitter distributions is the only practical approach for in vivo, in situ verification of proton therapy. This article reviews the current status of proton therapy verification with PET imaging. The different data detecting systems (in-beam, in-room and off-line PET), calculation methods for the prediction of proton induced PET activity distributions, and approaches for data evaluation are discussed. PMID:24312147

  1. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  2. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  3. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  4. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  5. Influence of pH, layer charge location and crystal thickness distribution on U(VI) sorption onto heterogeneous dioctahedral smectite.

    PubMed

    Guimarães, Vanessa; Rodríguez-Castellón, Enrique; Algarra, Manuel; Rocha, Fernando; Bobos, Iuliu

    2016-11-01

    The UO2(2+) adsorption on smectite (samples BA1, PS2 and PS3) with a heterogeneous structure was investigated at pH 4 (I=0.02M) and pH 6 (I=0.2M) in batch experiments, with the aim of evaluating the influence of pH, layer charge location and crystal thickness distribution. The mean crystal thickness of the smectite crystallites used in the sorption experiments ranges from 4.8 nm (sample PS2) to 5.1 nm (sample PS3) and 7.4 nm (sample BA1). Smaller crystallites have a higher total surface area and sorption capacity. Octahedral charge location favors higher sorption capacity. The Freundlich, Langmuir and SIPS sorption isotherms were used to model the sorption experiments. The surface complexation and cation exchange reactions were modeled using the PHREEQC code to describe the UO2(2+) sorption on smectite. The amount of UO2(2+) adsorbed on the smectite samples decreased significantly at pH 6 and higher ionic strength, where the sorption mechanism was restricted to the edge sites of the smectite. Two binding energy components at 380.8±0.3 and 382.2±0.3 eV, assigned to hydrated UO2(2+) adsorbed by cation exchange and by inner-sphere complexation on the external sites at pH 4, were identified after deconvolution of the U4f7/2 peak by X-ray photoelectron spectroscopy. In addition, two new binding energy components at 380.3±0.3 and 381.8±0.3 eV, assigned to AlOUO2(+) and SiOUO2(+) surface species, were observed at pH 6. PMID:27285596
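
    The Langmuir fit used in such sorption studies can be sketched with its standard linearization Ce/qe = Ce/qmax + 1/(K·qmax), solved by ordinary least squares; the data below are synthetic and illustrative, not values from the paper.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def fit_langmuir(ce, qe):
    """Fit q = qmax*K*Ce / (1 + K*Ce) via the linearization
    Ce/q = Ce/qmax + 1/(K*qmax)."""
    slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
    qmax = 1.0 / slope
    k = slope / intercept  # K = (1/qmax) / (1/(K*qmax))
    return qmax, k

# Synthetic equilibrium data generated from qmax = 2.5, K = 0.8
# (illustrative units, e.g. mg/g and L/mg).
ce = [0.5, 1.0, 2.0, 4.0, 8.0]
qe = [2.5 * 0.8 * c / (1.0 + 0.8 * c) for c in ce]
qmax, k = fit_langmuir(ce, qe)
```

    Because the synthetic data lie exactly on a Langmuir curve, the linearized fit recovers the generating parameters; with real batch data one would compare this against the Freundlich and SIPS fits as the study does.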

  6. Estimation of locations and migration of debris flows on Izu-Oshima Island, Japan, on 16 October 2013 by the distribution of high frequency seismic amplitudes

    NASA Astrophysics Data System (ADS)

    Ogiso, Masashi; Yomogida, Kiyoshi

    2015-06-01

    In the early morning of 16 October 2013, large debris flows induced by heavy rainfall from the approaching Typhoon 1326 (Wipha) left over 30 people dead on Izu-Oshima Island, Japan. We successfully estimated the locations and migration processes of five large debris-flow events using the spatial distribution of high-frequency seismic amplitudes recorded by a seismic network on the island. The flows occurred on the western flank of the island, almost at the same place where large traces of debris flows were identified after the disaster. During each event, the estimated locations migrated downstream with time, from the caldera rim of Miharayama volcano in the center of the island to its western side, with a speed of up to 30 m/s. The estimated time series of source amplitudes differ from event to event, exhibiting a large variety of flow sequences, although the flows seem to have repeated within a relatively narrow area over several tens of minutes. The present approach may be used for early detection and warning to help prevent and reduce this type of disaster in the future.
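
    The amplitude-based location idea — seismic amplitude decays with distance from the source, so a grid search can find the position that best explains the amplitudes observed across the network — can be sketched as below; the station layout, attenuation coefficient B, and decay law A = A0·exp(−B·r)/r are illustrative assumptions, not the study's actual configuration.

```python
import math

B = 0.05  # assumed attenuation coefficient (1/km)

def predicted(a0, src, sta):
    """Amplitude at station sta from a source of amplitude a0 at src,
    assuming geometric spreading plus exponential attenuation."""
    r = math.hypot(src[0] - sta[0], src[1] - sta[1]) + 1e-9
    return a0 * math.exp(-B * r) / r

def locate(stations, amps, grid):
    """Grid-search the source location minimizing amplitude misfit; the
    source amplitude a0 is solved in closed form at each trial point."""
    best = None
    for src in grid:
        g = [predicted(1.0, src, s) for s in stations]
        a0 = sum(a * gi for a, gi in zip(amps, g)) / sum(gi * gi for gi in g)
        misfit = sum((a - a0 * gi) ** 2 for a, gi in zip(amps, g))
        if best is None or misfit < best[0]:
            best = (misfit, src)
    return best[1]

# Hypothetical 4-station network (km) and a synthetic source at (3, 4).
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
amps = [predicted(100.0, (3.0, 4.0), s) for s in stations]
grid = [(x, y) for x in range(11) for y in range(11)]
est = locate(stations, amps, grid)
```

    Repeating the search in successive time windows yields the migrating source locations, as in the study's tracking of the flows downstream.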

  7. SU-E-J-58: Dosimetric Verification of Metal Artifact Effects: Comparison of Dose Distributions Affected by Patient Teeth and Implants

    SciTech Connect

    Lee, M; Kang, S; Lee, S; Suh, T; Lee, J; Park, J; Park, H; Lee, B

    2014-06-01

    Purpose: Implant-supported dentures seem particularly appropriate for edentulous patients, and cancer patients are no exception. As the number of people with dental implants has increased across age groups, critical dosimetric verification of metal artifact effects is required for more accurate head and neck radiation therapy. The purpose of this study is to verify the theoretical analysis of metal (streak and dark) artifacts and to evaluate the dosimetric effects caused by dental implants in CT images, using a humanoid phantom with patient teeth and implants inserted. Methods: The phantom comprises a cylinder shaped to simulate the anatomical structures of a human head and neck. By incorporating various clinical cases, the phantom was made to closely resemble a human. The developed phantom supports two configurations: (i) closed mouth and (ii) opened mouth. RapidArc plans of 4 cases were created in the Eclipse planning system. A total dose of 2000 cGy in 10 fractions was prescribed to the whole planning target volume (PTV) using 6 MV photon beams. The Acuros XB (AXB) advanced dose calculation algorithm, the Analytical Anisotropic Algorithm (AAA), and the progressive resolution optimizer were used in dose optimization and calculation. Results: In both the closed- and opened-mouth phantoms, because dark artifacts formed extensively around the metal implants, dose variation there was higher than in streak-artifact regions. When the PTV was delineated on dark regions or large streak-artifact regions, a maximum dose error of 7.8% and an average difference of 3.2% were observed. The averaged minimum dose to the PTV predicted by AAA was about 5.6% higher, and OAR doses were also 5.2% higher, compared to AXB. Conclusion: The results of this study show that AXB dose calculation involving high-density materials is more accurate than AAA calculation, and AXB was superior to AAA in dose predictions beyond the dark artifact/air cavity region when compared against measurements.

  8. Fuel Retrieval System (FRS) Design Verification

    SciTech Connect

    YANOCHKO, R.M.

    2000-01-27

    This document was prepared as part of an independent review to explain design verification activities already completed, and to define the remaining design verification actions for the Fuel Retrieval System. The Fuel Retrieval Subproject was established as part of the Spent Nuclear Fuel Project (SNF Project) to retrieve and repackage the SNF located in the K Basins. The Fuel Retrieval System (FRS) construction work is complete in the KW Basin, and start-up testing is underway. Design modifications and construction planning are also underway for the KE Basin. An independent review of the design verification process as applied to the K Basin projects was initiated in support of preparation for the SNF Project operational readiness review (ORR).

  9. Voltage verification unit

    DOEpatents

    Martin, Edward J.

    2008-01-15

    A voltage verification unit and method for determining the absence of potentially dangerous potentials within a power supply enclosure without Mode 2 work is disclosed. With this device and method, a qualified worker follows a relatively simple protocol that involves a function test (hot, cold, hot) of the voltage verification unit before Lock Out/Tag Out; once the Lock Out/Tag Out is completed, testing or "trying" can be accomplished by simply reading a display on the voltage verification unit, without exposure of the operator to the interior of the voltage supply enclosure. According to a preferred embodiment, the voltage verification unit includes test leads to allow diagnostics with other meters, without the necessity of accessing potentially dangerous bus bars or the like.

  10. Verification of RADTRAN

    SciTech Connect

    Kanipe, F.L.; Neuhauser, K.S.

    1995-12-31

    This document presents details of the verification process of the RADTRAN computer code which was established for the calculation of risk estimates for radioactive materials transportation by highway, rail, air, and waterborne modes.

  11. ENVIRONMENTAL TECHNOLOGY VERIFICATION--FUELCELL ENERGY, INC.: DFC 300A MOLTEN CARBONATE FUEL CELL COMBINED HEAT AND POWER SYSTEM

    EPA Science Inventory

    The U.S. EPA operates the Environmental Technology Verification program to facilitate the deployment of innovative technologies through performance verification and information dissemination. A technology area of interest is distributed electrical power generation, particularly w...

  12. Pyroclastic Eruptions in a Mars Climate Model: The Effects of Grain Size, Plume Height, Density, Geographical Location, and Season on Ash Distribution

    NASA Astrophysics Data System (ADS)

    Kerber, L. A.; Head, J. W.; Madeleine, J.; Wilson, L.; Forget, F.

    2010-12-01

    Pyroclastic volcanism has played a major role in the geologic history of the planet Mars. In addition to several highland patera features interpreted to be composed of pyroclastic material, there are a number of vast, fine-grained, friable deposits which may have a volcanic origin. The physical processes involved in the explosive eruption of magma, including the nucleation of bubbles, the fragmentation of magma, the incorporation of atmospheric gases, the formation of a buoyant plume, and the fall-out of individual pyroclasts has been modeled extensively for martian conditions [Wilson, L., J.W. Head (2007), Explosive volcanic eruptions on Mars: Tephra and accretionary lapilli formation, dispersal and recognition in the geologic record, J. Volcanol. Geotherm. Res. 163, 83-97]. We have further developed and expanded this original model in order to take into account differing temperature, pressure, and wind regimes found at different altitudes, at different geographic locations, and during different martian seasons. Using a well-established Mars global circulation model [LMD-GCM, Forget, F., F. Hourdin, R. Fournier, C. Hourdin, O. Talagrand (1999), Improved general circulation models of the martian atmosphere from the surface to above 80 km, J. Geophys. Res. 104, 24,155-24,176] we are able to link the volcanic eruption model of Wilson and Head (2007) to the spatially and temporally dynamic GCM temperature, pressure, and wind profiles to create three-dimensional maps of expected ash deposition on the surface. Here we present results exploring the effects of grain-size distribution, plume height, density of ash, latitude, season, and atmospheric pressure on the areal extent and shape of the resulting ash distribution. 
Our results show that grain-size distribution and plume height most strongly affect the distance traveled by the pyroclasts from the vent, while latitude and season can have a large effect on the direction in which the pyroclasts travel and the final shape

  13. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples; Jerome Eyer

    2001-05-01

    The Earth Sciences and Resources Institute, University of South Carolina is conducting a 14 month proof of concept study to determine the location and distribution of subsurface Dense Nonaqueous Phase Liquid (DNAPL) carbon tetrachloride (CCl{sub 4}) contamination at the 216-Z-9 crib, 200 West area, Department of Energy (DOE) Hanford Site, Washington by use of two-dimensional high resolution seismic reflection surveys and borehole geophysical data. The study makes use of recent advances in seismic reflection amplitude versus offset (AVO) technology to directly detect the presence of subsurface DNAPL. The techniques proposed are a noninvasive means towards site characterization and direct free-phase DNAPL detection. This report covers the results of Task 3 and change of scope of Tasks 4-6. Task 1 contains site evaluation and seismic modeling studies. The site evaluation consists of identifying and collecting preexisting geological and geophysical information regarding subsurface structure and the presence and quantity of DNAPL. The seismic modeling studies were undertaken to determine the likelihood that an AVO response exists and its probable manifestation. Task 2 is the design and acquisition of 2-D seismic reflection data designed to image areas of probable high concentration of DNAPL. Task 3 is the processing and interpretation of the 2-D data. Task 4, 5, and 6 were designing, acquiring, processing, and interpretation of a three dimensional seismic survey (3D) at the Z-9 crib area at 200 west area, Hanford.

  14. Eldercare Locator

    MedlinePlus

    Welcome to the Eldercare Locator, a public service of the U.S. Administration on Aging connecting you to services for older adults ...

  15. Alu and L1 sequence distributions in Xq24-q28 and their comparative utility in YAC contig assembly and verification

    SciTech Connect

    Porta, G.; Zucchi, I.; Schlessinger, D.; Hillier, L.; Green, P.; Nowotny, V.; D`Urso, M.

    1993-05-01

    The contents of Alu- and L1-containing TaqI restriction fragments were assessed by Southern blot analyses across YAC contigs already assembled by other means and localized within Xq24-q28. Fingerprinting patterns of YACs in contigs were concordant. Using software based on that of M. V. Olson et al. to analyze digitized data on fragment sizes, fingerprinting itself could establish matches among about 40% of a test group of 435 YACs. At 100-kb resolution, both repetitive elements were found throughout the region, with no apparent enrichment of Alu or L1 in DNA of G compared to that found in R bands. However, consistent with a random overall distribution, delimited regions of up to 100 kb contained clusters of repetitive elements. The local concentrations may help to account for the reported differential hybridization of Alu and L1 probes to segments of metaphase chromosomes. 40 refs., 6 figs., 2 tabs.

  16. Assessment of total and organic vanadium levels and their bioaccumulation in edible sea cucumbers: tissues distribution, inter-species-specific, locational differences and seasonal variations.

    PubMed

    Liu, Yanjun; Zhou, Qingxin; Xu, Jie; Xue, Yong; Liu, Xiaofang; Wang, Jingfeng; Xue, Changhu

    2016-02-01

    The objective of this study is to investigate the levels, inter-species and locational differences, and seasonal variations of vanadium in sea cucumbers, and to further validate several potential factors controlling the distribution of metals in sea cucumbers. Vanadium levels were evaluated in samples of edible sea cucumbers and were shown to differ across seasons, species and sampling sites. High vanadium concentrations were measured in the sea cucumbers, and all of the vanadium detected was in an organic form. Mean vanadium concentrations were considerably higher in the blood (sea cucumber) than in the other studied tissues. The highest concentration of vanadium (2.56 μg g(-1)), as well as a higher proportion of organic vanadium (85.5 %), was observed in the Holothuria scabra samples compared with all other samples. Vanadium levels in Apostichopus japonicus from Bohai Bay and the Yellow Sea show marked seasonal variations. Average values of 1.09 μg g(-1) of total vanadium and 0.79 μg g(-1) of organic vanadium were obtained in various species of sea cucumbers. Significant positive correlations were observed between vanadium in the seawater and organic vanadium in the sea cucumber (r = 81.67 %, p = 0.00), as well as between vanadium in the sediment and organic vanadium in the sea cucumber (r = 77.98 %, p = 0.00). Vanadium concentrations depend on the season (salinity, temperature), species, sampling site and seawater environment (seawater, sediment). Given the adverse toxicological effects of inorganic vanadium and its positive role in controlling the development of diabetes in humans, a regular monitoring programme of vanadium content in edible sea cucumbers can be recommended. PMID:25732906

  17. Verification of statistical method CORN for modeling of microfuel in the case of high grain concentration

    NASA Astrophysics Data System (ADS)

    Chukbar, B. K.

    2015-12-01

    Two methods of modeling a double-heterogeneity fuel are studied: the deterministic positioning and the statistical method CORN of the MCU software package. The effect of distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for the cases of the microfuel concentration up to 170 cm-3 in a pebble bed are presented. The admissibility of homogenization of the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.

  18. Verification of statistical method CORN for modeling of microfuel in the case of high grain concentration

    SciTech Connect

    Chukbar, B. K.

    2015-12-15

    Two methods of modeling a double-heterogeneity fuel are studied: the deterministic positioning and the statistical method CORN of the MCU software package. The effect of distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for the cases of the microfuel concentration up to 170 cm-3 in a pebble bed are presented. The admissibility of homogenization of the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.

  19. Secure optical verification using dual phase-only correlation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Zhang, Yan; Xie, Zhenwei; Liu, Zhengjun; Liu, Shutian

    2015-02-01

    We introduce a security-enhanced optical verification system using dual phase-only correlation based on a novel correlation algorithm. By employing a nonlinear encoding, the inherent locks of the verification system are obtained as real-valued random distributions, and the identity keys assigned to authorized users are designed as pure phases. The verification process is implemented as a two-step correlation, so only authorized identity keys can output the discriminating auto-correlation and cross-correlation signals that satisfy the preset threshold values. Compared with traditional phase-only-correlation-based verification systems, a higher security level against counterfeiting and collisions is obtained, which is demonstrated by cryptanalysis using known attacks, such as the known-plaintext attack and the chosen-plaintext attack. Optical experiments as well as necessary numerical simulations are carried out to support the proposed verification method.
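
    The phase-only correlation at the heart of such verification systems keeps only the phase of the cross-spectrum, so the correlation output collapses to a sharp peak whose position encodes the relative shift between two signals. A minimal 1-D sketch follows, using a naive DFT for self-containment; the paper's dual-correlation and nonlinear-encoding steps are not reproduced, and the signal values are illustrative.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (fine for a short demo)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def phase_only_correlation(f, g):
    """Normalize the cross-spectrum to unit magnitude, then invert: the
    peak index gives the circular shift of f relative to g."""
    F, G = dft(f), dft(g)
    cross = []
    for a, b in zip(F, G):
        c = a * b.conjugate()
        cross.append(c / abs(c) if abs(c) > 1e-12 else 0j)
    return [v.real for v in dft(cross, inverse=True)]

# g is f circularly shifted by 3 samples; the POC peak should sit at index 3.
f = [0.0, 1.0, 3.0, 2.0, 0.0, 1.0, 4.0, 1.0]
shift = 3
g = [f[(i - shift) % len(f)] for i in range(len(f))]
poc = phase_only_correlation(g, f)
peak = max(range(len(poc)), key=lambda i: poc[i])
```

    Discarding the magnitude spectrum is what makes the peak so discriminating: matched inputs give a near-delta response, while mismatched keys spread the energy across the output.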

  20. Nuclear disarmament verification

    SciTech Connect

    DeVolpi, A.

    1993-12-31

    Arms control treaties, unilateral actions, and cooperative activities -- reflecting the defusing of East-West tensions -- are causing nuclear weapons to be disarmed and dismantled worldwide. In order to provide for future reductions and to build confidence in the permanency of this disarmament, verification procedures and technologies would play an important role. This paper outlines arms-control objectives, treaty organization, and actions that could be undertaken. For the purposes of this Workshop on Verification, nuclear disarmament has been divided into five topical subareas: Converting nuclear-weapons production complexes, Eliminating and monitoring nuclear-weapons delivery systems, Disabling and destroying nuclear warheads, Demilitarizing or non-military utilization of special nuclear materials, and Inhibiting nuclear arms in non-nuclear-weapons states. This paper concludes with an overview of potential methods for verification.

  1. Wind gust warning verification

    NASA Astrophysics Data System (ADS)

    Primo, Cristina

    2016-07-01

    Operational meteorological centres around the world increasingly include warnings among their regular forecast products. Warnings are issued to alert the public to extreme weather situations that might occur, leading to damage and losses. By forecasting these extreme events, meteorological centres help their potential users prevent the damage or losses they might suffer. However, verifying these warnings requires specific methods, not only because such events happen rarely, but also because defining a warning adds a new temporal dimension, namely the time window of the forecasted event. This paper analyses the issues that arise in warning verification. It also proposes some new verification approaches that can be applied to wind warnings. These new techniques are then applied to a real-life example, the verification of wind gust warnings at the German Meteorological Centre ("Deutscher Wetterdienst"). Finally, the results obtained are discussed.
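
    Warning verification conventionally starts from a 2×2 contingency table of hits, false alarms, misses and correct rejections, from which scores such as the probability of detection (POD), false-alarm ratio (FAR) and critical success index (CSI) follow. The sketch below uses these standard definitions with illustrative data; the paper's window-based matching of warnings to events is not reproduced.

```python
def contingency(warned, observed):
    """Count hits, false alarms, misses and correct rejections over
    paired boolean series (warning issued / event observed)."""
    pairs = list(zip(warned, observed))
    hits = sum(1 for w, o in pairs if w and o)
    false_alarms = sum(1 for w, o in pairs if w and not o)
    misses = sum(1 for w, o in pairs if o and not w)
    correct_neg = sum(1 for w, o in pairs if not w and not o)
    return hits, false_alarms, misses, correct_neg

def scores(hits, false_alarms, misses):
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false-alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Illustrative warning/event record over eight verification windows.
warned   = [True, True, False, True, False, False, True, False]
observed = [True, False, False, True, True, False, True, False]
h, fa, m, cn = contingency(warned, observed)
pod, far, csi = scores(h, fa, m)
```

    For rare events the correct rejections dominate, which is why rare-event verification leans on scores like CSI that ignore them.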

  2. Explaining Verification Conditions

    NASA Technical Reports Server (NTRS)

    Deney, Ewen; Fischer, Bernd

    2006-01-01

    The Hoare approach to program verification relies on the construction and discharge of verification conditions (VCs) but offers no support to trace, analyze, and understand the VCs themselves. We describe a systematic extension of the Hoare rules by labels so that the calculus itself can be used to build up explanations of the VCs. The labels are maintained through the different processing steps and rendered as natural language explanations. The explanations can easily be customized and can capture different aspects of the VCs; here, we focus on their structure and purpose. The approach is fully declarative and the generated explanations are based only on an analysis of the labels rather than directly on the logical meaning of the underlying VCs or their proofs. Keywords: program verification, Hoare calculus, traceability.

  3. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    steps in the process. Verification ensures that the CFD code is solving the equations as intended (no errors in the implementation). This is typically done either through the method of manufactured solutions (MMS) or through careful step-by-step comparisons with other verified codes. After the verification step is concluded, validation is performed to document the ability of the turbulence model to represent different types of flow physics. Validation can involve a large number of test case comparisons with experiments, theory, or DNS. Organized workshops have proved to be valuable resources for the turbulence modeling community in its pursuit of turbulence modeling verification and validation. Workshop contributors using different CFD codes run the same cases, often according to strict guidelines, and compare results. Through these comparisons, it is often possible to (1) identify codes that have likely implementation errors, and (2) gain insight into the capabilities and shortcomings of different turbulence models to predict the flow physics associated with particular types of flows. These are valuable lessons because they help bring consistency to CFD codes by encouraging the correction of faulty programming and facilitating the adoption of better models. They also sometimes point to specific areas needed for improvement in the models. In this paper, several recent workshops are summarized primarily from the point of view of turbulence modeling verification and validation. Furthermore, the NASA Langley Turbulence Modeling Resource website is described. The purpose of this site is to provide a central location where RANS turbulence models are documented, and test cases, grids, and data are provided. The goal of this paper is to provide an abbreviated survey of turbulence modeling verification and validation efforts, summarize some of the outcomes, and give some ideas for future endeavors in this area.
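
    The method of manufactured solutions mentioned above can be sketched for a 1-D Poisson solver: choose u(x) = sin(πx), manufacture the source term f = π²·sin(πx) so that u is the exact solution of −u″ = f, solve on successively refined grids, and confirm that the observed order of accuracy matches the scheme's formal second order. This is a generic illustration, not tied to any particular CFD code.

```python
import math

def solve_poisson(n):
    """Second-order finite-difference solve of -u'' = f on (0,1) with
    u(0) = u(1) = 0 and manufactured f = pi^2 sin(pi x), whose exact
    solution is u = sin(pi x). Returns the max-norm error."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    # Thomas algorithm for the tridiagonal system (-1, 2, -1) u = h^2 f.
    m = n - 1
    a, b, c = [-1.0] * m, [2.0] * m, [-1.0] * m
    d = [f[i + 1] * h * h for i in range(m)]
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, x[1:-1]))

e1, e2 = solve_poisson(32), solve_poisson(64)
observed_order = math.log(e1 / e2, 2)  # should be close to 2
```

    An observed order well below the formal order on such a smooth manufactured solution is the classic signature of an implementation error, which is exactly what MMS-based code verification is designed to catch.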

  4. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
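    As a toy illustration of the validation comparisons recommended above, a computed quantity can be checked against experimental means together with their measurement uncertainty (all numbers invented; published validation metrics such as the area metric are more elaborate):

```python
import numpy as np

# Experimental means with 1-sigma measurement uncertainty, and model output
# at the same conditions (all numbers invented for illustration).
y_exp = np.array([1.02, 1.10, 1.25, 1.41])
u_exp = np.array([0.05, 0.05, 0.06, 0.07])
y_mod = np.array([1.00, 1.12, 1.30, 1.38])

metric = float(np.mean(np.abs(y_mod - y_exp)))               # mean absolute discrepancy
within = bool(np.all(np.abs(y_mod - y_exp) <= 2.0 * u_exp))  # inside 2-sigma band
```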

  5. Spectroscopic verification of zinc absorption and distribution in the desert plant Prosopis juliflora-velutina (velvet mesquite) treated with ZnO nanoparticles

    PubMed Central

    Hernandez-Viezcas, J.A.; Castillo-Michel, H.; Servin, A.D.; Peralta-Videa, J.R.; Gardea-Torresdey, J.L.

    2012-01-01

    The impact of metal nanoparticles (NPs) on biological systems, especially plants, is still not well understood. The aim of this research was to determine the effects of zinc oxide (ZnO) NPs in velvet mesquite (Prosopis juliflora-velutina). Mesquite seedlings were grown for 15 days in hydroponics with ZnO NPs (10 nm) at concentrations varying from 500 to 4000 mg L−1. Zinc concentrations in roots, stems and leaves were determined by inductively coupled plasma optical emission spectroscopy (ICP-OES). Plant stress was examined by the specific activity of catalase (CAT) and ascorbate peroxidase (APOX); while the biotransformation of ZnO NPs and Zn distribution in tissues was determined by X-ray absorption spectroscopy (XAS) and micro X-ray fluorescence (μXRF), respectively. ICP-OES results showed that Zn concentrations in tissues (2102 ± 87, 1135 ± 56, and 628 ± 130 mg kg−1 d wt in roots, stems, and leaves, respectively) were found at 2000 mg ZnO NPs L−1. Stress tests showed that ZnO NPs increased CAT in roots, stems, and leaves, while APOX increased only in stems and leaves. XANES spectra demonstrated that ZnO NPs were not present in mesquite tissues, while Zn was found as Zn(II), resembling the spectra of Zn(NO3)2. The μXRF analysis confirmed the presence of Zn in the vascular system of roots and leaves in ZnO NP treated plants. PMID:22820414

  6. TFE verification program

    NASA Astrophysics Data System (ADS)

    1994-01-01

    This is the final semiannual progress report for the Thermionic Fuel Element (TFE) Verification Program. A decision was made in August 1993 to begin a Close Out Program on October 1, 1993. Final reports summarizing the design analyses and test activities of the TFE Verification Program will be written as stand-alone documents, one for each task. The objective of the semiannual progress report is to summarize the technical results obtained during the latest reporting period. The information presented herein includes evaluated test data, design evaluations, the results of analyses, and the significance of results.

  7. General Environmental Verification Specification

    NASA Technical Reports Server (NTRS)

    Milne, J. Scott, Jr.; Kaufman, Daniel S.

    2003-01-01

    The NASA Goddard Space Flight Center's General Environmental Verification Specification (GEVS) for STS and ELV Payloads, Subsystems, and Components is currently being revised based on lessons learned from GSFC engineering and flight assurance. The GEVS has been used by Goddard flight projects for the past 17 years as a baseline from which to tailor their environmental test programs. A summary of the requirements and updates is presented along with the rationale behind the changes. The major test areas covered by the GEVS include mechanical, thermal, and EMC, as well as more general requirements for the planning and tracking of verification programs.

  8. Requirement Assurance: A Verification Process

    NASA Technical Reports Server (NTRS)

    Alexander, Michael G.

    2011-01-01

    Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "did the product meet the stated specification, performance, or design documentation?". In order to ensure the system was built correctly, the practicing system engineer must verify each product requirement using verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts" which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the System Engineer in NPR 7123.1A NASA Systems Engineering Processes and Requirements with regards to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.
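    A minimal sketch of such a closure check, recording a verification method and closure artifacts per requirement (requirement IDs and artifact names are hypothetical):

```python
# Each requirement carries its verification method and closure artifacts;
# a requirement with no artifacts has no objective evidence and is not
# yet verified.
requirements = {
    "REQ-001": {"method": "test", "artifacts": ["TR-001.pdf"]},
    "REQ-002": {"method": "analysis", "artifacts": []},
    "REQ-003": {"method": "inspection", "artifacts": ["INSP-007.pdf"]},
}

open_items = [rid for rid, r in requirements.items() if not r["artifacts"]]
```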

  9. Context Effects in Sentence Verification.

    ERIC Educational Resources Information Center

    Kiger, John I.; Glass, Arnold L.

    1981-01-01

    Three experiments examined what happens to the reaction time to verify easy items when they are mixed with difficult items in a verification task. Subjects' verification of simple arithmetic equations and sentences took longer when these items were placed in a difficult list. Difficult sentences also slowed the verification of easy arithmetic equations. (Author/RD)

  10. ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    This presentation will be given at the EPA Science Forum 2005 in Washington, DC. The Environmental Technology Verification Program (ETV) was initiated in 1995 to speed implementation of new and innovative commercial-ready environmental technologies by providing objective, 3rd pa...

  11. Computer Graphics Verification

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Video processing creates technical animation sequences using studio quality equipment to realistically represent fluid flow over space shuttle surfaces, helicopter rotors, and turbine blades. Computer systems Co-op, Tim Weatherford, performing computer graphics verification. Part of Co-op brochure.

  12. FPGA Verification Accelerator (FVAX)

    NASA Technical Reports Server (NTRS)

    Oh, Jane; Burke, Gary

    2008-01-01

    Is verification acceleration possible? Increasing the visibility of the internal nodes of the FPGA results in much faster debug time, and forcing internal signals directly allows a problem condition to be set up very quickly. Is this all? No; this is part of a comprehensive effort to improve the JPL FPGA design and V&V process.

  13. Telescope performance verification

    NASA Astrophysics Data System (ADS)

    Swart, Gerhard P.; Buckley, David A. H.

    2004-09-01

    While Systems Engineering appears to be widely applied on the very large telescopes, it is lacking in the development of many of the medium and small telescopes currently in progress. The latter projects rely heavily on the experience of the project team, verbal requirements and conjecture based on the successes and failures of other telescopes. Furthermore, it is considered an unaffordable luxury to "close-the-loop" by carefully analysing and documenting the requirements and then verifying the telescope's compliance with them. In this paper the authors contend that a Systems Engineering approach is a keystone in the development of any telescope and that verification of the telescope's performance is not only an important management tool but also forms the basis upon which successful telescope operation can be built. The development of the Southern African Large Telescope (SALT) has followed such an approach and is now in the verification phase of its development. Parts of the SALT verification process will be discussed in some detail to illustrate the suitability of this approach, including oversight by the telescope shareholders, recording of requirements and results, design verification and performance testing. Initial test results will be presented where appropriate.

  14. Multibody modeling and verification

    NASA Technical Reports Server (NTRS)

    Wiens, Gloria J.

    1989-01-01

    A summary of a ten week project on flexible multibody modeling, verification and control is presented. Emphasis was on the need for experimental verification. A literature survey was conducted for gathering information on the existence of experimental work related to flexible multibody systems. The first portion of the assigned task encompassed the modeling aspects of flexible multibodies that can undergo large angular displacements. Research in the area of modeling aspects was also surveyed, with special attention given to the component mode approach. Resulting from this is a research plan on various modeling aspects to be investigated over the next year. The relationship between the large angular displacements, boundary conditions, mode selection, and system modes is of particular interest. The other portion of the assigned task was the generation of a test plan for experimental verification of analytical and/or computer analysis techniques used for flexible multibody systems. Based on current and expected frequency ranges of flexible multibody systems to be used in space applications, an initial test article was selected and designed. A preliminary TREETOPS computer analysis was run to ensure frequency content in the low frequency range, 0.1 to 50 Hz. The initial specifications of experimental measurement and instrumentation components were also generated. Resulting from this effort is the initial multi-phase plan for a Ground Test Facility of Flexible Multibody Systems for Modeling Verification and Control. The plan focuses on the Multibody Modeling and Verification (MMV) Laboratory. General requirements of the Unobtrusive Sensor and Effector (USE) and the Robot Enhancement (RE) laboratories were considered during the laboratory development.

  15. Ada(R) Test and Verification System (ATVS)

    NASA Technical Reports Server (NTRS)

    Strelich, Tom

    1986-01-01

    The Ada Test and Verification System (ATVS) functional description and high level design are completed and summarized. The ATVS will provide a comprehensive set of test and verification capabilities specifically addressing the features of the Ada language, support for embedded system development, distributed environments, and advanced user interface capabilities. Its design emphasis was on effective software development environment integration and flexibility to ensure its long-term use in the Ada software development community.

  16. Effect of object identification algorithms on feature based verification scores

    NASA Astrophysics Data System (ADS)

    Weniger, Michael; Friederichs, Petra

    2015-04-01

    Many modern spatial verification techniques rely on feature identification algorithms. We study the importance of the choice of algorithm and its parameters for the resulting scores. SAL is used as an example to show that these choices have a statistically significant impact on the distributions of object dependent scores. Non-continuous operators used for feature identification are identified as the underlying reason for the observed stability issues, with implications for many feature based verification techniques.
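    The instability can be seen with a toy object-identification step: two synthetic precipitation cells joined by a low bridge form one object under a slightly lower threshold and two under a slightly higher one, so object-dependent scores jump discontinuously (SAL's actual identification procedure is more elaborate than this):

```python
import numpy as np
from collections import deque

def count_objects(field, thresh):
    # Label 4-connected regions exceeding the threshold: a simple stand-in
    # for the object-identification step of feature-based verification.
    mask = field >= thresh
    seen = np.zeros(mask.shape, dtype=bool)
    rows, cols = mask.shape
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and not seen[i, j]:
                count += 1
                q = deque([(i, j)])
                seen[i, j] = True
                while q:                      # breadth-first flood fill
                    r, c = q.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and mask[rr, cc] and not seen[rr, cc]):
                            seen[rr, cc] = True
                            q.append((rr, cc))
    return count

# Two synthetic cells joined by a low bridge (saddle value ~0.474).
x = np.linspace(-3.0, 3.0, 61)
X, Y = np.meshgrid(x, x)
field = np.exp(-((X - 1.2)**2 + Y**2)) + np.exp(-((X + 1.2)**2 + Y**2))

n_lo = count_objects(field, 0.45)   # bridge survives: one object
n_hi = count_objects(field, 0.50)   # bridge cut: two objects
```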

  17. 28 CFR 74.6 - Location of eligible persons.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Location of eligible persons. 74.6... PROVISION Verification of Eligibility § 74.6 Location of eligible persons. The Office shall compare the... information system to determine if such persons are living or deceased and, if living, the present location...

  18. RESRAD-BUILD verification.

    SciTech Connect

    Kamboj, S.; Yu, C.; Biwer, B. M.; Klett, T.

    2002-01-31

    The results generated by the RESRAD-BUILD code (version 3.0) were verified with hand or spreadsheet calculations using equations given in the RESRAD-BUILD manual for different pathways. For verification purposes, different radionuclides--H-3, C-14, Na-22, Al-26, Cl-36, Mn-54, Co-60, Au-195, Ra-226, Ra-228, Th-228, and U-238--were chosen to test all pathways and models. Tritium, Ra-226, and Th-228 were chosen because of the special tritium and radon models in the RESRAD-BUILD code. Other radionuclides were selected to represent a spectrum of radiation types and energies. Verification of the RESRAD-BUILD code was conducted with an initial check of all the input parameters for correctness against their original source documents. Verification of the calculations was performed external to the RESRAD-BUILD code with Microsoft® Excel to verify all the major portions of the code. In some cases, RESRAD-BUILD results were compared with those of external codes, such as MCNP (Monte Carlo N-particle) and RESRAD. The verification was conducted on a step-by-step basis and used different test cases as templates. The following types of calculations were investigated: (1) source injection rate, (2) air concentration in the room, (3) air particulate deposition, (4) radon pathway model, (5) tritium model for volume source, (6) external exposure model, (7) different pathway doses, and (8) time dependence of dose. Some minor errors were identified in version 3.0; these errors have been corrected in later versions of the code. Some possible improvements in the code were also identified.
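    In the same spirit as the spreadsheet checks described, a single decay calculation can be spot-checked against an algebraically independent formulation (the routine below is an illustrative stand-in, not RESRAD-BUILD code):

```python
import math

half_life_yr = 12.32                  # tritium (H-3) half-life
lam = math.log(2.0) / half_life_yr

def activity(a0, t_yr):
    # Routine under test: A(t) = A0 * exp(-lambda * t)
    return a0 * math.exp(-lam * t_yr)

code_val = activity(1.0, 5.0)                    # value from the routine
hand_val = 1.0 * 2.0 ** (-5.0 / half_life_yr)    # independent "spreadsheet" form
rel_diff = abs(code_val - hand_val) / hand_val   # should be at round-off level
```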

  19. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
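    The core idea, many convergence-order estimates summarized by a median rather than a mean, can be sketched on synthetic data (the actual Robust Verification methodology solves constrained optimization problems rather than this closed-form triple estimate):

```python
import numpy as np

# Grid spacings halved each refinement, and a computed quantity following
# f(h) = f_exact + C * h**p with small perturbations (synthetic data, p = 2).
h = np.array([1/8, 1/16, 1/32, 1/64, 1/128])
f = 1.0 + 0.5 * h**2 + np.array([0.0, 1e-5, -2e-5, 1e-5, 0.0])

# Observed order from each consecutive triple (refinement ratio r = 2).
orders = [float(np.log2((f[i] - f[i+1]) / (f[i+1] - f[i+2])))
          for i in range(len(f) - 2)]

p_med = float(np.median(orders))   # the median resists anomalous triples
```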

  20. Visual inspection for CTBT verification

    SciTech Connect

    Hawkins, W.; Wohletz, K.

    1997-03-01

    On-site visual inspection will play an essential role in future Comprehensive Test Ban Treaty (CTBT) verification. Although seismic and remote sensing techniques are the best understood and most developed methods for detection of evasive testing of nuclear weapons, visual inspection can greatly augment the certainty and detail of understanding provided by these more traditional methods. Not only can visual inspection offer "ground truth" in cases of suspected nuclear testing, but it also can provide accurate source location and testing media properties necessary for detailed analysis of seismic records. For testing in violation of the CTBT, an offending party may attempt to conceal the test, which most likely will be achieved by underground burial. While such concealment may not prevent seismic detection, evidence of test deployment, location, and yield can be disguised. In this light, if a suspicious event is detected by seismic or other remote methods, visual inspection of the event area is necessary to document any evidence that might support a claim of nuclear testing and provide data needed to further interpret seismic records and guide further investigations. However, the methods for visual inspection are not widely known nor appreciated, and experience is presently limited. Visual inspection can be achieved by simple, non-intrusive means, primarily geological in nature, and it is the purpose of this report to describe the considerations, procedures, and equipment required to field such an inspection.

  1. TFE verification program

    NASA Astrophysics Data System (ADS)

    1990-03-01

    The information presented herein will include evaluated test data, design evaluations, the results of analyses and the significance of results. The program objective is to demonstrate the technology readiness of a Thermionic Fuel Element (TFE) suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range, and a full-power life of 7 years. The TFE Verification Program builds directly on the technology and data base developed in the 1960s and 1970s in an AEC/NASA program, and in the SP-100 program conducted in 1983, 1984 and 1985. In the SP-100 program, the attractive features of thermionic power conversion technology were recognized but concern was expressed over the lack of fast reactor irradiation data. The TFE Verification Program addresses this concern. The general logic and strategy of the program to achieve its objectives is shown. Five prior programs form the basis for the TFE Verification Program: (1) AEC/NASA program of the 1960s and early 1970s; (2) SP-100 concept development program; (3) SP-100 thermionic technology program; (4) Thermionic irradiations program in TRIGA in FY-88; and (5) Thermionic Program in 1986 and 1987.

  2. TFE Verification Program

    SciTech Connect

    Not Available

    1990-03-01

    The objective of the semiannual progress report is to summarize the technical results obtained during the latest reporting period. The information presented herein will include evaluated test data, design evaluations, the results of analyses and the significance of results. The program objective is to demonstrate the technology readiness of a TFE suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range, and a full-power life of 7 years. The TFE Verification Program builds directly on the technology and data base developed in the 1960s and 1970s in an AEC/NASA program, and in the SP-100 program conducted in 1983, 1984 and 1985. In the SP-100 program, the attractive features of thermionic power conversion technology were recognized but concern was expressed over the lack of fast reactor irradiation data. The TFE Verification Program addresses this concern. The general logic and strategy of the program to achieve its objectives is shown on Fig. 1-1. Five prior programs form the basis for the TFE Verification Program: (1) AEC/NASA program of the 1960s and early 1970s; (2) SP-100 concept development program; (3) SP-100 thermionic technology program; (4) Thermionic irradiations program in TRIGA in FY-86; (5) Thermionic Technology Program in 1986 and 1987. 18 refs., 64 figs., 43 tabs.

  3. Calibration or verification? A balanced approach for science.

    USGS Publications Warehouse

    Myers, C.T.; Kennedy, D.M.

    1997-01-01

    The calibration of balances is routinely performed both in the laboratory and the field. This process is required to accurately determine the weight of an object or chemical. The frequency of calibration and verification of balances is mandated by their use and location. Tolerance limits for balances could not be located in any standard procedure manuals. A survey was conducted to address the issues of calibration and verification frequency and to discuss the significance of defining tolerance limits for balances. Finally, for the benefit of laboratories unfamiliar with such procedures, we provide a working model based on our laboratory, the Upper Mississippi Science Center (UMSC), in La Crosse, Wisconsin.
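    A minimal form of such a balance verification against user-defined tolerance limits (all masses and tolerances here are hypothetical, not UMSC values):

```python
def within_tolerance(measured_g, nominal_g, tol_g):
    """True if a check-weight reading falls inside the balance's tolerance."""
    return abs(measured_g - nominal_g) <= tol_g

# Hypothetical daily verification readings against a 100 g reference mass,
# with a 2 mg tolerance limit chosen for illustration.
readings = [99.9993, 100.0004, 100.0011]
ok = all(within_tolerance(r, 100.0, 0.002) for r in readings)
```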

  4. Verification of ICESat-2/ATLAS Science Receiver Algorithm Onboard Databases

    NASA Astrophysics Data System (ADS)

    Carabajal, C. C.; Saba, J. L.; Leigh, H. W.; Magruder, L. A.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    NASA's ICESat-2 mission will fly the Advanced Topographic Laser Altimetry System (ATLAS) instrument on a 3-year mission scheduled to launch in 2016. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz, and a 6-spot pattern on the Earth's surface. A set of onboard Receiver Algorithms will perform signal processing to reduce the data rate and data volume to acceptable levels. These algorithms distinguish surface echoes from the background noise, limit the daily data volume, and allow the instrument to telemeter only a small vertical region about the signal. For this purpose, three onboard databases are used: a Surface Reference Map (SRM), a Digital Elevation Model (DEM), and Digital Relief Maps (DRMs). The DEM provides minimum and maximum heights that limit the signal search region of the onboard algorithms, including a margin for errors in the source databases and onboard geolocation. Since the surface echoes will be correlated while noise will be randomly distributed, the signal location is found by histogramming the received event times and identifying the histogram bins with statistically significant counts. Once the signal location has been established, the DRMs will be used to determine the vertical width of the telemetry band about the signal. The University of Texas Center for Space Research (UT-CSR) is developing the ICESat-2 onboard databases, which are currently being tested using preliminary versions and equivalent representations of elevation ranges and relief more recently developed at Goddard Space Flight Center (GSFC). Global and regional elevation models have been assessed in terms of their accuracy using ICESat geodetic control, and have been used to develop equivalent representations of the onboard databases for testing against the UT-CSR databases, with special emphasis on the ice sheet regions. A series of verification checks have been implemented, including
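    The histogramming step described above can be sketched as follows, with synthetic photon event times and an assumed Poisson background test (the flight algorithm's parameters and statistics differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic photon event times: uniform background noise plus a narrow
# cluster of surface-echo photons. All numbers are illustrative.
noise = rng.uniform(0.0, 100.0, 2000)
signal = rng.normal(42.0, 0.3, 300)
events = np.concatenate([noise, signal])

counts, edges = np.histogram(events, bins=200, range=(0.0, 100.0))

# Flag bins far above the background level, which is estimated robustly by
# the median bin count; "mean + 5 sigma" under a Poisson background model.
bg = np.median(counts)
flagged = np.flatnonzero(counts > bg + 5.0 * np.sqrt(bg))

# Telemetry band: the span of flagged bins (brackets the surface signal).
signal_window = (edges[flagged.min()], edges[flagged.max() + 1])
```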

  5. Continuous verification using multimodal biometrics.

    PubMed

    Sim, Terence; Zhang, Sheng; Janakiraman, Rajkumar; Kumar, Sandeep

    2007-04-01

    Conventional verification systems, such as those controlling access to a secure room, do not usually require the user to reauthenticate himself for continued access to the protected resource. This may not be sufficient for high-security environments in which the protected resource needs to be continuously monitored for unauthorized use. In such cases, continuous verification is needed. In this paper, we present the theory, architecture, implementation, and performance of a multimodal biometrics verification system that continuously verifies the presence of a logged-in user. Two modalities are currently used--face and fingerprint--but our theory can be readily extended to include more modalities. We show that continuous verification imposes additional requirements on multimodal fusion when compared to conventional verification systems. We also argue that the usual performance metrics of false accept and false reject rates are insufficient yardsticks for continuous verification and propose new metrics against which we benchmark our system. PMID:17299225
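    One generic way to realize continuous verification, not necessarily the fusion rule of this paper, is to let each modality's last match score decay over time so that trust drops when no fresh biometric evidence arrives:

```python
import math

def trust(t_now, observations, half_life_s=30.0):
    # Each observation is (time, match_score, weight). Scores decay with an
    # exponential half-life so stale evidence counts for less; the weights
    # and half-life here are arbitrary illustrative choices.
    lam = math.log(2.0) / half_life_s
    return sum(w * s * math.exp(-lam * (t_now - t)) for t, s, w in observations)

obs = [(0.0, 0.9, 0.6),    # face match at t = 0 s
       (10.0, 0.8, 0.4)]   # fingerprint match at t = 10 s
fresh = trust(12.0, obs)   # shortly after the last observation
stale = trust(120.0, obs)  # long after: trust has decayed
```

    A session manager would then lock the resource whenever the fused score falls below a chosen threshold.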

  6. Quantum money with classical verification

    SciTech Connect

    Gavinsky, Dmitry

    2014-12-04

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries - this property is not directly related to the possibility of classical verification, nevertheless none of the earlier quantum money constructions is known to possess it.

  7. Quantum money with classical verification

    NASA Astrophysics Data System (ADS)

    Gavinsky, Dmitry

    2014-12-01

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries - this property is not directly related to the possibility of classical verification, nevertheless none of the earlier quantum money constructions is known to possess it.

  8. Surfactants in the sea-surface microlayer and sub-surface water at estuarine locations: Their concentration, distribution, enrichment, and relation to physicochemical characteristics.

    PubMed

    Huang, Yun-Jie; Brimblecombe, Peter; Lee, Chon-Lin; Latif, Mohd Talib

    2015-08-15

    Samples of the sea-surface microlayer (SML) and sub-surface water (SSW) were collected from two areas, Kaohsiung City (Taiwan) and the southwest coast of Peninsular Malaysia, to study the influence of the SML on enrichment and distribution and to compare the SML with the SSW. Anionic surfactants (MBAS) predominated in this study and were significantly higher in Kaohsiung than in Malaysia. Industrial areas in Kaohsiung were enriched with high loads of anthropogenic sources, accounted for higher surfactant amounts, and posed greater environmental risks than in Malaysia, where pollutants were associated with agricultural activities. The dissolved organic carbon (DOC), MBAS, and cationic surfactant (DBAS) concentrations in the SML correlated with those in the SSW, reflecting exchanges between the SML and SSW in Kaohsiung. The relationships between surfactants and the physicochemical parameters indicated that DOC and saltwater dilution might affect the distributions of MBAS and DBAS in Kaohsiung. In Malaysia, DOC might be an important factor controlling DBAS. PMID:26093815

  9. Distribution of polychlorinated biphenyls and organochlorine pesticides in human breast milk from various locations in Tunisia: levels of contamination, influencing factors, and infant risk assessment.

    PubMed

    Ennaceur, S; Gandoura, N; Driss, M R

    2008-09-01

    The concentrations of dichlorodiphenyltrichloroethane and its metabolites (DDTs), hexachlorobenzene (HCB), hexachlorocyclohexane isomers (HCHs), dieldrin, and 20 polychlorinated biphenyls (PCBs) were determined in 237 human breast milk samples collected from 12 locations in Tunisia. Gas chromatography with electron capture detector (GC-ECD) was used to identify and quantify residue levels on a lipid basis of organochlorine compounds (OCs). The predominant OCs in human breast milk were PCBs, p,p'-DDE, p,p'-DDT, HCHs, and HCB. Concentrations of DDTs in human breast milk from rural areas were significantly higher than those from urban locations (p<0.05). With regard to PCBs, we observed the predominance of mid-chlorinated congeners due to the presence of PCBs with high K(ow) such as PCB 153, 138, and 180. Positive correlations were found between concentrations of OCs in human breast milk and age of mothers and number of parities, suggesting the influence of such factors on OC burdens in lactating mothers. The comparison of daily intakes of PCBs, DDTs, HCHs, and HCB to infants through human breast milk with guidelines proposed by WHO and Health Canada shows that some individuals accumulated OCs in breast milk close to or higher than these guidelines. PMID:18614165

  10. Distribution of polychlorinated biphenyls and organochlorine pesticides in human breast milk from various locations in Tunisia: Levels of contamination, influencing factors, and infant risk assessment

    SciTech Connect

    Ennaceur, S.; Gandoura, N.; Driss, M.R.

    2008-09-15

    The concentrations of dichlorodiphenyltrichloroethane and its metabolites (DDTs), hexachlorobenzene (HCB), hexachlorocyclohexane isomers (HCHs), dieldrin, and 20 polychlorinated biphenyls (PCBs) were determined in 237 human breast milk samples collected from 12 locations in Tunisia. Gas chromatography with electron capture detector (GC-ECD) was used to identify and quantify residue levels on a lipid basis of organochlorine compounds (OCs). The predominant OCs in human breast milk were PCBs, p,p'-DDE, p,p'-DDT, HCHs, and HCB. Concentrations of DDTs in human breast milk from rural areas were significantly higher than those from urban locations (p<0.05). With regard to PCBs, we observed the predominance of mid-chlorinated congeners due to the presence of PCBs with high K(ow) such as PCB 153, 138, and 180. Positive correlations were found between concentrations of OCs in human breast milk and age of mothers and number of parities, suggesting the influence of such factors on OC burdens in lactating mothers. The comparison of daily intakes of PCBs, DDTs, HCHs, and HCB to infants through human breast milk with guidelines proposed by WHO and Health Canada shows that some individuals accumulated OCs in breast milk close to or higher than these guidelines.

  11. Requirements of Operational Verification of the NWSRFS-ESP Forecasts

    NASA Astrophysics Data System (ADS)

    Imam, B.; Werner, K.; Hartmann, H.; Sorooshian, S.; Pritchard, E.

    2006-12-01

    National Weather Service River Forecast System (NWSRFS). We focus on short (1-15 days) ensemble forecasts and investigate the utility of both simple "single forecast" graphical approaches and analytical "distribution" based measures and their associated diagrams. The presentation also addresses the role of both observation and historical simulation, which is used in initializing hindcasts (retrospective forecasts), for diagnostic verification studies in operational procedures.
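    A standard distribution-based measure for ensemble verification of this kind is the rank (Talagrand) histogram: a flat histogram over many forecasts indicates the observation is statistically indistinguishable from the ensemble members. A minimal sketch with synthetic forecasts:

```python
import numpy as np

rng = np.random.default_rng(1)

def rank_histogram(obs, ens):
    # Rank of each observation within its ensemble (0..n_members); a flat
    # histogram over many forecasts indicates well-calibrated spread.
    ranks = (ens < obs[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

obs = rng.normal(0.0, 1.0, 500)           # synthetic observations
ens = rng.normal(0.0, 1.0, (500, 9))      # synthetic 9-member ensemble
hist = rank_histogram(obs, ens)           # 10 bins, summing to 500
```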

  12. Automating engineering verification in ALMA subsystems

    NASA Astrophysics Data System (ADS)

    Ortiz, José; Castillo, Jorge

    2014-08-01

The Atacama Large Millimeter/submillimeter Array is an interferometer comprising 66 individual high-precision antennas located at over 5000 meters altitude in the north of Chile. Several complex electronic subsystems need to be meticulously tested at different stages of antenna commissioning, both independently and when integrated together. The first subsystem integration takes place at the Operations Support Facilities (OSF), at an altitude of 3000 meters. The second integration occurs at the high-altitude Array Operations Site (AOS), where combined performance with the Central Local Oscillator (CLO) and the Correlator is also assessed. In addition, several other events require complete or partial verification of compliance with instrument specifications, such as parts replacement, calibration, relocation within the AOS, preventive maintenance, and troubleshooting of poor performance in scientific observations. Restricted engineering time allocation and the constant pressure to minimize downtime in a 24/7 astronomical observatory impose the need to complete (and report) the aforementioned verifications in the least possible time. Array-wide disturbances, such as global power interruptions and the subsequent recovery, add the challenge of executing this checkout on multiple antenna elements at once. This paper presents the outcome of automating the setup, execution, notification, and reporting of engineering verification in ALMA, and how these efforts have resulted in a dramatic reduction of both the time and the operator training required. The Signal Path Connectivity (SPC) checkout is introduced as a notable case of such automation.

  13. TFE Verification Program

    SciTech Connect

    Not Available

    1993-05-01

    The objective of the semiannual progress report is to summarize the technical results obtained during the latest reporting period. The information presented herein will include evaluated test data, design evaluations, the results of analyses and the significance of results. The program objective is to demonstrate the technology readiness of a TFE (thermionic fuel element) suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range, and a full-power life of 7 years. The TFE Verification Program builds directly on the technology and data base developed in the 1960s and early 1970s in an AEC/NASA program, and in the SP-100 program conducted in 1983, 1984 and 1985. In the SP-100 program, the attractive features of thermionic power conversion technology were recognized but concern was expressed over the lack of fast reactor irradiation data. The TFE Verification Program addresses this concern.

  14. Deductive Verification of Cryptographic Software

    NASA Technical Reports Server (NTRS)

    Almeida, Jose Barcelar; Barbosa, Manuel; Pinto, Jorge Sousa; Vieira, Barbara

    2009-01-01

    We report on the application of an off-the-shelf verification platform to the RC4 stream cipher cryptographic software implementation (as available in the openSSL library), and introduce a deductive verification technique based on self-composition for proving the absence of error propagation.

  15. HDL to verification logic translator

    NASA Astrophysics Data System (ADS)

    Gambles, J. W.; Windley, P. J.

The increasing number of transistors in VLSI circuits compounds the difficulty of ensuring correct designs. As the number of test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools in use in the research community must be integrated with the CAD tools used by designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by designers are not appropriate for verification work, and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  16. Software Verification and Validation Procedure

    SciTech Connect

    Olund, Thomas S.

    2008-09-15

    This Software Verification and Validation procedure provides the action steps for the Tank Waste Information Network System (TWINS) testing process. The primary objective of the testing process is to provide assurance that the software functions as intended, and meets the requirements specified by the client. Verification and validation establish the primary basis for TWINS software product acceptance.

  17. HDL to verification logic translator

    NASA Technical Reports Server (NTRS)

    Gambles, J. W.; Windley, P. J.

    1992-01-01

The increasing number of transistors in VLSI circuits compounds the difficulty of ensuring correct designs. As the number of test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools in use in the research community must be integrated with the CAD tools used by designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by designers are not appropriate for verification work, and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  18. Method and system for determining depth distribution of radiation-emitting material located in a source medium and radiation detector system for use therein

    DOEpatents

    Benke, Roland R.; Kearfott, Kimberlee J.; McGregor, Douglas S.

    2003-03-04

A method, system and radiation detector system for use therein are provided for determining the depth distribution of radiation-emitting material distributed in a source medium, such as a contaminated field, without the need to take samples, such as extensive soil samples, to determine the depth distribution. The system includes a portable detector assembly with an x-ray or gamma-ray detector having a detector axis for detecting the emitted radiation. The radiation may be naturally emitted by the material, such as by gamma-ray-emitting radionuclides, or emitted when the material is struck by other radiation. The assembly also includes a hollow collimator in which the detector is positioned. The collimator admits to the detector only radiation arriving as rays parallel to the detector axis. The collimator may be a hollow cylinder positioned so that its central axis is perpendicular to the upper surface of the large-area source when positioned thereon. The collimator allows the detector to angularly sample the emitted radiation over many ranges of polar angles. This is done by forming the collimator as a single adjustable collimator or as a set of collimator pieces with various possible configurations when connected together. In any one configuration, the collimator allows the detector to detect only the radiation emitted from a selected range of polar angles measured from the detector axis. Adjustment of the collimator, or of the detector within it, enables the detector to detect radiation emitted from a different range of polar angles. The system further includes a signal processor for processing the signals from the detector, wherein signals obtained from different ranges of polar angles are processed together to obtain a reconstruction of the radiation-emitting material as a function of depth, assuming, but not limited to, a spatially uniform depth distribution of the material within each layer. The detector system includes detectors having
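The depth-reconstruction idea in this patent abstract, combining counts from several polar-angle ranges to solve for per-layer activity, can be sketched as a small linear inversion. The response matrix and layer activities below are entirely hypothetical illustrations, not values from the patent.

```python
import numpy as np

# Hypothetical response matrix: entry [i, j] is the relative sensitivity of
# collimator configuration i (one polar-angle range) to source layer j.
# Deeper layers contribute less, and each configuration weights the
# layers differently, which is what makes the inversion possible.
A = np.array([
    [1.00, 0.30, 0.10, 0.03],
    [0.70, 0.50, 0.25, 0.10],
    [0.40, 0.45, 0.40, 0.25],
    [0.20, 0.30, 0.40, 0.40],
    [0.10, 0.15, 0.30, 0.50],
])

true_depth_profile = np.array([3.0, 1.5, 0.5, 0.1])  # activity per layer (arbitrary units)
counts = A @ true_depth_profile                      # ideal, noise-free detector counts

# Reconstruct the depth distribution by ordinary least squares.
recovered, *_ = np.linalg.lstsq(A, counts, rcond=None)
```

With noisy counts a regularized or non-negative solver would be preferable; the noise-free case simply shows that five angular samples over-determine four layers.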

  19. Distribution and abundance of zooplankton at selected locations on the Savannah River and from tributaries of the Savannah River Plant: December 1984--August 1985

    SciTech Connect

    Chimney, M.J.; Cody, W.R.

    1986-11-01

Spatial and temporal differences in the abundance and composition of the zooplankton community occurred at Savannah River and SRP creek/swamp sampling locations. Stations are grouped into four categories based on differences in community structure: Savannah River stations; thermally influenced stations on Four Mile Creek and Pen Branch; closed-canopy stations in the Steel Creek system; and open-canopy Steel Creek stations together with non-thermally influenced stations on Pen Branch and Beaver Dam Creek. Differences among stations were only weakly related to water temperature, dissolved oxygen concentration, conductivity, or pH at the time of collection. None of these parameters appeared to be limiting. Rather, past thermal history and habitat structure seemed to be the important controlling factors. 66 refs.

  20. Towards an in-situ measurement of wave velocity in buried plastic water distribution pipes for the purposes of leak location

    NASA Astrophysics Data System (ADS)

    Almeida, Fabrício C. L.; Brennan, Michael J.; Joseph, Phillip F.; Dray, Simon; Whitfield, Stuart; Paschoalini, Amarildo T.

    2015-12-01

Water companies are under constant pressure to ensure that water leakage is kept to a minimum. Leak noise correlators are often used to help find and locate leaks. These devices correlate acoustic or vibration signals from sensors placed on either side of the location of a suspected leak. The peak in the cross-correlation function of the measured signals gives the difference between the arrival times of the leak noise at the sensors. To convert the time delay into a distance, the speed at which the leak noise propagates along the pipe (the wave-speed) needs to be known. Often, this is estimated from historical wave-speed data measured on other pipes at various times and under various conditions, or from tables calculated using simple formulas. Usually, the wave-speed is not measured directly at the time of the correlation measurement and is therefore potentially a source of significant error in the localisation of the leak. In this paper, a new method of measuring the wave-speed in-situ in the presence of a leak, one that is robust and simple, is explored. Experiments were conducted on a bespoke large-scale buried-pipe test rig, in which a leak was induced in the pipe between the measurement positions to simulate a condition likely to occur in practice. It is shown that even in conditions where the signal-to-noise ratio is very poor, the wave-speed estimate calculated using the new method differs by less than 5% from the best estimate of 387 m/s.
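The correlator principle described above, where the peak of the cross-correlation gives the arrival-time difference and the wave-speed converts that delay to a position, can be sketched numerically. Everything below except the 387 m/s wave-speed estimate (sample rate, sensor spacing, leak position, noise levels) is a hypothetical illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000.0   # sample rate, Hz (hypothetical)
c = 387.0     # wave speed, m/s (the best estimate reported above)
L = 100.0     # sensor separation, m (hypothetical)
d1 = 30.0     # true leak position from sensor 1, m (hypothetical)

# Leak noise arrives at sensor 1 after d1/c and at sensor 2 after (L - d1)/c,
# so the arrival-time difference is tau = (L - 2*d1)/c.
tau_true = (L - 2.0 * d1) / c
delay = int(round(tau_true * fs))   # delay in whole samples

n = 4000
leak = rng.standard_normal(n + delay)              # broadband leak noise
x1 = leak[delay:] + 0.3 * rng.standard_normal(n)   # sensor 1: earlier arrival
x2 = leak[:n] + 0.3 * rng.standard_normal(n)       # sensor 2: delayed copy

# The cross-correlation peaks at the lag of x2 relative to x1.
cc = np.correlate(x2, x1, mode="full")
lag = np.argmax(cc) - (n - 1)
tau_est = lag / fs

# Convert the time delay back to a leak position along the pipe.
d1_est = (L - c * tau_est) / 2.0
```

With these synthetic numbers the recovered position tracks the true 30 m closely; in practice the accuracy is dominated by the wave-speed error, which is precisely the paper's motivation for measuring the wave-speed in situ.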

  1. Cleanup Verification Package for the 118-F-6 Burial Ground

    SciTech Connect

    H. M. Sulloway

    2008-10-02

    This cleanup verification package documents completion of remedial action for the 118-F-6 Burial Ground located in the 100-FR-2 Operable Unit of the 100-F Area on the Hanford Site. The trenches received waste from the 100-F Experimental Animal Farm, including animal manure, animal carcasses, laboratory waste, plastic, cardboard, metal, and concrete debris as well as a railroad tank car.

  2. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples

    2001-12-01

This annual technical progress report covers part of Task 4 (site evaluation), Task 5 (2D seismic design, acquisition, and processing), and Task 6 (2D seismic reflection, interpretation, and AVO analysis) on DOE contract number DE-AR26-98FT40369. The project had planned one additional deployment to a site other than the Savannah River Site (SRS) or the DOE Hanford Site. After the SUBCON midyear review in Albuquerque, NM, it was decided that two additional deployments would be performed. The first deployment is to test the feasibility of using non-invasive seismic reflection and AVO analysis as a monitoring tool to assist in determining the effectiveness of Dynamic Underground Stripping (DUS) in removal of DNAPL. The second deployment is to the Department of Defense (DOD) Charleston Naval Weapons Station Solid Waste Management Unit 12 (SWMU-12), Charleston, SC, to further test the technique's ability to detect high concentrations of DNAPL. The Charleston Naval Weapons Station SWMU-12 site was selected in consultation with National Energy Technology Laboratory (NETL) and DOD Naval Facilities Engineering Command Southern Division (NAVFAC) personnel. Based upon a review of existing data, and because of the shallow target depth, the project team collected three Vertical Seismic Profiles (VSPs) and an experimental P-wave seismic reflection line. After preliminary analysis of the VSP data and the experimental reflection line, it was decided to proceed with Task 5 and Task 6. Three high-resolution P-wave reflection profiles were collected with two objectives: (1) design the reflection survey to image a target depth of 20 feet below land surface to assist in determining the geologic controls on the DNAPL plume geometry, and (2) apply AVO analysis to the seismic data to locate the zone of high DNAPL concentration. Based upon the results of the data processing and interpretation of the seismic data, the project team was able to map the channel that is controlling the DNAPL plume.

  3. Cancelable face verification using optical encryption and authentication.

    PubMed

    Taheri, Motahareh; Mozaffari, Saeed; Keshavarzi, Parviz

    2015-10-01

In a cancelable biometric system, each instance of enrollment is distorted by a transform function, and the output should not be retransformable to the original data. This paper presents a new cancelable face verification system in the encrypted domain. Encrypted facial images are generated by a double random phase encoding (DRPE) algorithm using two keys (RPM1 and RPM2). To make the system noninvertible, a photon counting (PC) method is utilized, which requires a photon distribution mask (PDM) for information reduction. Verification of sparse images that are not recognizable by direct visual inspection is performed with an unconstrained minimum average correlation energy filter. In the proposed method, the encryption keys (RPM1, RPM2, and PDM) are used on the sender side, and the receiver needs only the encrypted images and correlation filters. In this manner, the system preserves privacy even if the correlation filters are obtained by an adversary. Performance of the PC-DRPE verification system is evaluated under illumination variation, pose changes, and facial expression. Experimental results show that utilizing encrypted images not only strengthens security but also enhances verification performance. This improvement can be attributed to the fact that, in the proposed system, the face verification problem is converted into a key verification task. PMID:26479930
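The DRPE step itself is a standard optical-encryption construction and easy to sketch; the photon-counting stage that makes the published scheme noninvertible is omitted here, so this fragment (with a hypothetical image size and random stand-in image) only illustrates the two-key phase encoding and its key-based decryption.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64)

face = rng.random(shape)                       # stand-in for a normalized face image
rpm1 = np.exp(2j * np.pi * rng.random(shape))  # key 1: input-plane random phase mask
rpm2 = np.exp(2j * np.pi * rng.random(shape))  # key 2: Fourier-plane random phase mask

# Encrypt: multiply by RPM1, transform to the Fourier plane, multiply by RPM2,
# and transform back. The result is a complex, noise-like field.
encrypted = np.fft.ifft2(np.fft.fft2(face * rpm1) * rpm2)

# Decrypt with the Fourier-plane key: undo RPM2 in the Fourier plane, come back,
# and take the magnitude (which removes the remaining RPM1 phase).
decrypted = np.abs(np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(rpm2)))

err = np.max(np.abs(decrypted - face))
```

Without `rpm2` the magnitude stays noise-like, which is what makes the masks usable as keys; the paper's photon-counting reduction would then be applied on top of `encrypted`.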

  4. Trajectory Based Behavior Analysis for User Verification

    NASA Astrophysics Data System (ADS)

    Pao, Hsing-Kuo; Lin, Hong-Yi; Chen, Kuan-Ta; Fadlil, Junaidillah

Many of our activities on computers require a verification step for authorized access. The goal of verification is to distinguish the true account owner from intruders. We propose a general approach to user verification based on user trajectory inputs. The approach is labor-free for users and is likely to resist copying or simulation by non-authorized users or even automatic programs such as bots. Our study focuses on finding the hidden patterns embedded in the trajectories produced by account users. We employ a Markov chain model with Gaussian distributions in its transitions to describe the behavior in a trajectory. To distinguish between two trajectories, we propose a novel dissimilarity measure combined with a manifold-learned tuning for capturing the pairwise relationship. Based on the pairwise relationship, we can plug in any effective classification or clustering method for the detection of unauthorized access. The method can also be applied to the task of recognition, predicting the trajectory type without a pre-defined identity. Given a trajectory input, the results show that the proposed method can accurately verify the user's identity, or suggest who owns the trajectory if the input identity is not provided.
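As a rough, simplified stand-in for the model described above (a single Gaussian over trajectory steps rather than the full Markov chain with manifold-learned dissimilarity), one can score how likely a trajectory's movements are under an enrolled user's behavior. All names, drifts, and noise levels are hypothetical.

```python
import numpy as np

def step_model(trajectories):
    """Fit a Gaussian over displacement steps pooled from enrollment trajectories."""
    steps = np.vstack([np.diff(t, axis=0) for t in trajectories])
    mu = steps.mean(axis=0)
    cov = np.cov(steps, rowvar=False) + 1e-6 * np.eye(steps.shape[1])
    return mu, cov

def avg_log_likelihood(traj, mu, cov):
    """Average Gaussian log-likelihood of a trajectory's steps under the model."""
    steps = np.diff(traj, axis=0)
    k = steps.shape[1]
    diff = steps - mu
    inv = np.linalg.inv(cov)
    mahal = np.einsum("ni,ij,nj->n", diff, inv, diff)
    logpdf = -0.5 * (k * np.log(2 * np.pi) + np.log(np.linalg.det(cov)) + mahal)
    return logpdf.mean()

rng = np.random.default_rng(1)

def make_traj(drift, n=200):
    # Cumulative sum of noisy drift steps: a simple synthetic trajectory.
    return np.cumsum(drift + 0.1 * rng.standard_normal((n, 2)), axis=0)

owner_enroll = [make_traj(np.array([1.0, 0.0])) for _ in range(5)]
mu, cov = step_model(owner_enroll)

genuine = make_traj(np.array([1.0, 0.0]))   # same movement habit as the owner
intruder = make_traj(np.array([0.0, 1.0]))  # a different movement habit

score_genuine = avg_log_likelihood(genuine, mu, cov)
score_intruder = avg_log_likelihood(intruder, mu, cov)
```

Thresholding the score turns this into an accept/reject decision; the paper's pairwise dissimilarity plus a classifier plays the analogous role with far richer structure.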

  5. 28 CFR 74.6 - Location of eligible persons.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Location of eligible persons. 74.6 Section 74.6 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL LIBERTIES ACT REDRESS PROVISION Verification of Eligibility § 74.6 Location of eligible persons. The Office shall compare...

  6. 28 CFR 74.6 - Location of eligible persons.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Location of eligible persons. 74.6 Section 74.6 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL LIBERTIES ACT REDRESS PROVISION Verification of Eligibility § 74.6 Location of eligible persons. The Office shall compare...

  7. Online fingerprint verification.

    PubMed

    Upendra, K; Singh, S; Kumar, V; Verma, H K

    2007-01-01

As organizations search for more secure authentication methods for user access, e-commerce, and other security applications, biometrics is gaining increasing attention. With the growing emphasis on emerging automatic personal identification applications, fingerprint-based identification is becoming more popular. The most widely used fingerprint representation is the minutiae-based representation. The main drawback of this representation is that it does not utilize a significant component of the rich discriminatory information available in fingerprints: local ridge structures cannot be completely characterized by minutiae. It is also difficult to quickly match two fingerprint images containing different numbers of unregistered minutiae points. In this study a filter-bank-based representation, which eliminates these weaknesses, is implemented and the overall performance of the developed system is tested. The results have shown that this system can be used effectively for secure online verification applications. PMID:17365425

  8. Using multi-scale distribution and movement effects along a montane highway to identify optimal crossing locations for a large-bodied mammal community

    PubMed Central

    Römer, Heinrich; Germain, Ryan R.

    2013-01-01

Roads are a major cause of habitat fragmentation that can negatively affect many mammal populations. Mitigation measures such as crossing structures are a proposed method of reducing the negative effects of roads on wildlife, but the best methods for determining where such structures should be placed, and how their effects might differ between species in mammal communities, are largely unknown. We investigated the effects of a major highway through south-eastern British Columbia, Canada, on several mammal species to determine how the highway may act as a barrier to animal movement and how species may differ in their crossing-area preferences. We collected track data for eight mammal species across two winters, along both the highway and pre-marked transects, and used a multi-scale modeling approach to determine the scale at which habitat characteristics best predicted preferred crossing sites for each species. We found evidence for a severe barrier effect on all investigated species. Freely available remotely sensed habitat landscape data were better than more costly, manually digitized microhabitat maps in supporting models that identified preferred crossing sites; however, models using both types of data were better yet. Further, in 6 of 8 cases models incorporating multiple spatial scales were better at predicting preferred crossing sites than models using any single scale. While each species differed in the landscape variables associated with preferred or avoided crossing sites, we used a multi-model inference approach to identify locations along the highway where crossing structures may benefit all of the species considered. By specifically incorporating both highway and off-highway data and predictions, we were able to show that landscape context plays an important role in maximizing mitigation measure efficiency. Our results further highlight the need for mitigation measures along major highways to improve connectivity between mammal

  9. Experiments for locating damaged truss members in a truss structure

    NASA Technical Reports Server (NTRS)

    Mcgowan, Paul E.; Smith, Suzanne W.; Javeed, Mehzad

    1991-01-01

    Locating damaged truss members in large space structures will involve a combination of sensing and diagnostic techniques. Methods developed for damage location require experimental verification prior to on-orbit applications. To this end, a series of experiments for locating damaged members using a generic, ten bay truss structure were conducted. A 'damaged' member is a member which has been removed entirely. Previously developed identification methods are used in conjunction with the experimental data to locate damage. Preliminary results to date are included, and indicate that mode selection and sensor location are important issues for location performance. A number of experimental data sets representing various damage configurations were compiled using the ten bay truss. The experimental data and the corresponding finite element analysis models are available to researchers for verification of various methods of structure identification and damage location.

  10. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples

    2001-05-01

This semi-annual technical progress report covers Task 4 (site evaluation), Task 5 (seismic reflection design and acquisition), and Task 6 (seismic reflection processing and interpretation) on DOE contract number DE-AR26-98FT40369. The project had planned one additional deployment to a site other than the Savannah River Site (SRS) or DOE Hanford. During this reporting period the project had an ASME peer review. The findings and recommendations of the review panel, as well as the project team's responses to comments, are in Appendix A. After the SUBCON midyear review in Albuquerque, NM, and the peer review, it was decided that two additional deployments would be performed. The first deployment is to test the feasibility of using non-invasive seismic reflection and AVO analysis as a monitoring tool to assist in determining the effectiveness of Dynamic Underground Stripping (DUS) in removal of DNAPL. Under the rescope of the project, Task 4 would be performed at the Charleston Naval Weapons Station, Charleston, SC, and not at the Dynamic Underground Stripping (DUS) project at SRS. The project team had already completed Task 4 at the M-area seepage basin, only a few hundred yards away from the DUS site. Because the geology is the same, repeating Task 4 was not necessary. However, a Vertical Seismic Profile (VSP) was conducted in one well to calibrate the geology to the seismic data. The first deployment to the DUS site (Tasks 5 and 6) has been completed. Once the steam has been turned off, these tasks will be performed again to compare the results to the pre-steam data. The results from the first deployment to the DUS site indicated a seismic amplitude anomaly at the location and depths of the known high concentrations of DNAPL. The deployment to another site with different geologic conditions was supposed to occur during this reporting period. The first site selected was DOE Paducah, Kentucky. After almost eight months of negotiation, site access was denied, requiring the selection of another site.

  11. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  12. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  13. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  14. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  15. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within...

  16. Distributed computing

    SciTech Connect

    Chambers, F.B.; Duce, D.A.; Jones, G.P.

    1984-01-01

CONTENTS: The dataflow approach: Fundamentals of dataflow. Architecture and performance. Assembler-level programming. High-level dataflow programming. Declarative systems: Functional programming. Logic programming and Prolog. The "language first" approach. Towards a successor to von Neumann. Loosely-coupled systems: Architectures. Communications. Distributed filestores. Mechanisms for distributed control. Distributed operating systems. Programming languages. Closely-coupled systems: Architecture. Programming languages. Run-time support. Development aids. Cyba-M. Polyproc. Modeling and verification: Using algebra for concurrency. Reasoning about concurrent systems. Each chapter includes references. Index.

  17. 7 CFR 272.13 - Prisoner verification system (PVS).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 4 2014-01-01 2014-01-01 false Prisoner verification system (PVS). 272.13 Section 272.13 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM REQUIREMENTS FOR PARTICIPATING...

  18. 7 CFR 272.13 - Prisoner verification system (PVS).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 4 2013-01-01 2013-01-01 false Prisoner verification system (PVS). 272.13 Section 272.13 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM REQUIREMENTS FOR PARTICIPATING...

  19. Biometric verification with correlation filters.

    PubMed

    Vijaya Kumar, B V K; Savvides, Marios; Xie, Chunyan; Venkataramani, Krithika; Thornton, Jason; Mahalanobis, Abhijit

    2004-01-10

    Using biometrics for subject verification can significantly improve security over that of approaches based on passwords and personal identification numbers, both of which people tend to lose or forget. In biometric verification the system tries to match an input biometric (such as a fingerprint, face image, or iris image) to a stored biometric template. Thus correlation filter techniques are attractive candidates for the matching precision needed in biometric verification. In particular, advanced correlation filters, such as synthetic discriminant function filters, can offer very good matching performance in the presence of variability in these biometric images (e.g., facial expressions, illumination changes, etc.). We investigate the performance of advanced correlation filters for face, fingerprint, and iris biometric verification. PMID:14735958
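A minimal sketch of correlation-filter matching, assuming a plain matched filter and the usual peak-to-sidelobe ratio (PSR) decision statistic rather than the advanced SDF/MACE designs investigated in the paper; the "biometric" images below are random stand-ins, not real face, fingerprint, or iris data.

```python
import numpy as np

rng = np.random.default_rng(2)

def psr(template, probe):
    """Peak-to-sidelobe ratio of the FFT-based circular cross-correlation plane."""
    plane = np.fft.ifft2(np.fft.fft2(probe) * np.conj(np.fft.fft2(template))).real
    peak_idx = np.unravel_index(np.argmax(plane), plane.shape)
    peak = plane[peak_idx]
    # Exclude a small region around the peak, then measure the sidelobe statistics.
    mask = np.ones(plane.shape, dtype=bool)
    r, c = peak_idx
    mask[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3] = False
    side = plane[mask]
    return (peak - side.mean()) / side.std()

enrolled = rng.standard_normal((64, 64))            # stand-in for an enrolled biometric
genuine = np.roll(enrolled, (5, 9), axis=(0, 1))    # same pattern, shifted
genuine = genuine + 0.2 * rng.standard_normal((64, 64))
impostor = rng.standard_normal((64, 64))            # unrelated pattern

psr_genuine = psr(enrolled, genuine)
psr_impostor = psr(enrolled, impostor)
```

A genuine probe yields a sharp correlation peak and a large PSR even when shifted and noisy, while an impostor's plane is flat; verification then reduces to a PSR threshold. SDF-type filters extend this by building the template from many training images to tolerate expression and illumination changes.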

  20. Biometric verification with correlation filters

    NASA Astrophysics Data System (ADS)

    Vijaya Kumar, B. V. K.; Savvides, Marios; Xie, Chunyan; Venkataramani, Krithika; Thornton, Jason; Mahalanobis, Abhijit

    2004-01-01

    Using biometrics for subject verification can significantly improve security over that of approaches based on passwords and personal identification numbers, both of which people tend to lose or forget. In biometric verification the system tries to match an input biometric (such as a fingerprint, face image, or iris image) to a stored biometric template. Thus correlation filter techniques are attractive candidates for the matching precision needed in biometric verification. In particular, advanced correlation filters, such as synthetic discriminant function filters, can offer very good matching performance in the presence of variability in these biometric images (e.g., facial expressions, illumination changes, etc.). We investigate the performance of advanced correlation filters for face, fingerprint, and iris biometric verification.

  1. Verification and Validation for Flight-Critical Systems (VVFCS)

    NASA Technical Reports Server (NTRS)

    Graves, Sharon S.; Jacobsen, Robert A.

    2010-01-01

On March 31, 2009 a Request for Information (RFI) was issued by NASA's Aviation Safety Program to gather input on the subject of Verification and Validation (V&V) of Flight-Critical Systems. The responses were provided to NASA on or before April 24, 2009. The RFI asked for comments in three topic areas: Modeling and Validation of New Concepts for Vehicles and Operations; Verification of Complex Integrated and Distributed Systems; and Software Safety Assurance. There were a total of 34 responses to the RFI, representing a cross-section of academia (26%), small and large industry (47%), and government agencies (27%).

  2. Evaluation of 3D pre-treatment verification for volumetric modulated arc therapy plan in head region

    NASA Astrophysics Data System (ADS)

    Ruangchan, S.; Oonsiri, S.; Suriyapee, S.

    2016-03-01

    The development of pre-treatment QA tools enables three-dimensional (3D) dose verification by combining calculation software with a measured planar dose distribution. This research aims to evaluate the Sun Nuclear 3DVH software against thermoluminescence dosimeter (TLD) measurements. Two VMAT patient plans (2.5 arcs) of 6 MV photons with different PTV locations were transferred to Rando phantom images. The PTV of the first plan was located in a homogeneous area; that of the second plan was not. For the treatment planning process, the Rando phantom images were used for optimization and calculation, with the PTV, brain stem, lens, and TLD positions contoured. Verification plans were created and transferred to the ArcCHECK for measurement, and the 3D dose was calculated using the 3DVH software. The percent dose differences in both the PTV and organs at risk (OAR) between TLD and the 3DVH software ranged from -2.09 to 3.87% for the first plan and from -1.39 to 6.88% for the second. The mean percent dose differences for the PTV were 1.62% and 3.93% for the first and second plans, respectively. In conclusion, the 3DVH software shows good agreement with TLD when the tumor is located in a homogeneous area.
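The point-by-point comparison reported above reduces to a percent dose difference at each TLD position. A minimal sketch, using made-up dose values rather than the study's data:

```python
# Hedged sketch of a TLD-vs-software dose comparison: per-point percent
# difference of the reconstructed dose relative to the measured dose.
# All dose values below are hypothetical illustrations.

def percent_differences(measured, computed):
    """Per-point percent difference of computed dose relative to measured."""
    return [100.0 * (c - m) / m for m, c in zip(measured, computed)]

tld_dose   = [2.00, 1.95, 0.50, 0.10]   # Gy, hypothetical TLD readings
recon_dose = [2.04, 1.93, 0.51, 0.10]   # Gy, hypothetical 3DVH values

diffs = percent_differences(tld_dose, recon_dose)
mean_abs = sum(abs(d) for d in diffs) / len(diffs)
```

Summarizing the signed differences as a range and the absolute differences as a mean mirrors how the abstract reports its -2.09 to 6.88% spreads and per-plan means.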

  3. Numerical modelling and verification of Polish ventricular assist device.

    PubMed

    Milenin, Andrzej; Kopernik, Magdalena; Jurkojć, Dorota; Gawlikowski, Maciej; Rusin, Tomasz; Darłak, Maciej; Kustosz, Roman

    2012-01-01

    A multiscale model of the blood chamber of the POLVAD (Polish ventricular assist device) is introduced. A tension test for the polymer and digital image correlation (DIC) were performed to verify the strains and displacements obtained in the numerical model of the POLVAD_EXT. The numerical simulations were carried out under the conditions of the experiment so that the results could be compared on the external surfaces of the POLVAD_EXT blood chamber. The polymer used in the POLVADs is sensitive to changes of temperature, and this observation is accounted for in all of the numerical models. The comparison of experimental and numerical results shows acceptable agreement, although the experimental strain distributions are somewhat heterogeneous relative to the computed parameters. A numerical comparison of the two versions of the blood chamber (POLVAD and POLVAD_EXT) shows that the POLVAD_EXT design is better with respect to strain and stress: the maximum values of the computed parameters are located in the regions between connectors on the internal surfaces of the POLVAD blood chamber. PMID:23140381

  4. Expose : procedure and results of the joint experiment verification tests

    NASA Astrophysics Data System (ADS)

    Panitz, C.; Rettberg, P.; Horneck, G.; Rabbow, E.; Baglioni, P.

    The International Space Station will carry the EXPOSE facility, accommodated at the universal workplace URM-D located outside the Russian Service Module. The launch is planned for 2005, with a stay in space of 1.5 years. The tray-like structure will accommodate 2 chemical and 6 biological PI-experiments or experiment systems of the ROSE (Response of Organisms to Space Environment) consortium. EXPOSE will support long-term in situ studies of microbes in artificial meteorites, as well as of microbial communities from special ecological niches, such as endolithic and evaporitic ecosystems. The experiment pockets, either vented or sealed, will be covered by an optical filter system to control the intensity and spectral range of solar UV irradiation; control of sun exposure will be achieved by the use of individual shutters. To test the compatibility of the different biological systems and their adaptation to the opportunities and constraints of space conditions, a thorough ground support program has been developed. The procedure and first results of these joint Experiment Verification Tests (EVT) will be presented. The tests are essential for the success of the EXPOSE mission and have been performed in parallel with the development and construction of the final hardware design of the facility. The results of the mission will contribute to the understanding of organic chemistry processes in space, of biological adaptation strategies to extreme conditions (e.g., on the early Earth and Mars), and of the distribution of life beyond its planet of origin.

  5. Thoughts on Verification of Nuclear Disarmament

    SciTech Connect

    Dunlop, W H

    2007-09-26

    perfect, and it was expected that occasionally there might be a verification measurement slightly above 150 kt. But the accuracy was much improved over the earlier seismic measurements. In fact, some of this improvement came about because, as part of this verification protocol, the US and Soviet Union provided the yields of several past tests to improve seismic calibrations; this provided a much-needed calibration for the seismic measurements. It was also accepted that, since nuclear tests were to a large extent R&D-related, occasionally a test might come in slightly above 150 kt, as the yield could not always be predicted with high accuracy in advance of the test. While one could hypothesize that the Soviets could test at some location other than their test sites, such a test, even at a small fraction of 150 kt, would clearly be observed and would be a violation of the treaty. So the issue of clandestine tests of significance was easily covered for this particular treaty.
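The calibration step described above, using announced yields of past tests to tie seismic magnitude to yield, can be sketched with the standard magnitude-yield relation mb = a + b·log10(Y). All shot data and fitted constants below are invented for illustration, not actual treaty-monitoring values:

```python
# Illustrative seismic yield calibration sketch (hypothetical data):
# fit mb = a + b*log10(Y) to announced calibration shots, then invert
# the fit to estimate yield from a newly measured magnitude.

import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x, returning (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical calibration shots: announced yield (kt) and observed mb
yields = [20.0, 50.0, 100.0, 150.0]
mags   = [4.7, 5.0, 5.2, 5.33]

a, b = fit_line([math.log10(y) for y in yields], mags)

def estimated_yield(mb):
    """Invert the calibration to estimate yield (kt) from magnitude."""
    return 10 ** ((mb - a) / b)
```

With such a fit in hand, a measured magnitude can be converted to an estimated yield and compared against the 150 kt threshold, which is why the exchanged calibration data improved verification confidence.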

  6. Organics Verification Study for Sinclair and Dyes Inlets, Washington

    SciTech Connect

    Kohn, Nancy P.; Brandenberger, Jill M.; Niewolny, Laurie A.; Johnston, Robert K.

    2006-09-28

    Sinclair and Dyes Inlets near Bremerton, Washington, are on the State of Washington 1998 303(d) list of impaired waters because of fecal coliform contamination in marine water, metals in sediment and fish tissue, and organics in sediment and fish tissue. Because significant cleanup and source control activities have been conducted in the inlets since the data supporting the 1998 303(d) listings were collected, two verification studies were performed to address the 303(d) segments that were listed for metal and organic contaminants in marine sediment. The Metals Verification Study (MVS) was conducted in 2003; the final report, Metals Verification Study for Sinclair and Dyes Inlets, Washington, was published in March 2004 (Kohn et al. 2004). This report describes the Organics Verification Study that was conducted in 2005. The study approach was similar to the MVS in that many surface sediment samples were screened for the major classes of organic contaminants, and then the screening results and other available data were used to select a subset of samples for quantitative chemical analysis. Because the MVS was designed to obtain representative data on concentrations of contaminants in surface sediment throughout Sinclair Inlet, Dyes Inlet, Port Orchard Passage, and Rich Passage, aliquots of the 160 MVS sediment samples were used in the analysis for the Organics Verification Study. However, unlike metals screening methods, organics screening methods are not specific to individual organic compounds, and are not available for some target organics. Therefore, only the quantitative analytical results were used in the organics verification evaluation. The results of the Organics Verification Study showed that sediment quality outside of Sinclair Inlet is unlikely to be impaired because of organic contaminants. Similar to the results for metals, in Sinclair Inlet, the distribution of residual organic contaminants is generally limited to nearshore areas already within the
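The two-stage design described above, cheap screening of many samples followed by quantitative analysis of a triggered subset, can be sketched as follows. The sample IDs, screening values, and trigger level are hypothetical, not the study's data:

```python
# Sketch of screen-then-quantify sample selection (hypothetical data):
# screen all sediment samples with an inexpensive method, then select
# only those at or above a trigger level for full chemical analysis.

def select_for_quantitation(screening, trigger):
    """Return sample IDs whose screening value meets or exceeds the trigger."""
    return sorted(sid for sid, value in screening.items() if value >= trigger)

# Hypothetical screening results (arbitrary units) for surface samples
screening = {"S-01": 0.2, "S-02": 1.8, "S-03": 0.9, "S-04": 2.4}

subset = select_for_quantitation(screening, trigger=1.0)
```

This mirrors the MVS workflow the study reused; as the abstract notes, the organics evaluation ultimately had to rely on the quantitative results alone because organics screening is not compound-specific.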

  7. LOCATING MONITORING STATIONS IN WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Water undergoes changes in quality between the time it leaves the treatment plant and the time it reaches the customer's tap, making it important to select monitoring stations that will adequately monitor these changes. But because there is no uniform schedule or framework for ...
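Station placement of this kind is often posed as a coverage problem. A minimal greedy sketch, with an invented network (the station names and coverage sets are hypothetical, and this is not the EPA's actual methodology):

```python
# Greedy set-cover sketch for monitoring-station placement (hypothetical
# network): repeatedly add the candidate station that covers the most
# still-unmonitored nodes, up to a budget of k stations.

def greedy_stations(coverage, k):
    """Pick up to k stations maximizing covered nodes (greedy heuristic)."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(coverage, key=lambda s: len(coverage[s] - covered))
        if not coverage[best] - covered:
            break  # no station adds new coverage; stop early
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical coverage: station site -> set of network nodes it monitors
coverage = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6, 7},
    "D": {1, 7},
}

stations, covered = greedy_stations(coverage, k=2)
```

The greedy heuristic carries a well-known approximation guarantee for set cover, which is one reason it is a common baseline for sensor and monitoring placement.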

  8. Transfer function verification and block diagram simplification of a very high-order distributed pole closed-loop servo by means of non-linear time-response simulation

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1975-01-01

    Linear frequency-domain methods are inadequate for analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo because of dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency-loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high-order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best analytical transfer-function representation of the tape transport (the mechanical segment of the tape recorder) from several candidates. The study also shows how an analytical time-response simulation accounting for most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System-order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.
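The order-reduction-by-truncation idea can be illustrated on a generic example (this is not the VO75 servo model): compare the simulated step response of a second-order transfer function with a truncated first-order approximation, and keep the reduced model if the time-domain error stays acceptable.

```python
# Illustrative model-order-reduction check (generic, invented system):
# simulate step responses of a full 2nd-order model and a truncated
# 1st-order model by forward Euler, then measure the worst-case error.

def step_response_2nd(wn, zeta, t_end, dt):
    """Euler integration of y'' + 2*zeta*wn*y' + wn^2*y = wn^2, step input."""
    y, v, out = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v
        v += a * dt
        y += v * dt
        out.append(y)
    return out

def step_response_1st(tau, t_end, dt):
    """Euler integration of tau*y' + y = 1, step input."""
    y, out = 0.0, []
    for _ in range(int(t_end / dt)):
        y += (1.0 - y) / tau * dt
        out.append(y)
    return out

full    = step_response_2nd(wn=10.0, zeta=1.0, t_end=2.0, dt=0.001)
reduced = step_response_1st(tau=0.1, t_end=2.0, dt=0.001)
max_err = max(abs(a - b) for a, b in zip(full, reduced))
```

Both models settle to the same final value; the transient discrepancy (`max_err`) is the quantity a truncation study would weigh against performance requirements before discarding the higher-order dynamics.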

  9. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (SNOW17) and rainfall-runoff model (SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system: input data, model structure, model parameters, and initial states. The goal of the current study is to verify potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e., RFC, DREAM (Differential Evolution Adaptive Metropolis)) as well as a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe their influence on the forecast. The study basin is the North Fork American River Basin (NFARB), located on the western side of the Sierra Nevada Mountains in northern California. Hindcasts are verified using both deterministic (Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes a comparison of the performance of the different optimized parameter sets and the DA framework, as well as an assessment of the impact of the initial conditions used for streamflow forecasts for the NFARB.
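Two of the deterministic verification statistics named above have standard closed forms, Nash-Sutcliffe efficiency (NSE) and root-mean-square error (RMSE). A minimal sketch on hypothetical flows (not the NFARB data):

```python
# Minimal sketch of two standard hindcast verification statistics:
# RMSE and Nash-Sutcliffe efficiency (NSE), on hypothetical flows.

import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, <= 0 is no better
    than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

observed  = [10.0, 12.0, 15.0, 11.0, 9.0]   # hypothetical daily flows
simulated = [ 9.5, 12.5, 14.0, 11.5, 9.5]   # hypothetical hindcast

rmse_val = rmse(observed, simulated)
nse_val  = nse(observed, simulated)
```

Comparing these scores across parameter sets (RFC vs DREAM) and with/without data assimilation is exactly the kind of side-by-side evaluation the study describes; the probabilistic measures (reliability, discrimination) require ensemble hindcasts rather than a single series.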

  10. Hard and Soft Safety Verifications

    NASA Technical Reports Server (NTRS)

    Wetherholt, Jon; Anderson, Brenda

    2012-01-01

    The purpose of this paper is to examine the differences between, and the effects of, hard and soft safety verifications. Initially, the terminology should be defined and clarified. A hard safety verification is a datum that demonstrates how a safety control is enacted; an example is relief valve testing. A soft safety verification is usually described as nice to have but is not necessary to prove safe operation. An example of a soft verification is the loss of the Solid Rocket Booster (SRB) casings on Shuttle flight STS-4. When the main parachutes failed, the casings impacted the water and sank. In the nose cap of the SRBs, video cameras recorded the release of the parachutes to determine safe operation and to provide information for potential anomaly resolution. More generally, examination of the casings and nozzles contributed to understanding of the newly developed boosters and their operation. Safety verification of SRB operation was demonstrated by examining the casings and nozzle for erosion or wear. Loss of the SRBs and the associated data did not delay the launch of the next Shuttle flight.