Science.gov

Sample records for distributed location verification

  1. Collaborative Localization and Location Verification in WSNs

    PubMed Central

    Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang

    2015-01-01

    Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extensive simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948
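
    As a minimal illustration of this style of incremental refinement (a hypothetical sketch, not the authors' algorithm; `refine_location`, the step size, and the iteration count are all illustrative), each anchor exerts a "virtual force" proportional to the mismatch between its measured range and the current position estimate:

    ```python
    import numpy as np

    def refine_location(est, anchors, ranges, step=0.1, iters=200):
        """Virtual-force style refinement: each anchor pulls or pushes
        the position estimate until its distance matches the range."""
        est = np.asarray(est, dtype=float)
        for _ in range(iters):
            force = np.zeros(2)
            for anchor, r in zip(anchors, ranges):
                d = est - anchor
                dist = np.linalg.norm(d)
                if dist < 1e-9:
                    continue
                # Positive residual (too far): pull toward the anchor;
                # negative residual (too close): push away from it.
                force -= (dist - r) * d / dist
            est += step * force
        return est

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    true_pos = np.array([3.0, 4.0])
    ranges = np.linalg.norm(anchors - true_pos, axis=1)
    print(refine_location([5.0, 5.0], anchors, ranges))  # converges near [3, 4]
    ```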

  2. Verification of LHS distributions.

    SciTech Connect

    Swiler, Laura Painton

    2006-04-01

    This document provides verification test results for normal, lognormal, and uniform distributions that are used in Sandia's Latin Hypercube Sampling (LHS) software. The purpose of this testing is to verify that the sample values being generated in LHS are distributed according to the desired distribution types. The testing of distribution correctness is done by examining summary statistics, graphical comparisons using quantile-quantile plots, and formal statistical tests such as the Chi-square test, the Kolmogorov-Smirnov test, and the Anderson-Darling test. The overall results from the testing indicate that the generation of normal, lognormal, and uniform distributions in LHS is acceptable.
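
    The kind of check the report describes can be reproduced in a few lines. The sketch below draws stratified (Latin hypercube style) samples of a standard normal and applies summary statistics plus a Kolmogorov-Smirnov test; it assumes a standard normal target and is not Sandia's LHS code.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # One sample per equal-probability stratum, mapped through the inverse
    # CDF -- the essence of how LHS generates distribution samples.
    n = 1000
    u = (np.arange(n) + rng.uniform(size=n)) / n
    samples = stats.norm.ppf(rng.permutation(u))

    # Summary statistics plus a Kolmogorov-Smirnov goodness-of-fit test,
    # two of the checks described in the report.
    print("mean %.3f  std %.3f" % (samples.mean(), samples.std()))
    ks_stat, p_value = stats.kstest(samples, "norm")
    print("KS statistic %.4f  p-value %.3f" % (ks_stat, p_value))
    ```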

  3. An Efficient Location Verification Scheme for Static Wireless Sensor Networks.

    PubMed

    Kim, In-Hwan; Kim, Bo-Sung; Song, JooSeok

    2017-01-24

    In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty in estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they cannot eliminate wrong location estimates in some situations. Location verification can address these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process, together with requirements such as special hardware, a dedicated verifier, or a trusted third party, which cause extra communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSNs called mutually-shared region-based location verification (MSRLV), which reduces these overheads by utilizing the implicit involvement of sensors and eliminating several requirements. In order to achieve this, we use the mutually-shared region between the location claimant and the verifier for the location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, when compared with other location verification schemes, for a single sensor verification. In addition, simulation results for the verification of the whole network show that MSRLV can detect over 90% of malicious sensors when sensors in the network have five or more neighbors.
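
    As a toy illustration of region-based checking (a simplification, not the MSRLV protocol; names and numbers are hypothetical): when a verifier hears a claimant directly, the claimed position must lie inside the region the two radios mutually share, which for equal ranges reduces to a single distance test.

    ```python
    import math

    def plausible_claim(claimed_pos, verifier_pos, radio_range):
        """Region-based sanity check: a location claim heard directly
        by a verifier is plausible only if the claimed position lies
        within the verifier's radio range, i.e. inside the region the
        two nodes' radios mutually share."""
        return math.dist(claimed_pos, verifier_pos) <= radio_range

    # Verifier at the origin with a 100 m radio range:
    print(plausible_claim((60.0, 50.0), (0.0, 0.0), 100.0))   # True  (~78 m)
    print(plausible_claim((200.0, 90.0), (0.0, 0.0), 100.0))  # False (~219 m)
    ```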

  4. An Efficient Location Verification Scheme for Static Wireless Sensor Networks

    PubMed Central

    Kim, In-hwan; Kim, Bo-sung; Song, JooSeok

    2017-01-01

    In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty in estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they cannot eliminate wrong location estimates in some situations. Location verification can address these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process, together with requirements such as special hardware, a dedicated verifier, or a trusted third party, which cause extra communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSNs called mutually-shared region-based location verification (MSRLV), which reduces these overheads by utilizing the implicit involvement of sensors and eliminating several requirements. In order to achieve this, we use the mutually-shared region between the location claimant and the verifier for the location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, when compared with other location verification schemes, for a single sensor verification. In addition, simulation results for the verification of the whole network show that MSRLV can detect over 90% of malicious sensors when sensors in the network have five or more neighbors. PMID:28125007

  5. Identity verification in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zeng, Guihua; Zhang, Weiping

    2000-02-01

    The security of previous quantum key distribution protocols, which is guaranteed by the laws of quantum physics, rests on the assumption that the users are legitimate. In practice, however, impersonation of the legitimate communicators by eavesdroppers is inevitable. In this paper, we propose a quantum key verification scheme that can simultaneously distribute the quantum secret key and verify the communicators' identity. Investigation shows that the proposed identity verification scheme is secure.

  6. Protecting Privacy and Securing the Gathering of Location Proofs - The Secure Location Verification Proof Gathering Protocol

    NASA Astrophysics Data System (ADS)

    Graham, Michelle; Gray, David

    As wireless networks become increasingly ubiquitous, the demand for a method of locating a device has increased dramatically. Location Based Services are now commonplace, but there are few methods of verifying or guaranteeing a location provided by a user without specialised hardware, especially in larger-scale networks. We propose a system for the verification of location claims, using proof gathered from neighbouring devices. In this paper we introduce a protocol to protect this proof-gathering process, protecting the privacy of all involved parties and securing it from intruders and malicious claiming devices. We present the protocol in stages, extending its security to allow for flexibility within its application. The Secure Location Verification Proof Gathering Protocol (SLVPGP) has been designed to function within the area of Vehicular Networks, although its application could be extended to any device with wireless and cryptographic capabilities.

  7. 37 CFR 384.7 - Verification of royalty distributions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distributions. 384.7 Section 384.7 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF... BUSINESS ESTABLISHMENT SERVICES § 384.7 Verification of royalty distributions. (a) General. This section prescribes procedures by which any Copyright Owner may verify the royalty distributions made by...

  8. A Verification System for Distributed Objects with Asynchronous Method Calls

    NASA Astrophysics Data System (ADS)

    Ahrendt, Wolfgang; Dylla, Maximilian

    We present a verification system for Creol, an object-oriented modeling language for concurrent distributed applications. The system is an instance of KeY, a framework for object-oriented software verification, which has so far been applied foremost to sequential Java. Building on KeY's characteristic concepts, such as dynamic logic, sequent calculus, explicit substitutions, and the taclet rule language, the system presented in this paper addresses functional correctness of Creol models featuring local cooperative thread parallelism and global communication via asynchronous method calls. The calculus heavily operates on communication histories, which describe the interfaces of Creol units. Two example scenarios demonstrate the usage of the system.

  9. Field verification of a nondestructive damage location algorithm

    SciTech Connect

    Farrar, C.R.; Stubbs, N.

    1996-12-31

    Over the past 25 years, the use of modal parameters for detecting damage has received considerable attention from the civil engineering community. The basic idea is that changes in the structure's properties, primarily stiffness, will alter the dynamic properties of the structure, such as frequencies and mode shapes, and properties derived from these quantities, such as modal-based flexibility. In this paper, a method for nondestructive damage location in bridges, as determined by changes in the modal properties, is described. The damage detection algorithm is applied to pre- and post-damage modal properties measured on a bridge. Results of the analysis indicate that the method accurately locates the damage. Subjects relating to practical implementation of this damage identification algorithm that need further study are discussed.
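
    The paper's premise, that stiffness loss shifts modal frequencies, follows from the standard eigenvalue relation; a single-degree-of-freedom illustration (ours, not the paper's):

    ```latex
    \[
      f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}
      \qquad\Longrightarrow\qquad
      \frac{f_{\text{damaged}}}{f_{\text{intact}}} = \sqrt{\frac{k'}{k}},
      \qquad\text{e.g. } k' = 0.9\,k \;\Rightarrow\; f' \approx 0.949\, f.
    \]
    ```

    A 10% local stiffness loss thus produces roughly a 5% frequency drop, the kind of signature such damage-location methods look for in measured modal data.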

  10. Modeling and Verification of Distributed Generation and Voltage Regulation Equipment for Unbalanced Distribution Power Systems; Annual Subcontract Report, June 2007

    SciTech Connect

    Davis, M. W.; Broadwater, R.; Hambrick, J.

    2007-07-01

    This report summarizes the development of models for distributed generation and distribution circuit voltage regulation equipment for unbalanced power systems and their verification through actual field measurements.

  11. GENERIC VERIFICATION PROTOCOL: DISTRIBUTED GENERATION AND COMBINED HEAT AND POWER FIELD TESTING PROTOCOL

    EPA Science Inventory

    This report is a generic verification protocol by which EPA’s Environmental Technology Verification program tests newly developed equipment for distributed generation of electric power, usually micro-turbine generators and internal combustion engine generators. The protocol will ...

  12. Verification of Tail Distribution Bounds in a Theorem Prover

    NASA Astrophysics Data System (ADS)

    Hasan, Osman; Tahar, Sofiène

    2007-09-01

    In the field of probabilistic analysis, bounding the tail distribution is a major tool for estimating the failure probability of systems. In this paper, we present the verification of Markov's and Chebyshev's inequalities for discrete random variables using the HOL theorem prover. The formally verified Markov's and Chebyshev's inequalities allow us to reason precisely about tail distribution bounds for probabilistic systems within the core of a higher-order-logic theorem prover, and thus prove to be quite useful for the analysis of systems used in safety-critical domains, such as space, medicine, and the military. For illustration purposes, we show how we can obtain bounds on the tail distribution of the Coupon Collector's problem in HOL.
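
    For reference, the two inequalities verified in HOL, in their standard forms, and the resulting tail bound for the Coupon Collector's waiting time T over n coupons (using E[T] = nH_n and Var(T) <= pi^2 n^2 / 6):

    ```latex
    \[
      \Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a} \quad (X \ge 0,\ a > 0),
      \qquad
      \Pr\big[\,|X - \mu| \ge k\sigma\,\big] \le \frac{1}{k^{2}},
    \]
    \[
      \text{hence}\qquad
      \Pr\big[\,|T - n H_n| \ge c\,n\,\big] \le \frac{\pi^{2}}{6 c^{2}}.
    \]
    ```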

  13. DOE-EPRI distributed wind Turbine Verification Program (TVP III)

    SciTech Connect

    McGowin, C.; DeMeo, E.; Calvert, S.

    1997-12-31

    In 1992, the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE) initiated the Utility Wind Turbine Verification Program (TVP). The goal of the program is to evaluate prototype advanced wind turbines at several sites developed by U.S. electric utility companies. Two 6-MW wind projects have been installed under the TVP program, by Central and South West Services in Fort Davis, Texas, and Green Mountain Power Corporation in Searsburg, Vermont. In early 1997, DOE and EPRI selected five more utility projects to evaluate distributed wind generation using smaller "clusters" of wind turbines connected directly to the electricity distribution system. This paper presents an overview of the objectives, scope, and status of the EPRI-DOE TVP program and the existing and planned TVP projects.

  14. Design and Verification of a Distributed Communication Protocol

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    The safety of remotely operated vehicles depends on the correctness of the distributed protocol that facilitates the communication between the vehicle and the operator. A failure in this communication can result in catastrophic loss of the vehicle. To complicate matters, the communication system may be required to satisfy several, possibly conflicting, requirements. The design of protocols is typically an informal process based on successive iterations of a prototype implementation. Yet distributed protocols are notoriously difficult to get correct using such informal techniques. We present a formal specification of the design of a distributed protocol intended for use in a remotely operated vehicle, which is built from the composition of several simpler protocols. We demonstrate proof strategies that allow us to prove properties of each component protocol individually while ensuring that the property is preserved in the composition forming the entire system. Given that designs are likely to evolve as additional requirements emerge, we show how we have automated most of the repetitive proof steps to enable verification of rapidly changing designs.

  15. Distributed Engine Control Empirical/Analytical Verification Tools

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan

    2013-01-01

    NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to easily assemble a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the CMAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique

  16. Location Management in Distributed Mobile Environments

    DTIC Science & Technology

    1994-09-01

    ... additional messages need to be sent for this purpose. The update in fp time is done to avoid purging of the forwarding-pointer data at the MSSs. ... The average search-update cost for LU-JU is less than or equal to 1. Thus, the aggregate cost of LU-JU is lower than that of LU-PC. LU-PC performs better ... computing. Location management consists of location updates, searches, and search-updates. An update occurs when a mobile host changes location. A search

  17. Multi-ball and one-ball geolocation and location verification

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.; Townsend, J. L.

    2017-05-01

    We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track since the pulse duration is only 25 milliseconds, and the pulses may only be transmitted every six to ten seconds. Several fundamental problems are addressed, including demodulation of AIS/GMSK signals, verification of the emitter location, accurate frequency and delay estimation, and identification of pulse trains from the same emitter. In particular, we present several new correlation methods, including cross-cross correlation, which greatly improves correlation accuracy over conventional methods, and cross-TDOA and cross-FDOA functions that make it possible to estimate time and frequency delay without the need of computing a two-dimensional cross-ambiguity surface. By isolating pulses from the same emitter and accurately tracking the received signal frequency, we are able to accurately estimate the emitter location from the received Doppler characteristics.
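
    The delay-estimation step underlying such methods can be illustrated with a basic cross-correlation TDOA estimate (a toy sketch, not the paper's cross-cross correlation technique; sample rate, noise level, and lags are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 10_000                       # sample rate in Hz (illustrative)
    pulse = rng.standard_normal(256)  # stand-in for a transmitted burst

    true_lag = 37                     # extra delay at receiver 2, in samples
    rx1 = np.concatenate([pulse, np.zeros(100)])
    rx2 = np.concatenate([np.zeros(true_lag), pulse, np.zeros(100 - true_lag)])
    rx1 = rx1 + 0.1 * rng.standard_normal(rx1.size)
    rx2 = rx2 + 0.1 * rng.standard_normal(rx2.size)

    # The lag maximizing the cross-correlation estimates the TDOA.
    xcorr = np.correlate(rx2, rx1, mode="full")
    lag = np.argmax(xcorr) - (rx1.size - 1)
    print("estimated delay: %d samples = %.2f ms" % (lag, 1e3 * lag / fs))
    ```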

  18. Reconstructing Spatial Distributions from Anonymized Locations

    SciTech Connect

    Horey, James L; Forrest, Stephanie; Groat, Michael

    2012-01-01

    Devices such as mobile phones, tablets, and sensors are often equipped with GPS that accurately report a person's location. Combined with wireless communication, these devices enable a wide range of new social tools and applications. These same qualities, however, leave location-aware applications vulnerable to privacy violations. This paper introduces the Negative Quad Tree, a privacy protection method for location aware applications. The method is broadly applicable to applications that use spatial density information, such as social applications that measure the popularity of social venues. The method employs a simple anonymization algorithm running on mobile devices, and a more complex reconstruction algorithm on a central server. This strategy is well suited to low-powered mobile devices. The paper analyzes the accuracy of the reconstruction method in a variety of simulated and real-world settings and demonstrates that the method is accurate enough to be used in many real-world scenarios.

  19. Radionuclide Inventory Distribution Project Data Evaluation and Verification White Paper

    SciTech Connect

    NSTec Environmental Restoration

    2010-05-17

    Testing of nuclear explosives caused widespread contamination of surface soils on the Nevada Test Site (NTS). Atmospheric tests produced the majority of this contamination. The Radionuclide Inventory and Distribution Program (RIDP) was developed to determine distribution and total inventory of radionuclides in surface soils at the NTS to evaluate areas that may present long-term health hazards. The RIDP achieved this objective with aerial radiological surveys, soil sample results, and in situ gamma spectroscopy. This white paper presents the justification to support the use of RIDP data as a guide for future evaluation and to support closure of Soils Sub-Project sites under the purview of the Federal Facility Agreement and Consent Order. Use of the RIDP data as part of the Data Quality Objective process is expected to provide considerable cost savings and accelerate site closures. The following steps were completed: - Summarize the RIDP data set and evaluate the quality of the data. - Determine the current uses of the RIDP data and cautions associated with its use. - Provide recommendations for enhancing data use through field verification or other methods. The data quality is sufficient to utilize RIDP data during the planning process for site investigation and closure. Project planning activities may include estimating 25-millirem per industrial access year dose rate boundaries, optimizing characterization efforts, projecting final end states, and planning remedial actions. In addition, RIDP data may be used to identify specific radionuclide distributions, and augment other non-radionuclide dose rate data. Finally, the RIDP data can be used to estimate internal and external dose rates.

  1. Handwritten numeral verification method using distribution maps of structural features

    NASA Astrophysics Data System (ADS)

    Itoh, Nobuyasu; Takahashi, Hiroyasu

    1990-08-01

    Character recognition methods can be categorized into two major approaches. One is pattern matching, which is little affected by topological changes such as breaks in strokes. The other is structural analysis, which tolerates distorted characters only if the topological features of their undistorted versions are kept. We developed a new recognition method for hand-written numerals by combining the merits of the two approaches. The recognition process consists of three steps: (1) an input character is recognized by a pattern-matching method, which reduces the number of possible categories to 1.5 on the average, (2) the character is verified to be true, false, or uncertain by a structural analysis method that we have newly developed, and (3) special heuristic verification logics are applied to uncertain characters. In the second step, the new structural analysis method uses the positions and directions of terminal points extracted from thinned character images as a main feature. The extracted terminal points are labeled according to a structural-feature distribution map prepared for each category. The generated labels are matched with template label sets constructed by statistical analysis. The characteristics of the method are as follows: (1) it copes with distortion of hand-written characters by using distribution maps for the positions and directions of feature points, and (2) distribution maps can be automatically generated from statistical data in learning samples and easily tuned interactively. The merits of combining the two methods are as follows: (1) the advantages of both pattern matching and structural analysis are obtained, (2) the probabilities of steps 2 and 3 needing to be executed are 22% and 9% respectively, which hardly affect the total processing time, and (3) as a result of steps 1 and 2, only a small number of special logics are required. In a test using unconstrained hand-written characters of low quality, the recognition rate and substitution

  2. The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases

    ERIC Educational Resources Information Center

    Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.

    2010-01-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. "Cognition, 93", 75-97]. This conflicts with earlier results showing…

  3. Protection of Location Privacy Based on Distributed Collaborative Recommendations

    PubMed Central

    Wang, Peng; Yang, Jing; Zhang, Jian-Pei

    2016-01-01

    In the existing centralized location services system structure, the server is easily attacked and becomes a communication bottleneck, which can lead to the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy based on a distributed system. In this strategy, each node establishes a profile of its own location information. When requests for location services appear, the user can obtain the corresponding location services according to the recommendation of the neighboring users' location information profiles. If no suitable recommended location service results are obtained, the user can send a service request to the server based on a k-anonymous data set constructed around the centroid position of the neighbors. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users' location information profiles, and used generalization and encryption to ensure the safety of the users' location information privacy. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the strategy proposed in this paper is capable of reducing the frequency of access to the location server, providing better location services, and better protecting the user's location privacy. PMID:27649308
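
    A minimal sketch of the centroid-based k-anonymous fallback described above, assuming planar coordinates (function and variable names are hypothetical):

    ```python
    import numpy as np

    def anonymized_query_point(user_loc, neighbor_locs, k):
        """Pool the user with at least k-1 neighbors and return only the
        centroid of the set, so the server never sees the exact position."""
        if len(neighbor_locs) < k - 1:
            raise ValueError("not enough neighbors for k-anonymity")
        cloak = np.vstack([user_loc, neighbor_locs[:k - 1]])
        return cloak.mean(axis=0)

    user = np.array([40.001, 116.302])
    neighbors = np.array([[40.003, 116.300], [39.999, 116.305],
                          [40.004, 116.299], [40.000, 116.306]])
    print(anonymized_query_point(user, neighbors, k=4))
    ```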

  4. Protection of Location Privacy Based on Distributed Collaborative Recommendations.

    PubMed

    Wang, Peng; Yang, Jing; Zhang, Jian-Pei

    2016-01-01

    In the existing centralized location services system structure, the server is easily attacked and becomes a communication bottleneck, which can lead to the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy based on a distributed system. In this strategy, each node establishes a profile of its own location information. When requests for location services appear, the user can obtain the corresponding location services according to the recommendation of the neighboring users' location information profiles. If no suitable recommended location service results are obtained, the user can send a service request to the server based on a k-anonymous data set constructed around the centroid position of the neighbors. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users' location information profiles, and used generalization and encryption to ensure the safety of the users' location information privacy. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the strategy proposed in this paper is capable of reducing the frequency of access to the location server, providing better location services, and better protecting the user's location privacy.

  5. Modeling and verification of distributed systems with labeled predicate transition nets

    NASA Astrophysics Data System (ADS)

    Lloret, Jean-Christophe

    Two main steps in the design of distributed systems are modeling and verification. Petri nets and CCS are two basic formal models. CCS is a modular language supporting compositional verification. Conversely, Petri net theory requires an accurate description of parallelism and focuses on global verification of properties. A structuring technique based on CCS concepts is introduced for predicate/transition nets. It consists of a high-level Petri net that permits the expression of communication with value passing. In particular, a Petri net composition operator, which can be interpreted as a multi-rendezvous between communicating systems, is defined. The multi-rendezvous allows abstract modeling with small state graphs. The developed formalism is highly convenient for refining abstract models toward less abstract levels. Based on this work, a software tool supporting distributed system design and verification was developed. The advantage of this approach is shown in many research and industrial applications.

  6. Automated fault location and diagnosis on electric power distribution feeders

    SciTech Connect

    Zhu, J.; Lubkeman, D.L.; Girgis, A.A.

    1997-04-01

    This paper presents new techniques for locating and diagnosing faults on electric power distribution feeders. The proposed fault location and diagnosis scheme is capable of accurately identifying the location of a fault upon its occurrence, based on the integration of information available from disturbance recording devices with knowledge contained in a distribution feeder database. The developed fault location and diagnosis system can also be applied to the investigation of temporary faults that may not result in a blown fuse. The proposed fault location algorithm is based on the steady-state analysis of the faulted distribution network. To deal with the uncertainties inherent in the system modeling and the phasor estimation, the fault location algorithm has been adapted to estimate fault regions based on probabilistic modeling and analysis. Since the distribution feeder is a radial network, multiple possibilities of fault locations could be computed with measurements available only at the substation. To identify the actual fault location, a fault diagnosis algorithm has been developed to prune down and rank the possible fault locations by integrating the available pieces of evidence. Testing of the developed fault location and diagnosis system using field data has demonstrated its potential for practical use.
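
    For intuition, the classic single-ended reactance method estimates the electrical distance to a fault from the substation phasors; on a branched radial feeder several points can share the same electrical distance, which is why a diagnosis stage is needed to rank candidate locations. The textbook formula below is illustrative, not the paper's algorithm:

    ```latex
    \[
      d \;\approx\; \frac{\operatorname{Im}\!\left( V_s / I_s \right)}{x_{\ell}} ,
    \]
    ```

    where V_s and I_s are the during-fault voltage and current phasors measured at the substation and x_l is the line reactance per unit length.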

  7. Privacy-Preserving Location-Based Query Using Location Indexes and Parallel Searching in Distributed Networks

    PubMed Central

    Liu, Lei; Zhao, Jing

    2014-01-01

    An efficient location-based query algorithm for protecting user privacy in distributed networks is given. This algorithm utilizes the users' location indexes and multiple parallel threads to quickly search for and select candidate anonymous sets containing more users, with more uniformly distributed location information, in order to accelerate the temporal-spatial anonymization operations; it also allows users to configure custom privacy-preserving location query requests. The simulation results show that the proposed algorithm can simultaneously offer location query services to more users, improve the performance of the anonymous server, and satisfy the users' anonymous location requests. PMID:24790579

  8. Privacy-preserving location-based query using location indexes and parallel searching in distributed networks.

    PubMed

    Zhong, Cheng; Liu, Lei; Zhao, Jing

    2014-01-01

    An efficient location-based query algorithm for protecting user privacy in distributed networks is given. This algorithm utilizes the users' location indexes and multiple parallel threads to quickly search for and select candidate anonymous sets containing more users, with more uniformly distributed location information, in order to accelerate the temporal-spatial anonymization operations; it also allows users to configure custom privacy-preserving location query requests. The simulation results show that the proposed algorithm can simultaneously offer location query services to more users, improve the performance of the anonymous server, and satisfy the users' anonymous location requests.

  9. Ground vibration tests of a high fidelity truss for verification of on orbit damage location techniques

    NASA Technical Reports Server (NTRS)

    Kashangaki, Thomas A. L.

    1992-01-01

    This paper describes a series of modal tests that were performed on a cantilevered truss structure. The goal of the tests was to assemble a large database of high quality modal test data for use in verification of proposed methods for on orbit model verification and damage detection in flexible truss structures. A description of the hardware is provided along with details of the experimental setup and procedures for 16 damage cases. Results from selected cases are presented and discussed. Differences between ground vibration testing and on orbit modal testing are also described.

  10. The application of fuzzy neural network in distribution center location

    NASA Astrophysics Data System (ADS)

    Li, Yongpan; Liu, Yong

    2013-03-01

    In this paper, a fuzzy neural network model for logistics distribution center location is established: the fuzzy method is applied to the input values of the BP algorithm, and the experts' evaluation values are taken as the expected output. Network learning is then used to obtain an optimized selection and, furthermore, a more accurate evaluation of the candidate location programs.

  11. An expert system for locating distribution system faults

    SciTech Connect

    Hsu, Y.Y.; Lu, F.C.; Chien, Y.; Liu, J.P.; Lin, J.T.; Yu, H.S.; Kuo, R.T.

    1991-01-01

    A rule-based expert system is designed to locate faults in a distribution system. Distribution system component data and network topology are stored in the database. A set of heuristic rules is compiled from the dispatchers' experience and embedded in the rule base. To locate distribution system faults, an inference engine is developed to perform deductive reasoning on the rules in the knowledge base. The inference engine comprises three major parts: the dynamic searching method, the backtracking approach, and the set intersection operation. The expert system is implemented on a personal computer using the artificial intelligence language PROLOG. To demonstrate the effectiveness of the proposed approach, the expert system has been applied to locate faults in a real underground distribution system.

  12. Event-Based Specification and Verification of Distributed Systems.

    DTIC Science & Technology

    1982-01-01

    ... RTI5(A, B); RT21(A, B); End behavior; End system. A distributed design [HOA78] to generate prime numbers using the "sieve of Eratosthenes" ... Theorem 3.7. The distributed "sieve of Eratosthenes" is a correct prime number generator. Proof: by the sequence of Theorems 3.7.1 to 3.7.6

  13. On the calibration and verification of two-dimensional, distributed, Hortonian, continuous watershed models

    NASA Astrophysics Data System (ADS)

    Senarath, Sharika U. S.; Ogden, Fred L.; Downer, Charles W.; Sharif, Hatim O.

    2000-02-01

    Physically based, two-dimensional, distributed parameter Hortonian hydrologic models are sensitive to a number of spatially varied parameters and inputs and are particularly sensitive to the initial soil moisture field. However, soil moisture data are generally unavailable for most catchments. Given an erroneous initial soil moisture field, single-event calibrations are easily achieved using different combinations of model parameters, including physically unrealistic values. Verification of single-event calibrations is very difficult for models of this type because of parameter estimation errors that arise from initial soil moisture field uncertainty. The purpose of this study is to determine if the likelihood of obtaining a verifiable calibration increases when a continuous flow record, consisting of multiple runoff producing events, is used for model calibration. The physically based, two-dimensional, distributed, Hortonian hydrologic model CASC2D [Julien et al., 1995] is converted to a continuous formulation that simulates the temporal evolution of soil moisture between rainfall events. Calibration is performed using 6 weeks of record from the 21.3 km² Goodwin Creek Experimental Watershed, located in northern Mississippi. Model parameters are assigned based on soil textures, land use/land cover maps, and a combination of both. The sensitivity of the new model formulation to parameter variation is evaluated. Calibration is performed using the shuffled complex evolution method [Duan et al., 1991]. Three different tests are conducted to evaluate model performance based on continuous calibration. Results show that calibration on a continuous basis significantly improves model performance for periods, or subcatchments, not used in calibration and the likelihood of obtaining realistic simulations of spatially varied catchment dynamics. The automated calibration reveals that the parameter assignment methodology used in this study results in overparameterization
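
    The abstract does not state the calibration objective; the Nash-Sutcliffe efficiency below is a typical choice for judging continuous streamflow simulations of this kind (added here for context only):

    ```latex
    \[
      \mathrm{NSE} \;=\; 1 \;-\;
      \frac{\sum_{t}\big(Q_{\mathrm{obs}}^{t} - Q_{\mathrm{sim}}^{t}\big)^{2}}
           {\sum_{t}\big(Q_{\mathrm{obs}}^{t} - \overline{Q}_{\mathrm{obs}}\big)^{2}},
    \]
    ```

    with NSE = 1 indicating a perfect fit to the observed hydrograph.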

  14. [Application and verification of distributed model in simulating watershed evapotranspiration].

    PubMed

    Liu, Jianmei; Wang, Anzhi; Diao, Yiwei; Pei, Tiefan

    2006-01-01

    Based on conventional observed meteorological data from 1989 to 2000, a distributed model was employed to simulate the temporal and spatial distribution of the potential and actual evapotranspiration in the upper Zagunao watershed of Sichuan Province. A 500-m grid and a 1-d time step were used to describe the spatial and temporal heterogeneity and variability in the study area. The potential evapotranspiration was simulated by a deformation form of the Penman-Monteith equation, and a method for calculating actual evapotranspiration was proposed, with the underlying conditions considered. The results showed that the relative error of the normal annual evapotranspiration over the 12 years between the simulation and the calculation was 3.47%, with a reasonable temporal and spatial distribution. The research results provide an effective method for distributed rainfall-runoff models in simulating actual evapotranspiration.
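
    For reference, the standard Penman-Monteith form from which such a "deformation" would be derived (the paper's exact variant is not given in the abstract):

    ```latex
    \[
      \lambda ET \;=\;
      \frac{\Delta\,(R_n - G) \;+\; \rho_a c_p\,(e_s - e_a)/r_a}
           {\Delta \;+\; \gamma\,\big(1 + r_s/r_a\big)},
    \]
    ```

    where R_n is net radiation, G the soil heat flux, (e_s - e_a) the vapour pressure deficit, Delta the slope of the saturation vapour pressure curve, gamma the psychrometric constant, and r_a, r_s the aerodynamic and surface resistances.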

  15. A distributed approach to verification and validation of electronic structure simulation data using ESTEST

    NASA Astrophysics Data System (ADS)

    Yuan, Gary; Gygi, François

    2012-08-01

    We present a Verification and Validation (V&V) approach for electronic structure computations based on a network of distributed servers running the ESTEST (Electronic Structure TEST) software. This network-based infrastructure enables remote verification, validation, comparison and sharing of electronic structure data obtained with different simulation codes. The implementation and configuration of the distributed framework is described. ESTEST features are enhanced by server communication and data sharing, minimizing the duplication of effort by separate research groups. We discuss challenges that arise from the use of a distributed network of ESTEST servers and outline possible solutions. A community web portal called ESTEST Discovery is introduced for the purpose of facilitating the collection and annotation of contents from multiple ESTEST servers. We describe examples of use of the framework using two currently running servers at the University of California Davis and at the Centre Européen de Calcul Atomique et Moléculaire (CECAM).

  16. The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases

    PubMed Central

    Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.

    2010-01-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L.E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97.]. This conflicts with earlier results showing that location estimation is biased relative to the spatial distribution of targets [Spencer, J.P. & Hund, A.M. (2002). Prototypes and particulars: Geometric and experience-dependent spatial categories. Journal of Experimental Psychology: General, 131, 16-37.]. Here, we resolve this controversy by using a task based on Huttenlocher et al. (Experiment 4) with minor modifications to enhance our ability to detect experience-dependent effects. Results after the first block of trials replicate the pattern reported in Huttenlocher et al. After additional experience, however, participants showed biases that significantly shifted according to the target distributions. These results are consistent with the Dynamic Field Theory, an alternative theory of spatial cognition that integrates long-term memory traces across trials relative to the perceived structure of the task space. PMID:20116784

  17. The role of experience in location estimation: Target distributions shift location memory biases.

    PubMed

    Lipinski, John; Simmering, Vanessa R; Johnson, Jeffrey S; Spencer, John P

    2010-04-01

    Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing that location estimation is biased relative to the spatial distribution of targets [Spencer, J. P., & Hund, A. M. (2002). Prototypes and particulars: Geometric and experience-dependent spatial categories. Journal of Experimental Psychology: General, 131, 16-37]. Here, we resolve this controversy by using a task based on Huttenlocher et al. (Experiment 4) with minor modifications to enhance our ability to detect experience-dependent effects. Results after the first block of trials replicate the pattern reported in Huttenlocher et al. After additional experience, however, participants showed biases that significantly shifted according to the target distributions. These results are consistent with the Dynamic Field Theory, an alternative theory of spatial cognition that integrates long-term memory traces across trials relative to the perceived structure of the task space.

  18. 37 CFR 380.7 - Verification of royalty distributions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... rendering a written report to a Copyright Owner or Performer, except where the auditor has a reasonable... royalty distributions. (a) General. This section prescribes procedures by which any Copyright Owner or... shall be conducted by an independent and Qualified Auditor identified in the notice, and shall be...

  19. Abstractions for Fault-Tolerant Distributed System Verification

    NASA Technical Reports Server (NTRS)

    Pike, Lee S.; Maddalon, Jeffrey M.; Miner, Paul S.; Geser, Alfons

    2004-01-01

    Four kinds of abstraction for the design and analysis of fault tolerant distributed systems are discussed. These abstractions concern system messages, faults, fault masking voting, and communication. The abstractions are formalized in higher order logic, and are intended to facilitate specifying and verifying such systems in higher order theorem provers.

  1. An IMRT dose distribution study using commercial verification software.

    PubMed

    Liu, G; Fernando, W; Grace, M; Rykers, K

    2004-09-01

    The introduction of IMRT requires users to confirm that the isodose distributions and relative doses calculated by their planning system match the doses delivered by their linear accelerators. To this end the commercially available software, VeriSoft (PTW-Freiburg, Germany) was trialled to determine if the tools and functions it offered would be of benefit to this process. The CMS XiO (Computerized Medical System, St. Louis, MO) treatment planning system was used to generate IMRT plans that were delivered with an upgraded Elekta SL15 linac. Kodak EDR2 film sandwiched in RW3 solid water (PTW-Freiburg, Germany) was used to measure the IMRT fields delivered with 6 MV photons. The isodose and profiles measured with the film generally agreed to within +/- 3% or +/- 3 mm with the planned doses, in some regions (outside the field) the match fell to within +/- 5%. The isodose distributions of the planning system and the film could be compared on screen, allowing for electronic records of the comparison to be kept if desired. The features of this software would be of benefit to an IMRT QA program.

  2. Logistics distribution centers location problem and algorithm under fuzzy environment

    NASA Astrophysics Data System (ADS)

    Yang, Lixing; Ji, Xiaoyu; Gao, Ziyou; Li, Keping

    2007-11-01

    The distribution center location problem is concerned with how to select distribution centers from the potential set so that the total relevant cost is minimized. This paper mainly investigates this problem under a fuzzy environment. Consequently, a chance-constrained programming model for the problem is designed, and some properties of the model are investigated. A tabu search algorithm, a genetic algorithm, and a fuzzy simulation algorithm are integrated to seek the approximate best solution of the model. A numerical example is also given to show the application of the algorithm.
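
    Schematically, a chance-constrained location model of this family can be written as below (notation ours, with the fuzzy chance constraint expressed through a credibility measure Cr; the paper's exact formulation may differ):

    ```latex
    \[
    \begin{aligned}
      \min_{x,\,y}\quad & \sum_{j} f_j\, y_j + \sum_{i}\sum_{j} c_{ij}\, x_{ij} \\
      \text{s.t.}\quad & \operatorname{Cr}\Big\{ \textstyle\sum_{i} \tilde d_i\, x_{ij} \le s_j\, y_j \Big\} \;\ge\; \alpha \qquad \forall j, \\
      & \textstyle\sum_{j} x_{ij} = 1 \quad \forall i, \qquad x_{ij} \in [0,1], \quad y_j \in \{0,1\},
    \end{aligned}
    \]
    ```

    where y_j opens candidate center j at fixed cost f_j, x_{ij} assigns customer i, \tilde d_i is the fuzzy demand, s_j the capacity, and alpha the required confidence level.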

  3. Towards the Formal Verification of a Distributed Real-Time Automotive System

    NASA Technical Reports Server (NTRS)

    Endres, Erik; Mueller, Christian; Shadrin, Andrey; Tverdyshev, Sergey

    2010-01-01

    We present the status of a project which aims at building, and formally and pervasively verifying, a distributed automotive system. The target system is a gate-level model which consists of several interconnected electronic control units with independent clocks. This model is verified against the specification as seen by a system programmer. The automotive system is implemented on several FPGA boards. The pervasive verification is carried out using a combination of interactive theorem proving (Isabelle/HOL) and model checking (LTL).

  4. Design and verification of distributed logic controllers with application of Petri nets

    SciTech Connect

    Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika

    2015-12-31

    The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that, it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified, and if the validation is successful, the system can finally be implemented.

  5. Design and verification of distributed logic controllers with application of Petri nets

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika

    2015-12-01

    The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that, it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified, and if the validation is successful, the system can finally be implemented.

  6. Optimum Location of Voltage Regulators in the Radial Distribution Systems

    NASA Astrophysics Data System (ADS)

    Salkuti, Surender Reddy; Lho, Young Hwan

    2016-06-01

    In this paper, a new heuristic algorithm is proposed for optimum voltage control, applicable to large Radial Distribution Systems (RDSs). In RDSs, voltage levels at different buses can be maintained within the specified limits using conductor grading or by placing Voltage Regulators (VRs) and capacitors at suitable locations. The proposed Back Tracking Algorithm (BTA) determines the optimal location, number, and tap positions of VRs to maintain the voltage profile within the desired limits and decrease losses in the system, which in turn maximizes the net savings in the operation of the distribution system. In addition to the BTA, an approach using fuzzy logic, called the Fuzzy Expert System (FES), is also proposed, and the results of the FES are compared with those of the BTA. This heuristic algorithm proposes the optimal location and tap settings of VRs, which yields a smooth voltage profile along the network. It is also used to assess the minimum number of initially considered VRs, by moving them in such a way as to control the network voltage at minimum possible cost. It is concluded that the FES also gives the optimal placement and number, along with the tap settings, of VRs. The proposed FES contributes good voltage regulation and decreases the power loss, which in turn increases the net savings when compared to the BTA. The effectiveness of the proposed heuristic approaches is examined on practical 47-bus and 69-bus Radial Distribution Systems (RDSs).

  7. Verification of IMRT dose distributions using a water beam imaging system.

    PubMed

    Li, J S; Boyer, A L; Ma, C M

    2001-12-01

    A water beam imaging system (WBIS) has been developed and used to verify dose distributions for intensity-modulated radiotherapy using a dynamic multileaf collimator. This system consisted of a water container, a scintillator screen, a charge-coupled device camera, and a portable personal computer. The scintillation image was captured by the camera; the pixel value in this image indicated the dose value in the scintillation screen. Images of radiation fields of known spatial distributions were used to calibrate the device. The verification was performed by comparing the image acquired from the measurement with a dose distribution from the IMRT plan. Because of light scattering in the scintillator screen, the image was blurred. A correction for this was developed by recognizing that the blur function could be fitted to a multiple Gaussian. The blur function was computed using the measured image of a 10 cm x 10 cm x-ray beam and the result of the dose distribution calculated using the Monte Carlo method. Based on the blur function derived using this method, an iterative reconstruction algorithm was applied to recover the dose distribution for an IMRT plan from the measured WBIS image. The reconstructed dose distribution was compared with the Monte Carlo simulation result, and reasonable agreement was obtained. The proposed approach makes it possible to carry out a real-time comparison of the dose distribution in a transverse plane between the measurement and the reference during IMRT dose verification.
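
    The measurement model described above amounts to a convolution with a multi-Gaussian blur kernel; the fixed-point update shown is one standard (van Cittert-style) way to invert it, since the paper's exact iteration is not given in the abstract:

    ```latex
    \[
      I = D * B,
      \qquad
      B(\mathbf{r}) = \sum_{i=1}^{N} a_i \exp\!\big(-|\mathbf{r}|^{2}/2\sigma_i^{2}\big),
      \qquad
      D^{(n+1)} = D^{(n)} + \lambda\,\big(I - D^{(n)} * B\big),
    \]
    ```

    where I is the measured WBIS image, D the dose distribution, and B the blur function fitted as a sum of Gaussians.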

  8. The verification of lightning location accuracy in Finland deduced from lightning strikes to trees

    NASA Astrophysics Data System (ADS)

    Mäkelä, Antti; Mäkelä, Jakke; Haapalainen, Jussi; Porjo, Niko

    2016-05-01

    We present a new method to determine the ground truth and accuracy of lightning location systems (LLS), using natural lightning strikes to trees. Observations of strikes to trees are being collected with a Web-based survey tool at the Finnish Meteorological Institute. Since Finnish thunderstorms tend to have a low average flash rate, it is often possible to identify unambiguously from the LLS data the stroke that caused damage to a given tree. The coordinates of the tree are then the ground truth for that stroke. The technique has clear advantages over other methods used to determine the ground truth. Instrumented towers and rocket launches measure upward-propagating lightning. Video and audio records, even with triangulation, are rarely capable of high accuracy. We present data for 36 quality-controlled tree strikes in the years 2007-2008. We show that the average inaccuracy of the lightning location network for that period was 600 m. In addition, we show that the 50% confidence ellipse calculated by the lightning location network and used operationally for describing the location accuracy is physically meaningful: half of all the strikes were located within the uncertainty ellipse of the nearest recorded stroke. Using tree strike data thus allows not only the accuracy of the LLS to be estimated but also the reliability of the uncertainty ellipse. To our knowledge, this method has not been attempted before for natural lightning.
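
    The "half of strikes inside the 50% ellipse" check can be formalized as a Mahalanobis-distance test; a small sketch assuming Gaussian location errors (coordinates in metres; all numbers illustrative):

    ```python
    import numpy as np
    from scipy import stats

    def inside_confidence_ellipse(point, center, cov, level=0.5):
        """True if `point` lies inside the `level` confidence ellipse of a
        2-D Gaussian location estimate with the given center/covariance."""
        d = np.asarray(point, dtype=float) - np.asarray(center, dtype=float)
        m2 = d @ np.linalg.solve(cov, d)          # squared Mahalanobis distance
        return m2 <= stats.chi2.ppf(level, df=2)  # chi-square, 2 degrees of freedom

    # Tree 400 m east and 300 m north of the located stroke, with an
    # isotropic ~600 m location error:
    cov = np.array([[600.0**2, 0.0], [0.0, 600.0**2]])
    print(inside_confidence_ellipse((400.0, 300.0), (0.0, 0.0), cov))  # True
    ```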

  9. Advanced Passive Acoustic Leak Location and Detection Verification System for Underground Fuel Pipelines

    DTIC Science & Technology

    2003-04-01

    Conference (March 1993). 4. E. G. Eckert, M. R. Fierro, and J. W. Maresca, Jr., "A Passive-Acoustic Approach to the Detection of Leaking Valves in ... Pressurized Pipelines," Technical Report for Martin Marietta Energy Systems, Inc., Vista Research Project 1050, Vista Research, Inc., Mountain View ... California (August 1994). 5. E. G. Eckert, M. R. Fierro, and J. W. Maresca, Jr., "Demonstration of a Gas Acoustic Tracer (GAT) Method for the Location of

  10. Solute location in a nanoconfined liquid depends on charge distribution

    NASA Astrophysics Data System (ADS)

    Harvey, Jacob A.; Thompson, Ward H.

    2015-07-01

    Nanostructured materials that can confine liquids have attracted increasing attention for their diverse properties and potential applications. Yet, significant gaps remain in our fundamental understanding of such nanoconfined liquids. Using replica exchange molecular dynamics simulations of a nanoscale, hydroxyl-terminated silica pore system, we determine how the locations explored by a coumarin 153 (C153) solute in ethanol depend on its charge distribution, which can be changed through a charge transfer electronic excitation. The solute position change is driven by the internal energy, which favors C153 at the pore surface compared to the pore interior, but less so for the more polar, excited-state molecule. This is attributed to more favorable non-specific solvation of the large dipole moment excited-state C153 by ethanol at the expense of hydrogen-bonding with the pore. It is shown that a change in molecule location resulting from shifts in the charge distribution is a general result, though how the solute position changes will depend upon the specific system. This has important implications for interpreting measurements and designing applications of mesoporous materials.

  11. Solute location in a nanoconfined liquid depends on charge distribution

    SciTech Connect

    Harvey, Jacob A.; Thompson, Ward H.

    2015-07-28

    Nanostructured materials that can confine liquids have attracted increasing attention for their diverse properties and potential applications. Yet, significant gaps remain in our fundamental understanding of such nanoconfined liquids. Using replica exchange molecular dynamics simulations of a nanoscale, hydroxyl-terminated silica pore system, we determine how the locations explored by a coumarin 153 (C153) solute in ethanol depend on its charge distribution, which can be changed through a charge transfer electronic excitation. The solute position change is driven by the internal energy, which favors C153 at the pore surface compared to the pore interior, but less so for the more polar, excited-state molecule. This is attributed to more favorable non-specific solvation of the large dipole moment excited-state C153 by ethanol at the expense of hydrogen-bonding with the pore. It is shown that a change in molecule location resulting from shifts in the charge distribution is a general result, though how the solute position changes will depend upon the specific system. This has important implications for interpreting measurements and designing applications of mesoporous materials.

  12. Verification of secure distributed systems in higher order logic: A modular approach using generic components

    SciTech Connect

    Alves-Foss, J.; Levitt, K.

    1991-01-01

    In this paper we present a generalization of McCullough's restrictiveness model as the basis for proving security properties about distributed system designs. We mechanize this generalization and an event-based model of computer systems in the HOL (Higher Order Logic) system to prove the composability of the model and several other properties about the model. We then develop a set of generalized classes of system components and show for which families of user views they satisfy the model. Using these classes we develop a collection of general system components that are instantiations of one of these classes and show that the instantiations also satisfy the security property. We then conclude with a sample distributed secure system, based on the Rushby and Randell distributed system design and designed using our collection of components, and show how our mechanized verification system can be used to verify such designs. 16 refs., 20 figs.

  13. Direct and full-scale experimental verifications towards ground-satellite quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Yu; Yang, Bin; Liao, Sheng-Kai; Zhang, Liang; Shen, Qi; Hu, Xiao-Fang; Wu, Jin-Cai; Yang, Shi-Ji; Jiang, Hao; Tang, Yan-Lin; Zhong, Bo; Liang, Hao; Liu, Wei-Yue; Hu, Yi-Hua; Huang, Yong-Mei; Qi, Bo; Ren, Ji-Gang; Pan, Ge-Sheng; Yin, Juan; Jia, Jian-Jun; Chen, Yu-Ao; Chen, Kai; Peng, Cheng-Zhi; Pan, Jian-Wei

    2013-05-01

    Quantum key distribution (QKD) provides the only intrinsically unconditionally secure method of communication, based on the principles of quantum mechanics. Compared with fibre-based demonstrations, free-space links could provide the most appealing solution for communication over much larger distances. Despite significant efforts, all realizations to date rely on stationary sites. Experimental verifications are therefore extremely crucial for applications to a typical low Earth orbit satellite. To achieve direct and full-scale verifications of our set-up, we have carried out three independent experiments with a decoy-state QKD system, covering all the relevant operating conditions. The system is operated on a moving platform (using a turntable), on a floating platform (using a hot-air balloon), and with a high-loss channel to demonstrate performance under conditions of rapid motion, attitude change, vibration, random movement of satellites, and a high-loss regime. The experiments address wide ranges of all leading parameters relevant to low Earth orbit satellites. Our results pave the way towards ground-satellite QKD and a global quantum communication network.

  14. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    SciTech Connect

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; Bernstein, G. M.; Bridle, S.; Clampitt, J.; Honscheid, K.; Jain, B.; Jouvel, S.; Krause, E.; Lin, H.; MacCrann, N.; Patton, K.; Plazas, A.; Rowe, B.; Vikram, V.; Wilcox, H.; Young, J.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S. S.; Banerji, M.; Bernstein, J. P.; Bernstein, R. A.; Bertin, E.; Buckley-Geer, E.; Burke, D. L.; Castander, F. J.; da Costa, L. N.; Cunha, C. E.; Depoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A. E.; Neto, A. F.; Fernandez, E.; Finley, D. A.; Flaugher, B.; Frieman, J. A.; Gaztanaga, E.; Gerdes, D.; Gruendl, R. A.; Gutierrez, G. R.; Jarvis, M.; Karliner, I.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Maia, M. A. G.; Makler, M.; Marriner, J.; Marshall, J. L.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Mohr, J.; Neilsen, E.; Nichol, R. C.; Nord, B. D.; Reil, K.; Roe, N. A.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B. X.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, C.; Soares-Santos, M.; Swanson, M. E. C.; Sypniewski, A. J.; Tarle, G.; Thaler, J.; Thomas, D.; Tucker, D. L.; Walker, A.; Wechsler, R.; Weller, J.; Wester, W.

    2015-03-31

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. Additionally, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1° (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  15. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    SciTech Connect

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; Bernstein, G. M.; Bridle, S.; Clampitt, J.; Honscheid, K.; Jain, B.; Jouvel, S.; Krause, E.; Lin, H.; MacCrann, N.; Patton, K.; Plazas, A.; Rowe, B.; Vikram, V.; Wilcox, H.; Young, J.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S. S.; Banerji, M.; Bernstein, J. P.; Bernstein, R. A.; Bertin, E.; Buckley-Geer, E.; Burke, D. L.; Castander, F. J.; da Costa, L. N.; Cunha, C. E.; Depoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A. E.; Neto, A. F.; Fernandez, E.; Finley, D. A.; Flaugher, B.; Frieman, J. A.; Gaztanaga, E.; Gerdes, D.; Gruendl, R. A.; Gutierrez, G. R.; Jarvis, M.; Karliner, I.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Maia, M. A. G.; Makler, M.; Marriner, J.; Marshall, J. L.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Mohr, J.; Neilsen, E.; Nichol, R. C.; Nord, B. D.; Reil, K.; Roe, N. A.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B. X.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, C.; Soares-Santos, M.; Swanson, M. E. C.; Sypniewski, A. J.; Tarle, G.; Thaler, J.; Thomas, D.; Tucker, D. L.; Walker, A.; Wechsler, R.; Weller, J.; Wester, W.

    2015-03-31

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modeling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modeling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  16. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    SciTech Connect

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; Bernstein, G. M.; Bridle, S.; Clampitt, J.; Honscheid, K.; Jain, B.; Jouvel, S.; Krause, E.; Lin, H.; MacCrann, N.; Patton, K.; Plazas, A.; Rowe, B.; Vikram, V.; Wilcox, H.; Young, J.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S. S.; Banerji, M.; Bernstein, J. P.; Bernstein, R. A.; Bertin, E.; Buckley-Geer, E.; Burke, D. L.; Castander, F. J.; da Costa, L. N.; Cunha, C. E.; Depoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A. E.; Neto, A. F.; Fernandez, E.; Finley, D. A.; Flaugher, B.; Frieman, J. A.; Gaztanaga, E.; Gerdes, D.; Gruendl, R. A.; Gutierrez, G. R.; Jarvis, M.; Karliner, I.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Maia, M. A. G.; Makler, M.; Marriner, J.; Marshall, J. L.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Mohr, J.; Neilsen, E.; Nichol, R. C.; Nord, B. D.; Reil, K.; Roe, N. A.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B. X.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, C.; Soares-Santos, M.; Swanson, M. E. C.; Sypniewski, A. J.; Tarle, G.; Thaler, J.; Thomas, D.; Tucker, D. L.; Walker, A.; Wechsler, R.; Weller, J.; Wester, W.

    2015-03-31

    We measure the weak lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey (DES). This pathfinder study is meant to (1) validate the Dark Energy Camera (DECam) imager for the task of measuring weak lensing shapes, and (2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, point spread function (PSF) modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting Navarro-Frenk-White profiles to the clusters in this study, we determine weak lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1° (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  17. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    DOE PAGES

    Melchior, P.; Suchyta, E.; Huff, E.; ...

    2015-03-31

    We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. Additionally, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1° (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

  18. Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data

    NASA Astrophysics Data System (ADS)

    Melchior, P.; Suchyta, E.; Huff, E.; Hirsch, M.; Kacprzak, T.; Rykoff, E.; Gruen, D.; Armstrong, R.; Bacon, D.; Bechtol, K.; Bernstein, G. M.; Bridle, S.; Clampitt, J.; Honscheid, K.; Jain, B.; Jouvel, S.; Krause, E.; Lin, H.; MacCrann, N.; Patton, K.; Plazas, A.; Rowe, B.; Vikram, V.; Wilcox, H.; Young, J.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S. S.; Banerji, M.; Bernstein, J. P.; Bernstein, R. A.; Bertin, E.; Buckley-Geer, E.; Burke, D. L.; Castander, F. J.; da Costa, L. N.; Cunha, C. E.; Depoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A. E.; Neto, A. Fausti; Fernandez, E.; Finley, D. A.; Flaugher, B.; Frieman, J. A.; Gaztanaga, E.; Gerdes, D.; Gruendl, R. A.; Gutierrez, G. R.; Jarvis, M.; Karliner, I.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Maia, M. A. G.; Makler, M.; Marriner, J.; Marshall, J. L.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Mohr, J.; Neilsen, E.; Nichol, R. C.; Nord, B. D.; Reil, K.; Roe, N. A.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B. X.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, C.; Soares-Santos, M.; Swanson, M. E. C.; Sypniewski, A. J.; Tarle, G.; Thaler, J.; Thomas, D.; Tucker, D. L.; Walker, A.; Wechsler, R.; Weller, J.; Wester, W.

    2015-05-01

    We measure the weak lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey (DES). This pathfinder study is meant to (1) validate the Dark Energy Camera (DECam) imager for the task of measuring weak lensing shapes, and (2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, point spread function (PSF) modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting Navarro-Frenk-White profiles to the clusters in this study, we determine weak lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1° (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.
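
    For readers who want to experiment with the cluster-mass step, the sketch below fits a one-parameter profile to binned tangential-shear data with scipy. Note the substitution: the paper fits Navarro-Frenk-White profiles, while this sketch uses a much simpler singular isothermal sphere, whose tangential shear is gamma_t(theta) = theta_E / (2 theta); the shear values, radii, and uncertainties are all invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def sis_shear(theta_arcmin, theta_E_arcmin):
    # tangential shear of a singular isothermal sphere: gamma_t = theta_E / (2 theta)
    return theta_E_arcmin / (2.0 * theta_arcmin)

# illustrative binned tangential-shear measurements (radius in arcmin)
theta = np.array([1.5, 3.0, 6.0, 12.0, 24.0])
gamma_t = np.array([0.110, 0.052, 0.027, 0.012, 0.007])
gamma_err = np.array([0.020, 0.012, 0.008, 0.005, 0.004])

popt, pcov = curve_fit(sis_shear, theta, gamma_t, sigma=gamma_err,
                       absolute_sigma=True, p0=[0.3])
print(f"theta_E = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f} arcmin")
```

    The fitted Einstein radius then maps to a mass through the lensing geometry; the NFW case proceeds identically but with the two-parameter Wright and Brainerd shear profile in place of sis_shear.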

  19. Verification of spatial and temporal pressure distributions in segmented solid rocket motors

    NASA Technical Reports Server (NTRS)

    Salita, Mark

    1989-01-01

    A wide variety of analytical tools are in use today to predict the history and spatial distributions of pressure in the combustion chambers of solid rocket motors (SRMs). Experimental and analytical methods are presented here that allow the verification of many of these predictions. These methods are applied to the redesigned space shuttle booster (RSRM). Girth strain-gage data is compared to the predictions of various one-dimensional quasisteady analyses in order to verify the axial drop in motor static pressure during ignition transients as well as quasisteady motor operation. The results of previous modeling of radial flows in the bore, slots, and around grain overhangs are supported by approximate analytical and empirical techniques presented here. The predictions of circumferential flows induced by inhibitor asymmetries, nozzle vectoring, and propellant slump are compared to each other and to subscale cold air and water tunnel measurements to ascertain their validity.

  20. Dosimetric verification of stereotactic radiosurgery/stereotactic radiotherapy dose distributions using Gafchromic EBT3

    SciTech Connect

    Cusumano, Davide; Fumagalli, Maria L.; Marchetti, Marcello; Fariselli, Laura; De Martin, Elena

    2015-10-01

    The aim of this study is to examine the feasibility of using the new Gafchromic EBT3 film in a high-dose stereotactic radiosurgery and radiotherapy quality assurance procedure. Owing to the reduced dimensions of the involved lesions, the feasibility of scanning plan verification films on the scanner plate area with the best uniformity, rather than using a correction mask, was evaluated. For this purpose, signal value dispersion and reproducibility of film scans were investigated. Uniformity was then quantified in the selected area and was found to be within 1.5% for doses up to 8 Gy. A high-dose threshold level for analyses using this procedure was established by evaluating the sensitivity of the irradiated films. Sensitivity was found to be of the order of centigray for doses up to 6.2 Gy, decreasing for higher doses. The obtained results were used to implement a procedure comparing dose distributions delivered with a CyberKnife system to planned ones. The procedure was validated through single-beam irradiation of a Gafchromic film. The agreement between dose distributions was then evaluated for 13 patients (brain lesions, 5 Gy/die, prescription isodose ~80%) using gamma analysis. Results obtained using gamma test criteria of 5%/1 mm show a pass rate of 94.3%. Gamma frequency parameter calculation for EBT3 films was shown to depend strongly on subtraction of unexposed film pixel values from irradiated ones. In the framework of the described dosimetric procedure, EBT3 films proved to be effective in the verification of high doses delivered to lesions with complex shapes and adjacent to organs at risk.
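
    The gamma analysis used above is easy to prototype. The sketch below implements a brute-force, global-normalization 1D gamma test with the study's 5%/1 mm criteria; the dose profiles are synthetic stand-ins, and a real film/TPS comparison would use 2D arrays and interpolation between grid points.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dose_crit=0.05, dist_crit_mm=1.0):
    """Brute-force 1D gamma analysis (global normalization), a simplified sketch."""
    x = np.arange(len(dose_ref)) * spacing_mm
    d_norm = dose_crit * dose_ref.max()
    gammas = []
    for xi, de in zip(x, dose_eval):
        # gamma at this point: minimum combined dose/distance metric over the reference
        g2 = ((x - xi) / dist_crit_mm) ** 2 + ((dose_ref - de) / d_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.mean(np.array(gammas) <= 1.0)

ref = 5.0 * np.exp(-np.linspace(-2, 2, 201) ** 2)          # toy 5 Gy Gaussian profile
meas = ref * (1 + 0.01 * np.random.default_rng(0).standard_normal(ref.size))
print(f"gamma pass rate: {100 * gamma_pass_rate(meas, ref, spacing_mm=0.5):.1f}%")
```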

  1. Location Distribution Optimization of Photographing Sites for Indoor Panorama Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Wu, J.; Zhang, Y.; Zhang, X.; Xin, Z.; Liu, J.

    2017-09-01

    Generally, panoramic image modeling is costly and time-consuming because photographs must be captured continuously along the routes, especially in complicated indoor environments; this difficulty hinders wider business application of panoramic image modeling. A feasible arrangement of panorama site locations is therefore indispensable, because the locations influence the clarity, the coverage, and the number of panoramic images obtainable with a given device. This paper proposes a standard procedure to generate the specific locations and total number of panorama sites in indoor panorama modeling. Firstly, we establish the functional relationship between one panorama site and its objectives, and then apply that relationship to the network of panorama sites. We propose the Distance Clarity functions (FC and Fe), which express the mathematical relationship between clarity and the panorama-objective distance or the obstacle distance, and the Distance Buffer function (FB), modified from the traditional buffer method, which generates the coverage of a panorama site. Secondly, we traverse every point in the feasible area as a possible panorama site and calculate clarity and coverage jointly. Finally, we select as few points as possible, satisfying first the clarity requirement and then the coverage requirement. In the experiments, detailed parameters of the camera lens are given; still, more experimental calibration is needed, given that the relationship between clarity and distance is device dependent. In short, through the functions FC, Fe and FB, the locations of panorama sites can be generated automatically and accurately.
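
    The select-as-few-sites-as-possible step is essentially a set-cover problem, for which a greedy heuristic is a common first cut. The sketch below is a minimal stand-in, assuming a single clarity-threshold distance in place of the paper's FC/Fe/FB functions; the candidate positions, objectives, and threshold are invented.

```python
import numpy as np

def select_sites(candidates, objectives, max_clear_dist):
    """Greedy sketch: pick the fewest candidate sites so every objective lies
    within max_clear_dist (a stand-in for the clarity functions) of a site."""
    # coverage[i, j] is True when candidate i sees objective j clearly enough
    d = np.linalg.norm(candidates[:, None, :] - objectives[None, :, :], axis=2)
    coverage = d <= max_clear_dist
    uncovered, chosen = set(range(len(objectives))), []
    while uncovered:
        gains = [len(uncovered & set(np.flatnonzero(coverage[i])))
                 for i in range(len(candidates))]
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break  # remaining objectives cannot be covered at this clarity level
        chosen.append(best)
        uncovered -= set(np.flatnonzero(coverage[best]))
    return chosen

rng = np.random.default_rng(0)
cands = rng.uniform(0, 50, (40, 2))   # candidate photographing sites (m)
objs = rng.uniform(0, 50, (100, 2))   # objectives to be captured
print("chosen sites:", select_sites(cands, objs, max_clear_dist=12.0))
```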

  2. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.

  3. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.
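
    As a companion to these two records, here is a minimal sketch of the likelihood machinery they describe: a two-parameter Weibull fit that handles type I censoring by letting failures contribute the log-density and run-outs the log-survival function. The optimizer choice and the fatigue data below are illustrative, not taken from the NASA software.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_mle(times, failed):
    """Two-parameter Weibull fit by maximum likelihood with type I censoring."""
    times, failed = np.asarray(times, float), np.asarray(failed, bool)

    def nll(params):
        shape, scale = np.exp(params)  # log-parametrization keeps both positive
        z = times / scale
        logpdf = np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape
        logsurv = -z ** shape          # censored units survive past their test time
        return -(logpdf[failed].sum() + logsurv[~failed].sum())

    res = minimize(nll, x0=np.log([1.5, np.mean(times)]), method="Nelder-Mead")
    return np.exp(res.x)  # (shape, scale)

# illustrative gear-fatigue data: cycles to failure, with two run-outs (censored)
cycles = [4.1e6, 5.6e6, 7.2e6, 8.8e6, 1.1e7, 1.2e7, 1.2e7]
failed = [True, True, True, True, True, False, False]
shape, scale = weibull_mle(cycles, failed)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.3g} cycles")
```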

  4. Distribution and Location of Genetic Effects for Dairy Traits

    USDA-ARS?s Scientific Manuscript database

    Genetic effects for many dairy traits and for total economic merit are fairly evenly distributed across all chromosomes. A high-density scan using 38,416 SNP markers for 5,285 bulls confirmed two previously-known major genes on Bos taurus autosomes (BTA) 6 and 14 but revealed few other large effects...

  5. Location cuing and response time distributions in visual attention.

    PubMed

    Gottlob, Lawrence R

    2004-11-01

    The allocation of visual attention was investigated in two experiments. In Experiment 1 (n = 24), a peripheral cue was presented, and in Experiment 2 (n = 24), a central cue was used. In both experiments, cue validity was 90%, and the task was four-choice target identification. Response time distributions were collected for valid trials over five cue-target stimulus onset asynchronies (SOAs), and ex-Gaussian parameters were extracted. In both experiments, only the mean of the Gaussian component decreased as a function of cue-target SOA, which implied a strict time axis translation of the distributions. The results were consistent with sequential sampling models featuring a variable delay in the onset of information uptake.
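
    Ex-Gaussian parameters like those analysed above can be extracted with standard tools; scipy's exponnorm distribution is exactly the ex-Gaussian with shape K = tau/sigma. The sketch below fits simulated response times (all values invented); in the experiments described above, only the fitted Gaussian mean would shift with cue-target SOA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# simulated valid-trial RTs: Gaussian stage (mu, sigma) plus exponential stage (tau), ms
mu, sigma, tau = 420.0, 45.0, 110.0
rts = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

# scipy's exponnorm is the ex-Gaussian with K = tau / sigma
K, loc, scale = stats.exponnorm.fit(rts)
print(f"mu = {loc:.0f} ms, sigma = {scale:.0f} ms, tau = {K * scale:.0f} ms")
```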

  6. Pretreatment verification of IMRT absolute dose distributions using a commercial a-Si EPID

    SciTech Connect

    Talamonti, C.; Casati, M.; Bucciolini, M.

    2006-11-15

    A commercial amorphous silicon electronic portal imaging device (EPID) has been studied to investigate its potential in the field of pretreatment verifications of step and shoot, intensity modulated radiation therapy (IMRT), 6 MV photon beams. The EPID was calibrated to measure absolute exit dose in a water-equivalent phantom at patient level, following an experimental approach, which does not require sophisticated calculation algorithms. The procedure presented was specifically intended to replace the time-consuming in-phantom film dosimetry. The dosimetric response was characterized on the central axis in terms of stability, linearity, and pulse repetition frequency dependence. The a-Si EPID demonstrated a good linearity with dose (within 2% from 1 monitor unit), which represents a prerequisite for the application in IMRT. A series of measurements, in which phantom thickness, air gap between the phantom and the EPID, field size and position of measurement of dose in the phantom (entrance or exit) varied, was performed to find the optimal calibration conditions, for which the field size dependence is minimized. In these conditions (20 cm phantom thickness, 56 cm air gap, exit dose measured at the isocenter), the introduction of a filter for the low-energy scattered radiation allowed us to define a universal calibration factor, independent of field size. The off-axis extension of the dose calibration was performed by applying a radial correction for the beam profile, distorted due to the standard flood field calibration of the device. For the acquisition of IMRT fields, it was necessary to employ home-made software and a specific procedure. This method was applied for the measurement of the dose distributions for 15 clinical IMRT fields. The agreement between the dose distributions, quantified by the gamma index, was found, on average, in 97.6% and 98.3% of the analyzed points for EPID versus TPS and for EPID versus FILM, respectively, thus suggesting a great potential of the a-Si EPID for routine pretreatment IMRT verification.

  7. Study On Burst Location Technology under Steady-state in Water Distribution System

    NASA Astrophysics Data System (ADS)

    Liu, Xianpin; Li, Shuping; Wang, Shaowei; He, Fang; He, Zhixun; Cao, Guodong

    2010-11-01

    Based on the characteristics of the hydraulic information in a water distribution system under a burst, the correlation between monitoring values and burst location is obtained by mathematical fitting, so that the position of a burst can be located in a timely manner. This method can effectively use the SCADA information in a water distribution system to actively locate a burst position. It offers a new approach to burst location in water distribution systems that shortens the burst duration and reduces the impact on urban water supply, the economic losses, and the waste of water resources.

  8. Redshift Distributions of Galaxies in the DES Science Verification Shear Catalogue and Implications for Weak Lensing

    SciTech Connect

    Bonnett, C.

    2015-07-21

    We present photometric redshift estimates for galaxies used in the weak lensing analysis of the Dark Energy Survey Science Verification (DES SV) data. Four model- or machine learning-based photometric redshift methods (annz2, bpz calibrated against BCC-Ufig simulations, skynet, and tpz) are analysed. For training, calibration, and testing of these methods, we also construct a catalogue of spectroscopically confirmed galaxies matched against DES SV data. The performance of the methods is evaluated against the matched spectroscopic catalogue, focusing on metrics relevant for weak lensing analyses, with additional validation against COSMOS photo-zs. From the galaxies in the DES SV shear catalogue, which have mean redshift 0.72 ± 0.01 over the range 0.3 < z < 1.3, we construct three tomographic bins with means of z = {0.45, 0.67, 1.00}. These bins each have systematic uncertainties δz ≲ 0.05 in the mean of the fiducial skynet photo-z n(z). We propagate the errors in the redshift distributions through to their impact on cosmological parameters estimated with cosmic shear, and find that they cause shifts in the value of σ8 of approximately 3%. This shift is within the one sigma statistical errors on σ8 for the DES SV shear catalogue. We also found that further study of the potential impact of systematic differences on the critical surface density, Σcrit, showed levels of bias safely below the statistical power of the DES SV data. We recommend a final Gaussian prior for the photo-z bias in the mean of n(z) of width 0.05 for each of the three tomographic bins, and show that this is a sufficient bias model for the corresponding cosmology analysis.

  9. Operational flood-forecasting in the Piemonte region - development and verification of a fully distributed physically-oriented hydrological model

    NASA Astrophysics Data System (ADS)

    Rabuffetti, D.; Ravazzani, G.; Barbero, S.; Mancini, M.

    2009-03-01

    A hydrological model for real-time flood forecasting for Civil Protection services requires reliability and rapidity. At present, computational capabilities meet the rapidity requirement even when a fully distributed hydrological model is adopted for a large river catchment such as the Upper Po river basin closed at Ponte Becca (nearly 40 000 km2). This approach allows simulating the whole domain and obtaining the responses of large as well as medium and small sub-catchments. The FEST-WB hydrological model (Mancini, 1990; Montaldo et al., 2007; Rabuffetti et al., 2008) is implemented. The calibration and verification activities are based on more than 100 flood events that occurred along the main tributaries of the Po river in the period 2000-2003. More than 300 meteorological stations are used to obtain the forcing fields; 10 cross sections with continuous and reliable discharge time series are used for calibration, while verification is performed on about 40 monitored cross sections. Furthermore, meteorological forecasting models are used to force the hydrological model with Quantitative Precipitation Forecasts (QPFs) over a 36 h horizon in "operational setting" experiments. Particular care is devoted to understanding how the QPF affects the accuracy of the Quantitative Discharge Forecasts (QDFs) and to assessing the impact of QDF uncertainty on the reliability of the warning system. Results are presented both in terms of QDFs and of warning issues, highlighting the importance of an "operational based" verification approach.

  10. Experimental verification of distributed piezoelectric actuators for use in precision space structures

    NASA Technical Reports Server (NTRS)

    Crawley, E. F.; De Luis, J.

    1986-01-01

    An analytic model for structures with distributed piezoelectric actuators is experimentally verified for the cases of both surface-bonded and embedded actuators. A technique for the selection of such piezoelectric actuators' location has been developed, and is noted to indicate that segmented actuators are always more effective than continuous ones, since the output of each can be individually controlled. Manufacturing techniques for the bonding or embedding of segmented piezoelectric actuators are also developed which allow independent electrical contact to be made with each actuator. Static tests have been conducted to determine how the elastic properties of the composite are affected by the presence of an embedded actuator, for the case of glass/epoxy laminates.

  11. Locational Marginal Pricing in the Campus Power System at the Power Distribution Level

    SciTech Connect

    Hao, Jun; Gu, Yi; Zhang, Yingchen; Zhang, Jun Jason; Gao, David Wenzhong

    2016-11-14

    In the development of the smart grid at the distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies the methodology of locational marginal pricing at the distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distribution nodal prices. Both Direct Current Optimal Power Flow and Alternating Current Optimal Power Flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate the pricing methodology.
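
    To make the pricing mechanism concrete, here is a minimal sketch, not the paper's model: a lossless two-bus DC-OPF solved as a linear program, with the nodal price recovered as the marginal cost of serving one more unit of load at each bus (computed by finite differences for robustness across SciPy versions). The costs, limits, and loads are invented.

```python
from scipy.optimize import linprog

def dcopf_cost(d1, d2, line_limit=60.0):
    """Minimal lossless two-bus DC-OPF: cheap generator at bus 1, expensive at bus 2.
    Variables are the two generator outputs; returns the optimal dispatch cost."""
    c = [20.0, 50.0]                       # $/MWh marginal costs
    A_eq = [[1.0, 1.0]]                    # total generation balances total load
    b_eq = [d1 + d2]
    A_ub = [[1.0, 0.0], [-1.0, 0.0]]       # |flow 1->2| = |g1 - d1| <= line limit
    b_ub = [line_limit + d1, line_limit - d1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 100), (0, 100)], method="highs")
    return res.fun

# locational marginal price = d(cost)/d(load) at each bus, by finite difference
eps, d1, d2 = 1e-3, 20.0, 80.0
base = dcopf_cost(d1, d2)
print(f"LMP bus 1: {(dcopf_cost(d1 + eps, d2) - base) / eps:.1f} $/MWh")  # ~20
print(f"LMP bus 2: {(dcopf_cost(d1, d2 + eps) - base) / eps:.1f} $/MWh")  # ~50
```

    When the line is congested, the nodal prices separate (about 20 versus 50 $/MWh in this toy case), which is exactly the locational effect a distribution-level LMP is meant to expose.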

  12. A Distribution-class Locational Marginal Price (DLMP) Index for Enhanced Distribution Systems

    NASA Astrophysics Data System (ADS)

    Akinbode, Oluwaseyi Wemimo

    The smart grid initiative is the impetus behind changes that are expected to culminate into an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability and that can optimally couple the transmission and the distribution systems. The DLMP is calculated from a two-stage optimization problem; a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price sensitive resources for the transmission system problem and vice versa is developed and its convergence problem is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis explores, with theoretical proofs, the breakdown of the loss approximation technique when non-positive DLMPs/LMPs occur and discusses a mixed integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems and its superiority to contemporary pricing mechanisms is demonstrated using price responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate. Individual load

  13. Robustness of location estimators under t-distributions: a literature review

    NASA Astrophysics Data System (ADS)

    Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.

    2017-03-01

    The assumption of normality is commonly used in the estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since t-distributions have longer tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration, we use onion yield data that includes outliers as a case study, and show that the t model produces a better fit than the normal model.
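
    The practical point, that a t likelihood shields the location estimate from outliers, is easy to demonstrate with scipy; the data below are synthetic stand-ins for an outlier-contaminated yield series.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# yield-like data with a few gross outliers
data = np.concatenate([rng.normal(50.0, 5.0, 95),
                       [120.0, 150.0, 180.0, 10.0, 5.0]])

# heavy-tailed model: the fitted location barely moves under contamination
df, loc_t, scale_t = stats.t.fit(data)
print(f"sample mean (normal MLE): {data.mean():.1f}")
print(f"t-distribution location:  {loc_t:.1f}  (df = {df:.1f})")
print(f"median (reference):       {np.median(data):.1f}")
```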

  14. Logistics Distribution Center Location Evaluation Based on Genetic Algorithm and Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Shao, Yuxiang; Chen, Qing; Wei, Zhenhua

    Logistics distribution center location evaluation is a dynamic, fuzzy, open and complicated nonlinear system, which makes it difficult to evaluate a distribution center location with traditional analysis methods. This paper proposes a distribution center location evaluation system which uses a fuzzy neural network combined with a genetic algorithm. In this model, the neural network is adopted to construct the fuzzy system. By using the genetic algorithm, the parameters of the neural network are optimized and trained so as to improve the fuzzy system's abilities of self-learning and self-adaptation. Finally, the sampled data are used for training and testing in Matlab. The simulation results indicate that the proposed identification model has very small errors.

  15. A location-routing-inventory model for designing multisource distribution networks

    NASA Astrophysics Data System (ADS)

    Ahmadi-Javid, Amir; Seddighi, Amir Hossein

    2012-06-01

    This article studies a ternary-integration problem that incorporates location, inventory and routing decisions in designing a multisource distribution network. The objective of the problem is to minimize the total cost of location, routing and inventory. A mixed-integer programming formulation is first presented, and then a three-phase heuristic is developed to solve large-sized instances of the problem. The numerical study indicates that the proposed heuristic is both effective and efficient.

  16. Geographic location, network patterns and population distribution of rural settlements in Greece

    NASA Astrophysics Data System (ADS)

    Asimakopoulos, Avraam; Mogios, Emmanuel; Xenikos, Dimitrios G.

    2016-10-01

    Our work addresses the problem of how social networks are embedded in space, by studying the spread of human population over complex geomorphological terrain. We focus on villages and small cities of up to a few thousand inhabitants located in mountainous areas in Greece. This terrain presents a familiar tree-like structure of valleys and land plateaus. Cities are found more often at lower altitudes and exhibit a preference for southern orientation. Furthermore, the population generally avoids flat land plateaus and river beds, preferring locations slightly uphill, away from the plateau edge. Despite the location diversity regarding geomorphological parameters, we find certain quantitative norms when we examine location and population distributions relative to the (man-made) transportation network. In particular, settlements at radial distance ℓ away from road network junctions have the same mean altitude, practically independent of ℓ ranging from a few meters to 10 km. Similarly, the distribution of the settlement population at any given ℓ is the same for all ℓ. Finally, the cumulative distribution of the number of rural cities n(ℓ) is fitted to the Weibull distribution, suggesting that human decisions for creating settlements could be paralleled to mechanisms typically attributed to this particular statistical distribution.

  17. Optimization of pressure gauge locations for water distribution systems using entropy theory.

    PubMed

    Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon

    2012-12-01

    It is essential to select the optimal pressure gauge locations for effective management and maintenance of water distribution systems. This study proposes an objective, quantified standard for selecting the optimal pressure gauge locations by defining, using entropy theory, the pressure change at other nodes that results from a demand change at a specific node. Two cases of demand change are considered: one in which demand at all nodes shows peak load, obtained by using a peak factor, and one comprising normally distributed demand changes whose average is the base demand. The actual pressure change pattern is determined by using the emitter function of EPANET to reflect the pressure that changes in practice at each node. The optimal pressure gauge location is determined by prioritizing the node that processes the largest amount of information given to (giving entropy) and received from (receiving entropy) the whole system, according to the entropy standard. The suggested model is applied to one virtual and one real pipe network, and the optimal pressure gauge location combination is calculated through sensitivity analysis based on the study results. These analysis results support the following two conclusions. Firstly, the installation priority of pressure gauges in water distribution networks can be determined with a more objective standard through entropy theory. Secondly, the model can be used as an efficient decision-making guide for gauge installation in water distribution systems.
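
    A minimal sketch of the giving/receiving entropy ranking follows. It assumes a hypothetical sensitivity matrix dP whose entry (i, j) is the pressure change at node i caused by a demand change at node j; in the study this would come from EPANET emitter runs, whereas here it is random for illustration.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# hypothetical sensitivity matrix: dP[i, j] = pressure change at node i
# caused by a demand change at node j (in the study, from EPANET runs)
rng = np.random.default_rng(3)
dP = np.abs(rng.normal(0.0, 1.0, (6, 6)))

# giving entropy of node j: how widely its demand change spreads over the system
giving = np.array([shannon_entropy(dP[:, j] / dP[:, j].sum()) for j in range(6)])
# receiving entropy of node i: how much of the system's changes it registers
receiving = np.array([shannon_entropy(dP[i, :] / dP[i, :].sum()) for i in range(6)])

priority = np.argsort(giving + receiving)[::-1]
print("gauge installation priority (node indices):", priority)
```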

  18. A method to optimize sampling locations for measuring indoor air distributions

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan

    2015-02-01

    Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected for interpolation to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, the labor costs, and the accuracy of the field interpolation. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method has a 32.6% smaller interpolation error than the grid sampling method. We derived the functional relationship between the interpolation error and the sampling size (the number of sampling points). According to this function, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
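
    The core of a gradient-based placement can be sketched in a few lines: given a coarse prior estimate of the field (for example from a quick CFD run), sample where the field varies fastest. The grid, the toy temperature field, and the plain top-k selection below are all illustrative; a practical version would also enforce a minimum spacing between chosen points.

```python
import numpy as np

def gradient_based_samples(field, n_samples):
    """Pick sampling locations where a coarse prior field varies fastest,
    a simplified stand-in for the paper's gradient-based method."""
    gy, gx = np.gradient(field)
    score = np.hypot(gx, gy)                       # local gradient magnitude
    flat = np.argsort(score.ravel())[::-1][:n_samples]
    return np.column_stack(np.unravel_index(flat, field.shape))

# toy prior: a warm jet entering a room, on a 40 x 60 grid
y, x = np.mgrid[0:40, 0:60]
temperature = 20.0 + 6.0 * np.exp(-((x - 15) ** 2 + (y - 20) ** 2) / 60.0)
print(gradient_based_samples(temperature, n_samples=8))
```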

  19. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    PubMed Central

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis. In this, autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to get the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take into account two different types of AE source for location. PMID:24141266

  20. Acoustic emission source location using a distributed feedback fiber laser rosette.

    PubMed

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-10-17

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis. In this, autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to get the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take into account two different types of AE source for location.

  1. Multiple human location in a distributed binary pyroelectric infrared sensor network

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wei, Qifan; Zhang, Meng

    2017-09-01

    This paper proposes a multi-human locating method for a distributed wireless sensor network with binary pyroelectric infrared sensors. The uniformly deployed infrared sensor network consists of one sink node and nine sensor nodes, which detect the infrared signatures of moving human targets. An anti-logic bearing-crossing location and clustering algorithm is proposed to locate different targets. Firstly, dynamic virtual detection lines are generated based on the angular bisector of each sensor's FOV (field of view), and all intersection points of these detection lines are primary measurement points. The locations of multiple human targets are obtained by first clustering the primary measurement points and then assigning these clusters to each target, which simplifies the assignment problem from multiple points to several clusters. Finally, an anti-logic filtering method over the primary measurement points is used to get the location result for each target. Simulation and experimental results have shown that the measurement points can be obtained and assigned to different targets effectively, and the proposed location method can locate and track two human targets well.
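
    The bearing-crossing step can be prototyped directly: intersect the sensors' bearing rays to get candidate points, then cluster nearby crossings. The sketch below is a simplified stand-in for the paper's algorithm; the sensor positions and bearings are invented, and the spurious "ghost" crossings that survive clustering would still need the assignment and filtering stages described above.

```python
import numpy as np
from scipy.cluster.hierarchy import fclusterdata

def cross_bearings(p1, az1, p2, az2):
    """Intersect two bearing rays; azimuths in radians, measured from north."""
    d1 = np.array([np.sin(az1), np.cos(az1)])
    d2 = np.array([np.sin(az2), np.cos(az2)])
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None                      # nearly parallel lines of bearing
    t, s = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    if t < 0 or s < 0:
        return None                      # crossing lies behind a sensor
    return np.asarray(p1, float) + t * d1

# hypothetical sensor nodes: position and bearings (deg) of detected targets
nodes = [((0.0, 0.0), [35.0, 60.0]),
         ((10.0, 0.0), [-30.0, -60.0]),
         ((5.0, 12.0), [170.0, 200.0])]

points = []
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        for a1 in nodes[i][1]:
            for a2 in nodes[j][1]:
                p = cross_bearings(nodes[i][0], np.deg2rad(a1),
                                   nodes[j][0], np.deg2rad(a2))
                if p is not None:
                    points.append(p)

labels = fclusterdata(np.array(points), t=2.5, criterion="distance")
for k in np.unique(labels):
    c = np.array(points)[labels == k].mean(axis=0)
    print(f"cluster {k}: centroid ({c[0]:.1f}, {c[1]:.1f})")
```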

  2. Spatially distributed energy balance snowmelt modelling in a mountainous river basin: estimation of meteorological inputs and verification of model results

    NASA Astrophysics Data System (ADS)

    Garen, David C.; Marks, Danny

    2005-12-01

    A spatially distributed energy balance snowmelt model has been applied to a 2150 km2 drainage basin in the Boise River, ID, USA, to simulate the accumulation and melt of the snowpack for the years 1998-2000. The simulation was run at a 3 h time step and a spatial resolution of 250 m. Spatial field time series of meteorological input data were obtained using various spatial interpolation and simulation methods. The variables include precipitation, air temperature, dew point temperature, wind speed, and solar and thermal radiation. The goal was to use readily available data and relatively straightforward, yet physically meaningful, methods to develop the spatial fields. With these meteorological fields as input, the simulated fields of snow water equivalent, snow depth, and snow covered area reproduce observations very well. The simulated snowmelt fields are also used as input to a spatially distributed hydrologic model to estimate streamflow. This gives an additional verification of the snowmelt modelling results as well as provides a linkage of the two models to generate hydrographs for water management information. This project is a demonstration of spatially distributed energy balance snowmelt modelling in a large mountainous catchment using data from existing meteorological networks. This capability then suggests the potential for developing new spatial hydrologic informational products and the possibility of improving the accuracy of the prediction of hydrologic processes for water and natural resources management.

  3. Location, identification, and size distribution of depleted uranium grains in reservoir sediments.

    PubMed

    Lo, D; Fleischer, R L; Albert, E A; Arnason, J G

    2006-01-01

    The location, nature, and size distribution of uranium-rich grains in sediment layers can be identified by sunbursts of etched particle tracks if each sample is pressed against a track detector, next irradiated with thermal neutrons, and the detectors then chemically etched to reveal fission tracks. The total track abundance from the sample is a measure of the 235U content; hence, if the bulk uranium (mostly 238U) has been measured, the two sets of results give the depletion or enrichment of the uranium. Sunbursts of tracks mark the locations of low-abundance, high-uranium grains allowing them to be singled out for further study.

  4. Tomotherapy dose distribution verification using MAGIC-f polymer gel dosimetry

    SciTech Connect

    Pavoni, J. F.; Pike, T. L.; Snow, J.; DeWerd, L.; Baffa, O.

    2012-05-15

    Purpose: This paper presents the application of MAGIC-f gel in a three-dimensional dose distribution measurement and its ability to accurately measure the dose distribution from a tomotherapy unit. Methods: A prostate intensity-modulated radiation therapy (IMRT) irradiation was simulated in the gel phantom, and the treatment was delivered by a TomoTherapy unit. The dose distribution was evaluated by the R2 distribution measured in magnetic resonance imaging. Results: A high similarity was found by overlapping the isodoses of the dose distribution measured with the gel and that expected by the treatment planning system (TPS). Another analysis was done by comparing the relative absorbed dose profiles in the measured and expected dose distributions extracted along indicated lines of the volume, and the results were also in agreement. The gamma index analysis was also applied to the data, and a high pass rate was achieved (88.4% for analysis using 3%/3 mm and 96.5% using 4%/4 mm). The real three-dimensional analysis compared the dose-volume histograms measured for the planning volumes with those expected by the treatment planning, with the results also in good agreement, as shown by the overlap of the curves. Conclusions: These results show that MAGIC-f gel is promising for three-dimensional dose distribution measurements.

  5. Tomotherapy dose distribution verification using MAGIC-f polymer gel dosimetry.

    PubMed

    Pavoni, J F; Pike, T L; Snow, J; DeWerd, L; Baffa, O

    2012-05-01

    This paper presents the application of MAGIC-f gel in a three-dimensional dose distribution measurement and its ability to accurately measure the dose distribution from a tomotherapy unit. A prostate intensity-modulated radiation therapy (IMRT) irradiation was simulated in the gel phantom, and the treatment was delivered by a TomoTherapy unit. The dose distribution was evaluated by the R2 distribution measured in magnetic resonance imaging. A high similarity was found by overlapping the isodoses of the dose distribution measured with the gel and that expected by the treatment planning system (TPS). Another analysis was done by comparing the relative absorbed dose profiles in the measured and expected dose distributions extracted along indicated lines of the volume, and the results were also in agreement. The gamma index analysis was also applied to the data, and a high pass rate was achieved (88.4% for analysis using 3%/3 mm and 96.5% using 4%/4 mm). The real three-dimensional analysis compared the dose-volume histograms measured for the planning volumes with those expected by the treatment planning, with the results also in good agreement, as shown by the overlap of the curves. These results show that MAGIC-f gel is promising for three-dimensional dose distribution measurements.

  6. A simple method to determine leakage location in water distribution based on pressure profiles

    NASA Astrophysics Data System (ADS)

    Prihtiadi, Hafizh; Azwar, Azrul; Djamal, Mitra

    2016-03-01

    Nowadays, pipeline leaks are a serious problem for water distribution in big cities, one that demands action and effective solutions. Several techniques have been developed to improve location accuracy, limit losses, and decrease environmental damage, but these methods require costly and complex equipment. This paper presents a simple method to determine the leak location using gradient-intersection calculations. A simple water distribution test system was built from a 4 m PVC pipeline, 15 mm in diameter, with 12 pressure sensors placed along it. Each sensor measured the pressure at its point and sent the data to a microcontroller. An artificial hole was made between the sixth and seventh sensors. Over three holes, the system calculated and analyzed the leak location with an error of 3.67%.
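
    A minimal sketch of the gradient-intersection calculation follows: fit separate hydraulic grade lines to the sensors upstream and downstream of the leak and intersect them. The pressures, slopes, and split index below are invented to mimic the 4 m, 12-sensor test line; a full implementation would also search over the split index rather than assume it.

```python
import numpy as np

def locate_leak(positions, pressures, split):
    """Gradient-intersection sketch: fit separate pressure-gradient lines to the
    sensors upstream and downstream of a trial split; the leak sits where they cross."""
    up = np.polyfit(positions[:split], pressures[:split], 1)      # [slope, intercept]
    down = np.polyfit(positions[split:], pressures[split:], 1)
    # intersection of the two fitted lines a1*x + b1 = a2*x + b2
    return (down[1] - up[1]) / (up[0] - down[0])

# hypothetical 4 m test line with 12 sensors; leak imposed at 2.1 m
x = np.linspace(0.2, 4.0, 12)
p = np.where(x < 2.1, 3.000 - 0.300 * x, 2.685 - 0.150 * x)   # toy pressure profiles
p += np.random.default_rng(4).normal(0, 0.002, x.size)        # sensor noise
print(f"estimated leak location: {locate_leak(x, p, split=6):.2f} m")
```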

  7. Role of origin and release location in pre-spawning distribution and movements of anadromous alewife

    USGS Publications Warehouse

    Frank, Holly J.; Mather, M. E.; Smith, Joseph M.; Muth, Robert M.; Finn, John T.

    2011-01-01

    Capturing adult anadromous fish that are ready to spawn from a self sustaining population and transferring them into a depleted system is a common fisheries enhancement tool. The behaviour of these transplanted fish, however, has not been fully evaluated. The movements of stocked and native anadromous alewife, Alosa pseudoharengus (Wilson), were monitored in the Ipswich River, Massachusetts, USA, to provide a scientific basis for this management tool. Radiotelemetry was used to examine the effect of origin (native or stocked) and release location (upstream or downstream) on distribution and movement during the spawning migration. Native fish remained in the river longer than stocked fish regardless of release location. Release location and origin influenced where fish spent time and how they moved. The spatial mosaic of available habitats and the entire trajectory of freshwater movements should be considered to restore effectively spawners that traverse tens of kilometres within coastal rivers.

  8. Biomechanical Assessment of Rucksack Shoulder Strap Attachment Location: Effect on Load Distribution to the Torso

    DTIC Science & Technology

    2001-05-01


  9. Location of lightning stroke on OPGW by use of distributed optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Lu, Lidong; Liang, Yun; Li, Binglin; Guo, Jinghong; Zhang, Hao; Zhang, Xuping

    2014-12-01

    A new method based on a distributed optical fiber sensor (DOFS) to locate the position of a lightning stroke on the optical fiber ground wire (OPGW) is proposed and experimentally demonstrated. In the method, the lightning stroke is treated as a heat release process at the stroke position, so Brillouin optical time domain reflectometry (BOTDR) with a spatial resolution of 2 m is used as the distributed temperature sensor. To simulate the lightning stroke, an electric anode with a high pulsed current and a negative electrode (the OPGW) are adopted to form a lightning impulse system with a duration of 200 ms. In the experiment, lightning strokes with charge transfers of 100 C and 200 C are generated, and the DOFS can sensitively capture the temperature change at the stroke position during the transient discharge. Experimental results show that a DOFS is a feasible instrument for locating lightning strokes on the OPGW, and it has excellent potential for the maintenance of electric power transmission lines. Additionally, as the extent of a lightning stroke is usually within 10 cm while the spatial resolution of a typical DOFS is 1 m or more, the temperature characteristics in such a small area cannot be accurately represented by a DOFS with a large spatial resolution. Therefore, for further application of distributed optical fiber temperature sensors, such as BOTDR and ROTDR, to lightning stroke location on OPGW, it is important to enhance the spatial resolution.
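
    Locating the stroke from BOTDR output reduces to finding where the temperature trace jumps between consecutive scans. The sketch below is illustrative only: the traces are synthetic, the 2 m sampling mimics the sensor above, and the detection threshold is an arbitrary choice.

```python
import numpy as np

def locate_hotspot(distance_m, temp_before, temp_after, threshold=3.0):
    """Locate a strike hot spot as the position where the temperature change
    between consecutive BOTDR traces peaks above a threshold (degrees C)."""
    dT = temp_after - temp_before
    peak = int(np.argmax(dT))
    return (distance_m[peak], dT[peak]) if dT[peak] >= threshold else (None, dT[peak])

# toy traces at the sensor's 2 m sampling: a strike heats the fibre near 1.5 km
z = np.arange(0.0, 5000.0, 2.0)
base = 25.0 + 0.3 * np.random.default_rng(5).standard_normal(z.size)
after = base + 12.0 * np.exp(-((z - 1500.0) / 4.0) ** 2)   # localized heating
pos, rise = locate_hotspot(z, base, after)
print(f"strike located at {pos:.0f} m, temperature rise {rise:.1f} degC")
```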

  10. Improvement on post-OPC verification efficiency for contact/via coverage check by final CD biasing of metal lines and considering their location on the metal layout

    NASA Astrophysics Data System (ADS)

    Kim, Youngmi; Choi, Jae-Young; Choi, Kwangseon; Choi, Jung-Hoe; Lee, Sooryong

    2011-04-01

    As IC design complexity keeps increasing, it is more and more difficult to ensure pattern transfer after optical proximity correction (OPC), due to the continuous reduction of layout dimensions and the lithographic limits set by the k1 factor. To guarantee imaging fidelity, resolution enhancement technologies (RET) such as off-axis illumination (OAI), different types of phase shift masks, and OPC techniques have been developed. In the case of model-based OPC, post-OPC verification solutions for cross-checking the contour image against the target layout continue to be developed: methods for generating contours and matching them to the target structure, and methods for filtering and sorting patterns to eliminate false errors and duplicate patterns. Detecting only real errors while excluding false errors is the most important requirement for an accurate and fast verification process, saving not only review time and engineering resources but also overall wafer process time. In the general case of post-OPC verification for metal-contact/via coverage (CC) checks, the verification solution outputs a huge number of errors due to borderless design, so it is too difficult to review and correct all of them. This can cause the OPC engineer to miss real defects and, at the least, delay time to market. In this paper, we studied methods for increasing the efficiency of post-OPC verification, especially for the CC check. For metal layers, the final CD after the etch process shows various CD biases depending on the distance to neighboring patterns, so it is more reasonable to consider the final metal shape when confirming contact/via coverage. Through the optimization of biasing rules for different pitches and shapes of metal lines, we obtained more accurate and efficient verification results and decreased the review time needed to find real errors. We suggest increasing the efficiency of the OPC verification process by applying a simple biasing rule to the metal layout instead of an etch model.

  11. Experimental Verification of Modeled Thermal Distribution Produced by a Piston Source in Physiotherapy Ultrasound

    PubMed Central

    Gutierrez, M. I.; Lopez-Haro, S. A.; Vera, A.; Leija, L.

    2016-01-01

    Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device and to show the dependency among thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the produced thermal pattern to be compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in muscle phantom. The insertion place of thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field having different temperature profiles (errors 10% to 20%). Experimental field was concentrated near the transducer producing a region with higher temperatures, while the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when introducing the measured acoustic field as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to the acoustic field distributions. PMID:27999801

  13. Estimation of distributed Fermat-point location for wireless sensor networking.

    PubMed

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for wireless sensor networks (WSNs) based on a connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location using the triangular area formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point minimizing the total distance to the three vertices of the triangle, and the estimated location area is then refined using the Fermat point to minimize the error in estimating sensor node locations. DFPLE avoids the large errors and poor performance of localization schemes based on bounding-box algorithms. Performance analysis in a 200-node deployment reveals that when the number of sensor nodes is below 150, the mean error decreases rapidly as node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as node density increases. Furthermore, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacons to estimate their locations; above 60 beacons, however, the mean error changes only slightly. Simulation results show that the proposed algorithm estimates sensor positions more accurately than existing algorithms and improves upon conventional bounding-box strategies.
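
    The Fermat point used by DFPLE is the point minimizing the summed distance to the triangle's three vertices; it has no simple closed form in general, but Weiszfeld's fixed-point iteration converges to it. A small sketch (the beacon coordinates are illustrative):

        import numpy as np

        def fermat_point(vertices, iters=100, eps=1e-9):
            """Weiszfeld iteration: point minimizing total distance to the vertices."""
            pts = np.asarray(vertices, dtype=float)
            x = pts.mean(axis=0)                  # start at the centroid
            for _ in range(iters):
                d = np.maximum(np.linalg.norm(pts - x, axis=1), eps)  # avoid div by zero
                w = 1.0 / d
                x_new = (pts * w[:, None]).sum(axis=0) / w.sum()
                if np.linalg.norm(x_new - x) < eps:
                    break
                x = x_new
            return x

        print(fermat_point([(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]))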

  14. Verification, Validation, and Accreditation Challenges of Distributed Simulation for Space Exploration Technology

    NASA Technical Reports Server (NTRS)

    Thomas, Danny; Hartway, Bobby; Hale, Joe

    2006-01-01

    Throughout its rich history, NASA has invested heavily in sophisticated simulation capabilities. These capabilities reside in NASA facilities across the country - and with partners around the world. NASA's Exploration Systems Mission Directorate (ESMD) has the opportunity to leverage these considerable investments to resolve technical questions relating to its missions. The distributed nature of the assets, both geographic and organizational, presents challenges to their combined and coordinated use, but precedents of geographically distributed real-time simulations exist. This paper shows how technological advances in simulation can be employed to address the issues associated with netting NASA simulation assets.

  15. Verification of 3D Dose Distributions of a Beta-Emitting Radionuclide Using PRESAGE Dosimeters

    NASA Astrophysics Data System (ADS)

    Crowder, Mandi; Grant, Ryan; Ibbott, Geoff; Wendt, Richard

    2010-11-01

    Liquid brachytherapy involves the direct administration of a beta-emitting radioactive solution into the selected tissue. The solution does not migrate from the injection point, and the limited range of beta particles produces a localized three-dimensional dose distribution. We simulated distributions with beta-dose kernels and validated those estimates by irradiating PRESAGE polyurethane dosimeters, which measure three-dimensional dose distributions through a change in optical density proportional to dose. The dosimeters were injected with the beta-emitting radionuclide yttrium-90, exposed for 5.75 days, imaged with optical tomography, and analyzed with radiotherapy software. Dosimeters irradiated with an electron beam to 2 or 3 Gy were used for calibration. The shapes and dose distributions in the PRESAGE dosimeters were consistent with the predicted dose kernels. These experiments lay the groundwork for future application to individualized patient therapy, ultimately by designing a treatment plan that conforms to the shape of any appropriate tumor.

  16. Specification/Verification of Temporal Properties for Distributed Systems: Issues and Approaches. Volume 1

    DTIC Science & Technology

    1990-02-01

    Philip A. Bernstein and Nathan Goodman. Concurrency control in distributed database systems. ACM Computing Surveys, 13(2):185-221, June 1981. [5] K. J. ... Communicating Sequential Processes. Series in Computer Science. Prentice-Hall International, Englewood Cliffs, NJ, 1985. [24] A. L. Hopkins Jr., T. Basil Smith III, and J. ...

  17. Gas Chromatographic Verification of a Mathematical Model: Product Distribution Following Methanolysis Reactions.

    ERIC Educational Resources Information Center

    Lam, R. B.; And Others

    1983-01-01

    Investigated application of binomial statistics to equilibrium distribution of ester systems by employing gas chromatography to verify the mathematical model used. Discusses model development and experimental techniques, indicating the model enables a straightforward extension to symmetrical polyfunctional esters and presents a mathematical basis…

  19. Sensor Location Problem Optimization for Traffic Network with Different Spatial Distributions of Traffic Information.

    PubMed

    Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian

    2016-10-27

    To obtain adequate traffic information, the density of traffic sensors should be high enough to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic in practice because of the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial properties. A maximum benefit model and a simplified version of it are proposed to solve the traffic sensor location problem, and the relationships between the benefit and the number of sensors are formulated for the different credibility functions. The models and algorithms are then extended to obtain analytic results: for each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived. Finally, a numerical example verifies the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters across an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of the end restrictions but depends on the model parameters that represent the physical conditions of the sensors and roads.
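
    The paper's credibility functions are not reproduced in the abstract, but the benefit-versus-count trade-off can be illustrated with an assumed exponentially decaying credibility around each sensor and a fixed per-sensor cost (every parameter below is invented):

        import numpy as np

        L, lam, cost = 10_000.0, 400.0, 0.8   # segment length (m), decay scale (m), per-sensor cost

        def net_benefit(n):
            """Mean credibility over the segment with n evenly spaced sensors, minus cost."""
            x = np.linspace(0.0, L, 2000)
            sensors = (np.arange(n) + 0.5) * (L / n)            # even spacing
            d = np.abs(x[:, None] - sensors[None, :]).min(axis=1)
            return np.exp(-d / lam).mean() * 100.0 - cost * n   # benefit scaled to 0-100

        best = max(range(1, 40), key=net_benefit)
        print(best, round(net_benefit(best), 2), "spacing:", L / best, "m")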

  1. Optimum aggregation of geographically distributed flexible resources in strategic smart-grid/microgrid locations

    DOE PAGES

    Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte; ...

    2017-05-17

    This paper determines optimum aggregation areas for a given distribution network considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and to aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. It not only helps distribution system operators make faster operational decisions by understanding at which time of day, from which specific area, and in what amount flexibility will be needed, but also enables the flexibility stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation, where a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performance with and without aggregation. For a given network configuration, the steady-state performance of the aggregated network is highly accurate (≈ ±1.5% error), compared with the very high errors associated with forecasting individual consumer demand.

  2. Locations of Sampling Stations for Water Quality Monitoring in Water Distribution Networks.

    PubMed

    Rathi, Shweta; Gupta, Rajesh

    2014-04-01

    Water quality must be monitored at salient locations in water distribution networks (WDNs) to assure the safe quality of water supplied to consumers. Such monitoring stations (MSs) provide warning against accidental contamination. Various objectives, such as demand coverage, time to detection, volume of water contaminated before detection, extent of contamination, expected population affected prior to detection, and detection likelihood, have been considered independently or jointly in determining the optimal number and location of MSs in WDNs. Demand coverage, defined as the percentage of network demand monitored by a particular monitoring station, is a simple measure for locating MSs. Several methods based on formulating a coverage matrix with a pre-specified coverage criterion, followed by optimization, have been suggested. The coverage criterion is the minimum percentage of the total flow received at a monitoring station that must have passed through an upstream node for that node to be counted as covered by the station. The number of monitoring stations increases with the value of the coverage criterion, which makes the design of monitoring stations subjective. A simple methodology is proposed herein that iteratively selects MSs in priority order to achieve a targeted demand coverage. The proposed methodology provided the same number and locations of MSs for an illustrative network as an optimization method did. Further, the proposed method is simple and avoids the subjectivity that could arise from the choice of coverage criterion. The methodology is also applied to the WDN of the Dharampeth zone (Nagpur city, Maharashtra, India), which has 285 nodes and 367 pipes.
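
    The priority-wise iterative selection can be read as a greedy cover over node demands. A minimal sketch under that reading (the demands and coverage sets below are invented, not the Dharampeth data):

        # Each candidate station covers the upstream nodes meeting the coverage
        # criterion; demands are in arbitrary flow units (all data invented).
        demand = {"A": 30, "B": 20, "C": 25, "D": 15, "E": 10}
        covers = {"s1": {"A", "B"}, "s2": {"B", "C", "D"}, "s3": {"D", "E"}}

        def select_stations(target_fraction=0.9):
            total = sum(demand.values())
            covered, chosen = set(), []
            while sum(demand[n] for n in covered) < target_fraction * total:
                # Pick the station adding the most still-uncovered demand.
                best = max(covers, key=lambda s: sum(demand[n] for n in covers[s] - covered))
                if not covers[best] - covered:
                    break                  # target unreachable with these candidates
                chosen.append(best)
                covered |= covers[best]
            return chosen, sum(demand[n] for n in covered) / total

        print(select_stations())           # (['s2', 's1'], 0.9)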

  3. Locating single-phase earth faults in a tree-structured distribution network

    NASA Astrophysics Data System (ADS)

    Guo, Li-Ping

    2013-07-01

    This paper proposes a method that combines C-type traveling-wave ranging with analysis and selection based on the line-mode component to locate the fault point. A high-amplitude, narrow pulse is injected at the head of the line, and the arrival time of the returning waveform is detected. By comparing the waveforms under normal and fault conditions, the arrival time of the wave reflected from the fault is obtained, from which the fault distance is determined. Because a fault excites traveling-wave oscillations, the faulted branch can be identified by comparing the oscillation durations of the branch lines: the line with the longest duration of oscillation is the faulted one. Theoretical analysis, Matlab simulation, and analysis of selected data confirm the correctness of the method and demonstrate that it is practical for fault location in distribution networks.
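
    The ranging step comes down to timing the reflection: with traveling-wave velocity v and two-way delay Δt between the injected pulse and the fault-reflected wave, the fault distance is d = v·Δt/2. A quick check with assumed values:

        v = 2.9e8          # assumed traveling-wave velocity on the line (m/s)
        dt = 68e-6         # measured delay between injection and fault reflection (s)
        d = v * dt / 2     # halve the two-way travel to get the one-way distance
        print(f"fault at ~{d / 1000:.1f} km")   # ~9.9 km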

  4. Distributed fiber sensing system with wide frequency response and accurate location

    NASA Astrophysics Data System (ADS)

    Shi, Yi; Feng, Hao; Zeng, Zhoumo

    2016-02-01

    A distributed fiber sensing system merging a Mach-Zehnder interferometer and a phase-sensitive optical time domain reflectometer (Φ-OTDR) is demonstrated for vibration measurement, which requires a wide frequency response and accurate location. Two narrow-linewidth lasers with slightly different wavelengths are used to form the interferometer and the reflectometer, respectively; a narrowband fiber Bragg grating separates the two wavelengths. In addition, heterodyne detection is applied to maintain the signal-to-noise ratio of the locating signal. Experimental results show that the system has a wide frequency response from 1 Hz to 50 MHz, limited by the sampling frequency of the data acquisition card, and a spatial resolution of 20 m, corresponding to the 200 ns pulse width, along a 2.5 km fiber link.
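
    The quoted 20 m spatial resolution for a 200 ns pulse follows from Δz = c·τ/(2n), where n is the fiber group index. A quick check (the group index value is an assumption):

        c = 2.998e8        # vacuum speed of light (m/s)
        n = 1.468          # assumed group index of standard single-mode fiber
        tau = 200e-9       # pulse width (s)
        print(f"spatial resolution ~ {c * tau / (2 * n):.1f} m")   # ~20.4 m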

  5. Nanofibre distribution in composites manufactured with epoxy reinforced with nanofibrillated cellulose: model prediction and verification

    NASA Astrophysics Data System (ADS)

    Aitomäki, Yvonne; Westin, Mikael; Korpimäki, Jani; Oksman, Kristiina

    2016-07-01

    In this study a model based on simple scattering is developed and used to predict the distribution of nanofibrillated cellulose in composites manufactured by resin transfer moulding (RTM) where the resin contains nanofibres. The model is a Monte Carlo based simulation where nanofibres are randomly chosen from probability density functions for length, diameter and orientation. Their movements are then tracked as they advance through a random arrangement of fibres in defined fibre bundles. The results of the model show that the fabric filters the nanofibres within the first 20 µm unless clear inter-bundle channels are available. The volume fraction of the fabric fibres, flow velocity and size of nanofibre influence this to some extent. To verify the model, an epoxy with 0.5 wt.% Kraft Birch nanofibres was made through a solvent exchange route and stained with a colouring agent. This was infused into a glass fibre fabric using an RTM process. The experimental results confirmed the filtering of the nanofibres by the fibre bundles and their penetration in the fabric via the inter-bundle channels. Hence, the model is a useful tool for visualising the distribution of the nanofibres in composites in this manufacturing process.
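
    The first stage of such a Monte Carlo model is drawing each nanofibre's geometry from probability density functions before tracking it through the fabric. A minimal sketch of that sampling stage (the lognormal/uniform choices and all parameters are assumptions, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_nanofibres(n):
            """Draw length, diameter and in-plane orientation for n nanofibres."""
            length = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)     # um
            diameter = rng.lognormal(mean=np.log(0.02), sigma=0.3, size=n)  # um
            theta = rng.uniform(0.0, np.pi, size=n)                         # rad, isotropic
            return length, diameter, theta

        L, d, th = sample_nanofibres(5)
        print(np.round(L, 2), np.round(d, 3), np.round(th, 2))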

  6. Experimental verification of a model describing the intensity distribution from a single mode optical fiber

    SciTech Connect

    Moro, Erik A; Puckett, Anthony D; Todd, Michael D

    2011-01-24

    The intensity distribution of the transmission from a single-mode optical fiber is often approximated by a Gaussian-shaped curve. While this approximation is useful for some applications, such as fiber alignment, it does not accurately describe transmission behavior off the axis of propagation. In this paper, another model is presented that describes the intensity distribution of the transmission from a single-mode optical fiber. A simple experimental setup is used to verify the model's accuracy, and agreement between model and experiment is established both on and off the axis of propagation. Displacement sensor designs based on the extrinsic optical lever architecture are presented. The behavior of the transmission off the axis of propagation dictates the performance of sensor architectures in which large lateral offsets (25-1500 µm) exist between the transmitting and receiving fibers. The practical implications of modeling accuracy over this lateral offset region are discussed as they relate to the development of high-performance intensity-modulated optical displacement sensors. In particular, the sensitivity, linearity, resolution, and displacement range of a sensor are functions of the relative positioning of the sensor's transmitting and receiving fibers. Sensor architectures with high combinations of sensitivity and displacement range are discussed. It is concluded that the utility of the accurate model lies in its predictive capability and that this research could lead to an improved methodology for high-performance sensor design.

  7. Estimation of distributional parameters for censored trace level water quality data. 2. Verification and applications

    USGS Publications Warehouse

    Helsel, D.R.; Gilliom, R.J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters.
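
    Log probability regression fits a line to the log-transformed detected values against their normal scores and, in effect, reads the censored tail off that line. A compact sketch for a single censoring threshold (Blom plotting positions; implementation details vary):

        import numpy as np
        from scipy import stats

        def lr_estimates(detects, n_censored):
            """Estimate log-scale mean/std from detects plus n_censored non-detects."""
            n = len(detects) + n_censored
            ranks = np.arange(n_censored + 1, n + 1)   # detects occupy the top ranks
            pp = (ranks - 0.375) / (n + 0.25)          # Blom plotting positions
            z = stats.norm.ppf(pp)
            slope, intercept, *_ = stats.linregress(z, np.log(np.sort(detects)))
            return intercept, slope                    # log-scale mean, std

        mu, sigma = lr_estimates(detects=[0.8, 1.1, 1.6, 2.4, 3.9], n_censored=3)
        print(round(mu, 3), round(sigma, 3))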

  8. Atomic Scale Verification of Oxide-Ion Vacancy Distribution near a Single Grain Boundary in YSZ

    PubMed Central

    An, Jihwan; Park, Joong Sun; Koh, Ai Leen; Lee, Hark B.; Jung, Hee Joon; Schoonman, Joop; Sinclair, Robert; Gür, Turgut M.; Prinz, Fritz B.

    2013-01-01

    This study presents atomic scale characterization of the grain boundary defect structure in a functional oxide, with implications for a wide range of electrochemical and electronic behavior. Indeed, grain boundary engineering can alter transport and kinetic properties by several orders of magnitude. Here we report the experimental observation and determination of the oxide-ion vacancy concentration near the Σ13 (510)/[001] symmetric tilt grain boundary of a YSZ bicrystal using aberration-corrected TEM operated under the negative spherical aberration coefficient imaging condition. We show significant oxygen deficiency due to segregation of oxide-ion vacancies near the grain-boundary core, with a half-width < 0.6 nm. Electron energy loss spectroscopy measurements with scanning TEM indicated an increased oxide-ion vacancy concentration at the grain boundary core. The oxide-ion density distribution near the grain boundary simulated by molecular dynamics agreed well with the experimental results. Such column-by-column quantification of defect concentrations in functional materials can provide new insights that may lead to engineered grain boundaries designed for specific functionalities. PMID:24042150

  9. Verification of the efficiency of chemical disinfection and sanitation measures in in-building distribution systems.

    PubMed

    Lenz, J; Linke, S; Gemein, S; Exner, M; Gebel, J

    2010-06-01

    Previous investigations of biofilms generated in a silicone tube model have shown that the number of colony forming units (CFU) can reach 10^7/cm^2 and the total cell count (TCC) of microorganisms can be up to 10^8 cells/cm^2. The present study focuses on the situation in in-building distribution systems. Different chemical disinfectants were tested for their efficacy against drinking water biofilms in silicone tubes: free chlorine (electrochemically activated), chlorine dioxide, hydrogen peroxide (H2O2), silver, and fruit acids. Because manufacturers' instructions for the use of their disinfectants differ widely, three variations of the silicone tube model were developed to simulate practical conditions: first, continuous treatment; second, intermittent treatment; and third, external disinfection treatment with monitoring for possible biofilm formation with the Hygiene-Monitor. Working experience showed that it is important to know how to handle the individual disinfectants: every active ingredient has an optimal application in terms of concentration, exposure time, and physical parameters such as pH, temperature, or redox potential. When used correctly, all products tested were able to reduce the CFU to a value below the detection limit. However, most of the active ingredients could not significantly reduce the TCC/cm^2, which means that viable microorganisms may still be present in the system. This raises the question of what happened to these cells. In some cases, SEM images of the biofilm matrix after a successful disinfection still showed biofilm residues. According to these results, no general correlation between CFU/cm^2, TCC/cm^2, and the visualized biofilm matrix on the silicone tube surface (SEM) could be demonstrated after treatment with disinfectants.

  10. Distributed measurement of acoustic vibration location with frequency multiplexed phase-OTDR

    NASA Astrophysics Data System (ADS)

    Iida, Daisuke; Toge, Kunihiro; Manabe, Tetsuya

    2017-07-01

    All-fiber distributed vibration sensing is attracting attention for structural health monitoring because it is cost effective, offers high coverage of the monitored area, and can detect various structural problems. In particular, the demand for high-speed vibration sensing above 10 kHz has increased, because high-frequency vibration indicates high energy and severe trouble in the monitored object. Optical fiber vibration sensing with phase-sensitive optical time domain reflectometry (phase-OTDR) has long been studied because it provides distributed vibration sensing along an optical fiber. However, pulse reflectometry such as OTDR cannot measure a high-frequency vibration whose period is shorter than the pulse repetition interval; that is, the maximum detectable frequency depends on the fiber length. In this paper, we describe a vibration sensing technique with frequency-multiplexed OTDR that can detect the entire distribution of a high-frequency vibration and thus locate a high-speed vibration point. We can measure the position, frequency, and dynamic change of a high-frequency vibration whose period is shorter than the repetition interval. Both frequency and position are visualized simultaneously for a 5-km fiber with an 80-kHz frequency response and a 20-m spatial resolution.
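
    The fiber-length limit can be made concrete: a probe pulse must complete its round trip before the next one is launched, so the repetition rate, and with it the Nyquist-limited vibration frequency, falls with fiber length. A sketch under standard assumptions (group index 1.468):

        c, n = 2.998e8, 1.468          # vacuum light speed (m/s), assumed group index

        def max_vibration_freq(fiber_len_m):
            """Nyquist limit set by the pulse round-trip time in conventional OTDR."""
            round_trip = 2 * n * fiber_len_m / c   # s
            return 1.0 / (2 * round_trip)          # Hz

        for L in (1_000, 5_000, 25_000):
            print(L, "m ->", round(max_vibration_freq(L) / 1e3, 1), "kHz")  # 51.1, 10.2, 2.0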

  11. Using Distributed Temperature Sensing for evaporation measurements: background, verification, and future applications.

    NASA Astrophysics Data System (ADS)

    Schilperoort, Bart; Coenders-Gerrits, Miriam; van Iersel, Tara; Jiménez Rodríguez, Cesar; Luxemburg, Willem; Cisneros Vaca, Cesar; Ucer, Murat

    2017-04-01

    Distributed temperature sensing (DTS) is a relatively new method for measuring latent and sensible heat fluxes, and it has been successfully tested on multiple sites (Euser, 2014). It uses a fiber-optic cable whose temperature can be measured every 12.5 cm. By placing the cable vertically along a structure, the air temperature profile can be measured; if the cable is wrapped with cloth and kept wet (akin to a psychrometer), a vertical wet-bulb temperature gradient can be obtained as well. From the dry- and wet-bulb temperatures over height, the Bowen ratio is determined, and together with the energy balance the latent and sensible heat fluxes can be derived. To verify the measurements of the DTS-based Bowen ratio method (BR-DTS), we assessed in detail the accuracy of the air temperature and wet-bulb temperature measurements, the influence of solar radiation and wind on these temperatures, and a comparison with standard methods of evaporation measurement. We tested the performance of the BR-DTS on a 45 m tower in a tall mixed forest in the centre of the Netherlands in August. The average tree height is 30 m, so we measure temperature gradients above, in, and underneath the canopy. We found that solar radiation has a significant effect on the temperature measurements owing to heating of the cable coating, leading to deviations of up to 2 °C. By using cables with different coating thicknesses we could theoretically correct for this effect, but this introduces too much uncertainty for calculating the temperature gradient. By installing screens, the effect of direct sunlight on the cable is sufficiently reduced, and the correlation of the cable temperature with reference air temperature sensors is very high (R2 = 0.988 to 0.998). Wind speed seems to have a minimal effect on the measured wet-bulb temperature, both below and above the canopy. The latent heat fluxes of the BR-DTS were compared with an eddy covariance system using data from 10 days.
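
    The final retrieval step partitions the available energy with the Bowen ratio obtained from the paired dry- and wet-bulb profiles. A simplified sketch of that step (textbook psychrometry with hypothetical gradients, not the authors' exact processing):

        import math

        GAMMA = 0.066  # psychrometric constant (kPa/K), near sea level

        def sat_vapour_pressure(t):
            """Saturation vapour pressure (kPa) via the Tetens formula, t in deg C."""
            return 0.6108 * math.exp(17.27 * t / (t + 237.3))

        def bowen_ratio(dt_dry, dt_wet, t_wet):
            """Bowen ratio from dry- and wet-bulb temperature differences over height."""
            s = sat_vapour_pressure(t_wet + 0.5) - sat_vapour_pressure(t_wet - 0.5)
            de = (s + GAMMA) * dt_wet - GAMMA * dt_dry   # implied vapour-pressure difference
            return GAMMA * dt_dry / de

        b = bowen_ratio(dt_dry=-0.8, dt_wet=-0.5, t_wet=18.0)
        available = 420.0                   # net radiation minus ground heat flux (W/m2)
        le = available / (1.0 + b)          # latent heat flux
        print(round(b, 2), round(le, 1), round(available - le, 1))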

  12. Redshift distributions of galaxies in the Dark Energy Survey Science Verification shear catalogue and implications for weak lensing

    SciTech Connect

    Bonnett, C.; Troxel, M. A.; Hartley, W.; Amara, A.; Leistedt, B.; Becker, M. R.; Bernstein, G. M.; Bridle, S. L.; Bruderer, C.; Busha, M. T.; Carrasco Kind, M.; Childress, M. J.; Castander, F. J.; Chang, C.; Crocce, M.; Davis, T. M.; Eifler, T. F.; Frieman, J.; Gangkofner, C.; Gaztanaga, E.; Glazebrook, K.; Gruen, D.; Kacprzak, T.; King, A.; Kwan, J.; Lahav, O.; Lewis, G.; Lidman, C.; Lin, H.; MacCrann, N.; Miquel, R.; O’Neill, C. R.; Palmese, A.; Peiris, H. V.; Refregier, A.; Rozo, E.; Rykoff, E. S.; Sadeh, I.; Sánchez, C.; Sheldon, E.; Uddin, S.; Wechsler, R. H.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Allam, S.; Armstrong, R.; Banerji, M.; Bauer, A. H.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Capozzi, D.; Carnero Rosell, A.; Carretero, J.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Fausti Neto, A.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Gerdes, D. W.; Gruendl, R. A.; Honscheid, K.; Jain, B.; James, D. J.; Jarvis, M.; Kim, A. G.; Kuehn, K.; Kuropatkin, N.; Li, T. S.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Miller, C. J.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Reil, K.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Santiago, B.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Thomas, D.; Vikram, V.; Walker, A. R.

    2016-08-30

    Here we present photometric redshift estimates for galaxies used in the weak lensing analysis of the Dark Energy Survey Science Verification (DES SV) data. Four model- or machine learning-based photometric redshift methods—annz2, bpz calibrated against BCC-Ufig simulations, skynet, and tpz—are analyzed. For training, calibration, and testing of these methods, we construct a catalogue of spectroscopically confirmed galaxies matched against DES SV data. The performance of the methods is evaluated against the matched spectroscopic catalogue, focusing on metrics relevant for weak lensing analyses, with additional validation against COSMOS photo-z's. From the galaxies in the DES SV shear catalogue, which have mean redshift 0.72 ± 0.01 over the range 0.3 < z < 1.3, we construct three tomographic bins and propagate the uncertainties in the redshift distributions through to their impact on cosmological parameters estimated with cosmic shear, finding that they cause shifts in the value of σ8 of approximately 3%. This shift is within the one-sigma statistical errors on σ8 for the DES SV shear catalogue. We further study the potential impact of systematic differences on the critical surface density, Σcrit, finding levels of bias safely below the statistical power of the DES SV data. In conclusion, we recommend a final Gaussian prior of width 0.05 for the photo-z bias in the mean of n(z) for each of the three tomographic bins, and show that this is a sufficient bias model for the corresponding cosmology analysis.

  15. Location, Location, Location!

    ERIC Educational Resources Information Center

    Ramsdell, Kristin

    2004-01-01

    Of prime importance in real estate, location is also a key element in the appeal of romances. Popular geographic settings and historical periods sell, unpopular ones do not--not always with a logical explanation, as the author discovered when she conducted a survey on this topic last year. (Why, for example, are the French Revolution and the…

  17. Degree of target utilization influences the location of movement endpoint distributions.

    PubMed

    Slifkin, Andrew B; Eder, Jeffrey R

    2017-03-01

    According to dominant theories of motor control, speed and accuracy are optimized when, on average, movement endpoints are located at the target center and the variability of the movement endpoint distributions is matched to the width of the target (viz., Meyer, Abrams, Kornblum, Wright, & Smith, 1988). The current study tested those predictions. According to the speed-accuracy trade-off, expanding the range of variability to the amount permitted by the target boundaries allows movement speed to be maximized, while centering the distribution on the target center prevents movement errors that would have occurred had the distribution been off center. Here, participants (N = 20) generated 100 consecutive targeted hand movements under each of 15 conditions: three movement amplitudes (80, 160, 320 mm), each crossed with five target widths (5, 10, 20, 40, 80 mm). Only at the smaller target widths (5, 10 mm) were movement endpoint distributions centered on the target center, with the range of endpoint variability matching the range specified by the target boundaries. As target width increased (20, 40, 80 mm), participants increasingly undershot the target center, and the range of endpoint variability increasingly underestimated the variability permitted by the target region. The degree of undershooting was strongly predicted by the difference between the size of the target and the amount of endpoint variability, i.e., the amount of unused space in the target. The results suggest that participants have precise knowledge of their variability relative to that permitted by the target, and use that knowledge to systematically reduce the travel distance to targets. The reduction in travel distance at the larger target widths might have produced greater cost savings than those associated with increases in speed.

  18. Commercial milk distribution profiles and production locations. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Deonigi, D.E.; Anderson, D.M.; Wilfert, G.L.

    1993-12-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project was established to estimate radiation doses that people could have received from nuclear operations at the Hanford Site since 1944. For this period iodine-131 is the most important offsite contributor to radiation doses from Hanford operations. Consumption of milk from cows that ate vegetation contaminated by iodine-131 is the dominant radiation pathway for individuals who drank milk. Information has been developed on commercial milk cow locations and commercial milk distribution during 1945 and 1951. The year 1945 was selected because during 1945 the largest amount of iodine-131 was released from Hanford facilities in a calendar year; therefore, 1945 was the year in which an individual was likely to have received the highest dose. The year 1951 was selected to provide data for comparing the changes that occurred in commercial milk flows (i.e., sources, processing locations, and market areas) between World War II and the post-war period. To estimate the doses people could have received from this milk flow, it is necessary to estimate the amount of milk people consumed, the source of the milk, the specific feeding regime used for milk cows, and the amount of iodine-131 contamination deposited on feed.

  19. Commercial milk distribution profiles and production locations. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Deonigi, D.E.; Anderson, D.M.; Wilfert, G.L.

    1994-04-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project was established to estimate radiation doses that people could have received from nuclear operations at the Hanford Site since 1944. For this period iodine-131 is the most important offsite contributor to radiation doses from Hanford operations. Consumption of milk from cows that ate vegetation contaminated by iodine-131 is the dominant radiation pathway for individuals who drank milk (Napier 1992). Information has been developed on commercial milk cow locations and commercial milk distribution during 1945 and 1951. The year 1945 was selected because during 1945 the largest amount of iodine-131 was released from Hanford facilities in a calendar year (Heeb 1993); therefore, 1945 was the year in which an individual was likely to have received the highest dose. The year 1951 was selected to provide data for comparing the changes that occurred in commercial milk flows (i.e., sources, processing locations, and market areas) between World War II and the post-war period. To estimate the doses people could have received from this milk flow, it is necessary to estimate the amount of milk people consumed, the source of the milk, the specific feeding regime used for milk cows, and the amount of iodine-131 contamination deposited on feed.

  20. Locating illicit connections in storm water sewers using fiber-optic distributed temperature sensing.

    PubMed

    Hoes, O A C; Schilperoort, R P S; Luxemburg, W M J; Clemens, F H L R; van de Giesen, N C

    2009-12-01

    A technique using distributed temperature sensing (DTS) has been developed to find illicit household sewage connections to storm water systems in the Netherlands. DTS allows the accurate measurement of temperature along a fiber-optic cable, with high spatial (2 m) and temporal (30 s) resolution. We inserted a 1300 m fiber-optic cable in two storm water drains. At certain locations, significant temperature differences with an intermittent character were measured, indicating inflow of water that was not storm water. In all cases, we found that foul water from households or companies entered the storm water system through an illicit sewage connection. The method of using temperature differences to detect illicit connections in storm water networks is discussed, and the technique of using fiber-optic cables for distributed temperature sensing is explained in detail. The DTS method is a reliable, inexpensive, and practically feasible way to detect illicit connections to storm water systems, and it does not require access to private property.

  1. [Spatial correlation of the locative distribution of active mounds of Solenopsis invicta Buren polygyne populations].

    PubMed

    Lu, Yong-yue; Li, Ning-dong; Liang, Guang-wen; Zeng, Ling

    2007-01-01

    Using geostatistical methods, this paper studied the spatial distribution patterns of the active mounds of Solenopsis invicta Buren polygyne populations in Wuchuan and Shenzhen, and built spherical models relating the semivariances of the mounds to their separation distances. The semivariograms, computed along the east-west and south-north directions, increased markedly with separation distance, revealing that the active mounds in the surveyed areas were spatially dependent. The ranges of the five spherical models constructed for the five sampling plots in Wuchuan were 9.1 m, 7.6 m, 23.5 m, 7.5 m, and 14.5 m, respectively, with an average of 12.4 m; mounds within this range of one another were significantly correlated. There was also randomness in the spatial distribution of active mounds, with randomness indices (nugget/sill) of 0.7034, 0.9247, 0.4398, 1.1196, and 0.4624, respectively. In Shenzhen, the relationships between separation distance and semivariance were described by seven spherical models, with ranges of 14.5 m, 11.2 m, 10.8 m, 17.6 m, 11.3 m, 9.9 m, and 12.8 m, respectively, averaging 12.6 m.
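
    The spherical semivariogram fitted in the study has the standard closed form γ(h) = c0 + c(1.5·h/a − 0.5·(h/a)³) for h ≤ a, levelling off at the sill beyond the range a. A small sketch using the reported 12.4 m average range (the nugget and sill values are illustrative):

        def spherical_semivariance(h, nugget, sill, a):
            """Spherical model: nugget c0, partial sill c = sill - nugget, range a."""
            if h >= a:
                return sill
            r = h / a
            return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)

        # 12.4 m is the average range reported above for the Wuchuan plots.
        for h in (2.0, 6.0, 12.4, 20.0):
            print(h, round(spherical_semivariance(h, nugget=0.3, sill=1.0, a=12.4), 3))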

  2. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    NASA Astrophysics Data System (ADS)

    Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard

    2016-12-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

  5. Sub-micron particle number size distribution characteristics at two urban locations in Leicester

    NASA Astrophysics Data System (ADS)

    Hama, Sarkawt M. L.; Cordell, Rebecca L.; Kos, Gerard P. A.; Weijers, E. P.; Monks, Paul S.

    2017-09-01

    The particle number size distribution (PNSD) of atmospheric particles not only provides information about the sources and atmospheric processing of particles, but also plays an important role in determining regional lung dose. Owing to the importance of PNSD for understanding particulate pollution, two short-term measurement campaigns (March-June 2014) of sub-micron PNSD were conducted at two urban background locations in Leicester, UK. At the first site, the Leicester Automatic Urban Rural Network (AURN) station, the mean number concentrations of the nucleation, Aitken, and accumulation modes, the total particle count, and the equivalent black carbon (eBC) mass concentration were 2002, 3258, and 1576 # cm-3, 6837 # cm-3, and 1.7 μg m-3, respectively; at the second site, Brookfield (BF), they were 1455, 2407, and 874 # cm-3, 4737 # cm-3, and 0.77 μg m-3, respectively. The total particle number was dominated by the nucleation and Aitken modes, which together made up 77% and 81% of the total number concentrations at the AURN and BF sites, respectively. This behaviour can be attributed to primary (traffic) emissions of ultrafine particles and the temporal evolution of the mixing layer. The size distribution at the AURN site is bimodal, peaking at 22 nm with a minor peak at 70 nm, whereas the BF site exhibits a unimodal distribution peaking at 35 nm. This study has, for the first time, investigated the effect of the Easter holiday on PNSD in the UK. The temporal variation of PNSD correlated well with traffic-related pollutants (NOx and eBC at both sites), and meteorological conditions also had an impact on PNSD and eBC at both sites. During the measurement period, the frequency of new particle formation (NPF) events was 13.3% and 22.2% at the AURN and BF sites, respectively. The average formation and growth rates of nucleation-mode particles were 1.3 and 1.17 cm-3 s-1 and 7.42 and 5.3 nm h-1 at the AURN and BF sites, respectively. This suggests that aerosol particles in Leicester originate mainly…

  6. Relation Between Sprite Distribution and Source Locations of VHF Pulses Derived From JEM- GLIMS Measurements

    NASA Astrophysics Data System (ADS)

    Sato, Mitsuteru; Mihara, Masahiro; Ushio, Tomoo; Morimoto, Takeshi; Kikuchi, Hiroshi; Adachi, Toru; Suzuki, Makoto; Yamazaki, Atsushi; Takahashi, Yukihiro

    2015-04-01

    JEM-GLIMS has been conducting comprehensive nadir observations of lightning and TLEs using optical instruments and electromagnetic wave receivers since November 2012. Between November 20, 2012 and November 30, 2014, JEM-GLIMS detected 5,048 lightning events, of which 567 were TLEs, mostly elves. To identify sprite occurrences in the transient optical flash data, the following analysis is necessary: (1) subtraction of the appropriately scaled wideband camera data from the narrowband camera data; (2) calculation of the intensity ratio between different spectrophotometer channels; and (3) estimation of the polarization and charge moment change (CMC) of the parent CG discharges using ground-based ELF measurement data. From a synthetic comparison of these results, it is confirmed that JEM-GLIMS succeeded in detecting sprite events. The VHF receiver (VITF) onboard JEM-GLIMS uses two patch-type antennas separated by 1.6 m and can detect VHF pulses emitted by lightning discharges in the 70-100 MHz frequency range. Using both an interferometric technique and a group-delay technique, we can estimate the source locations of VHF pulses excited by lightning discharges. In the event detected at 06:41:15.68565 UT on June 12, 2014 over central North America, the sprite was displaced horizontally by 20 km from the peak location of the parent lightning emission. In this event, a total of 180 VHF pulses were simultaneously detected by VITF. Detailed analysis of these pulses shows that the majority of the source locations were near the area of dim lightning emission, which may imply that the VHF pulses were associated with the in-cloud lightning current. In the presentation, we will show a detailed comparison between the spatiotemporal characteristics of the sprite emission and the source locations of VHF pulses excited by the parent lightning.

  7. Use of geographical information system data for emergency management points of distribution analysis with POD Locator 2.0.

    PubMed

    Chung, Christopher A

    In 2010, the article Location and Analysis of Emergency Management Point of Distributions (PODs) for Hurricane Ike was published in the Journal of Emergency Management. Using a program titled point of distribution locator (POD Locator 1.0), the article reported a 46 percent improvement in positioning PODs over the locations selected by emergency managers during Hurricane Ike in 2008. While the program could produce more effective POD locations for a given situation, a major weakness was the difficulty with which population and location data had to be manually entered into the program for subsequent analysis. This prevented organizations that could otherwise have benefited from the program from successfully utilizing it without additional training. This research effort focuses on leveraging readily available geographic information system (GIS) electronic data to address this problem. Analysis showed a statistically significant difference between the previous manual data entry method and the GIS-assisted method.
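
    The siting analysis that GIS data enables can be prototyped with a population-weighted clustering of census-block centroids; the sketch below uses scikit-learn's KMeans with block populations as sample weights, so the cluster centers act as candidate POD locations. The coordinates, populations, number of PODs, and the clustering objective itself are illustrative assumptions, not POD Locator 2.0's actual algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical census-block centroids (lon, lat) and populations, as they
# might be read from a GIS layer.
rng = np.random.default_rng(0)
blocks = rng.uniform([-95.6, 29.5], [-95.0, 30.1], size=(500, 2))
population = rng.integers(200, 5000, size=500)

# Population-weighted k-means: centers minimize population-weighted
# squared distance to the nearest POD.
k = 6  # number of PODs to site
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
kmeans.fit(blocks, sample_weight=population)

for i, (lon, lat) in enumerate(kmeans.cluster_centers_):
    served = population[kmeans.labels_ == i].sum()
    print(f"POD {i}: ({lon:.3f}, {lat:.3f}) serving {served} people")
```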

  8. Distribution of persistent organohalogen compounds in pine needles from selected locations in Kentucky and Georgia, USA.

    PubMed

    Loganathan, Bommanna G; Kumar, Kurunthachalam Senthil; Seaford, Kosta D; Sajwan, Kenneth S; Hanari, Nobuyasu; Yamashita, Nobuyoshi

    2008-04-01

    Epicuticular wax of pine needles accumulates organic pollutants from the atmosphere, and pine needle samples have been used for monitoring both local and regional distributions of semivolatile organic air pollutants. One-year-old pine needles collected from residential and industrial locations in western Kentucky and the vicinity of Linden Chemicals and Plastics, a Superfund Site at Brunswick, Georgia, were analyzed for polychlorinated biphenyls (PCBs), major chlorinated pesticides, and polychlorinated naphthalenes (PCNs). Total PCB concentrations in pine needles from Kentucky ranged from 5.2 to 12 ng/g dry weight (dw). These sites were comparatively less polluted than those from the Superfund Site, where total PCB concentrations in pine needles were in the range of 15-34 ng/g dw. Total chlorinated pesticide concentrations in pine needles from Kentucky ranged from 3.5 to 10 ng/g dw. A similar range of chlorinated pesticide concentrations (7.3-12 ng/g dw) was also found in pine needle samples from the Superfund Site. Total PCN concentrations in pine needles from Kentucky ranged from 76 to 150 pg/g dw. At the Superfund Site, total PCN concentrations ranged from 610 to 38,000 pg/g dw. When the toxic equivalencies (TEQs) of PCBs in pine needles were compared, Kentucky was relatively lower (0.03-0.11 pg/g dw) than the Superfund Site (0.24-0.48 pg/g dw). The TEQs of PCNs from Kentucky (0.004-0.067 pg/g dw) were much lower than the TEQs from locations near the Superfund Site (0.30-19 pg/g dw). The results revealed that pine needles are excellent, passive, nondestructive bioindicators for monitoring and evaluating PCBs, chlorinated pesticides, and PCNs.
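
    The TEQ figures quoted above come from a simple weighted sum: each congener concentration is multiplied by its toxic equivalency factor (TEF) and the products are added. A minimal sketch follows; the congener concentrations are made up and the TEF values are only loosely based on the WHO-2005 scheme, so real assessments should use the current official tables.

```python
# Toxic equivalency: TEQ = sum(concentration_i * TEF_i) over congeners.
# Concentrations are hypothetical; TEFs are illustrative (loosely based
# on WHO-2005 values), not an authoritative table.
pcb_pg_per_g = {"PCB-126": 0.8, "PCB-169": 1.5, "PCB-118": 450.0}
tef = {"PCB-126": 0.1, "PCB-169": 0.03, "PCB-118": 0.00003}

teq = sum(pcb_pg_per_g[c] * tef[c] for c in pcb_pg_per_g)
print(f"TEQ = {teq:.4f} pg/g dw")  # 0.8*0.1 + 1.5*0.03 + 450*3e-5 = 0.1385
```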

  9. The Location and Distribution of Transurethral Bulking Agent: 3-Dimensional Ultrasound Study.

    PubMed

    Yune, Junchan Joshua; Quiroz, Lieschen; Nihira, Mikio A; Siddighi, Sam; O'Leary, Dena E; Santiago, A; Shobeiri, S Abbas

    2016-01-01

    To use 3-dimensional endovaginal ultrasound to describe the location and distribution of bulking agent after an uncomplicated transurethral injection. Endovaginal ultrasound was performed in 24 treatment-naive patients immediately after bulking agent was injected. The distance between the center of the hyperechoic density of bulking agent and the urethrovesical junction (UVJ) was measured in the sagittal and axial views and expressed as a percentage of urethral length. The pattern of tracking of bulking agent, where present, was also assessed. After 2 subjects were excluded because of poor image quality, 22 patients were included in this study. Eighteen (82%) subjects showed 2 sites of bulking agent, mostly located around the 3- and 9-o'clock positions. The average distance of bulking agent from the UVJ was 16.9% of the length of the urethra (6.2 mm; range, 0.5-17 mm) on the left side and 25.5% of the length of the urethra (8.9 mm; range, 0-24.8 mm) on the right side. The average length of the urethra was 36.7 mm. Eleven of the 22 subjects (50%) had both sides within the upper one-third of the urethra. The difference in distance between the 2 sides was less than 10 mm in 12 of 22 patients (54%). Nine of the 22 patients (41%) had a significant spread of bulking agent, mostly either into the bladder neck or toward the distal urethra. Although the bulking agent is most often found at the 3- and 9-o'clock positions as intended, the distance from the UVJ is highly variable after an uncomplicated office-based transurethral injection. The bulking material does not form the characteristic spheres in 41% of cases and tracks toward the bladder neck or the distal urethra.

  10. Genomic distribution of AFLP markers relative to gene locations for different eukaryotic species

    PubMed Central

    2013-01-01

    Background Amplified fragment length polymorphism (AFLP) markers are frequently used for a wide range of studies, such as genome-wide mapping, population genetic diversity estimation, hybridization and introgression studies, phylogenetic analyses, and detection of signatures of selection. An important issue to be addressed for some of these fields is the distribution of the markers across the genome, particularly in relation to gene sequences. Results Using in-silico restriction fragment analysis of the genomes of nine eukaryotic species, we characterise the distribution of AFLP fragments across the genome and, particularly, in relation to gene locations. First, we identify the physical position of markers across the chromosomes of all species. An observed accumulation of fragments around (peri)centromeric regions in some species is produced by repeated sequences, and this accumulation disappears when AFLP bands rather than fragments are considered. Second, we calculate the percentage of AFLP markers positioned within gene sequences. For the typical EcoRI/MseI enzyme pair, this ranges between 28 and 87% and is usually larger than that expected by chance because of the higher GC content of gene sequences relative to intergenic ones. In agreement with this, the use of enzyme pairs with GC-rich restriction sites substantially increases the above percentages. For example, using the enzyme system SacI/HpaII, 86% of AFLP markers are located within gene sequences in A. thaliana, and 100% of markers in Plasmodium falciparum. We further find that for a typical trait controlled by 50 genes of average size, if 1000 AFLPs are used in a study, the number of those within 1 kb distance from any of the genes would be only about 1–2, and only about 50% of the genes would have markers within that distance. Conclusions The high coverage of AFLP markers across the genomes and the high proportion of markers within or close to gene sequences make them suitable for genome scans and
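
    The in-silico restriction fragment analysis described above can be approximated in a few lines: locate the EcoRI (G^AATTC) and MseI (T^TAA) cut positions and keep fragments flanked by one cut of each type within a gel-resolvable size window. The random 200 kb sequence and the 50-500 bp window below are illustrative assumptions; a real analysis would run over genome FASTA files and also model the selective PCR steps.

```python
import re
import numpy as np

rng = np.random.default_rng(6)
seq = "".join(rng.choice(list("ACGT"), size=200_000))  # toy "genome"

def cut_positions(seq, site, offset):
    # non-overlapping matches are adequate for these short sites
    return [m.start() + offset for m in re.finditer(site, seq)]

# EcoRI cuts G^AATTC and MseI cuts T^TAA, i.e. one base into each site
cuts = sorted([(p, "EcoRI") for p in cut_positions(seq, "GAATTC", 1)] +
              [(p, "MseI") for p in cut_positions(seq, "TTAA", 1)])

# AFLP scores fragments with one EcoRI and one MseI end, within a size
# window resolvable on the gel (50-500 bp used here as an illustration).
fragments = [(p1, p2) for (p1, e1), (p2, e2) in zip(cuts, cuts[1:])
             if {e1, e2} == {"EcoRI", "MseI"} and 50 <= p2 - p1 <= 500]
print(f"{len(fragments)} scoreable EcoRI/MseI fragments in a 200 kb toy sequence")
```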

  11. Pallet use in grocery distribution affects forest resource consumption location: a spatial model of grocery pallet use

    Treesearch

    R. Bruce Anderson; R. Bruce Anderson

    1991-01-01

    To assess the impact of grocery pallet production on future hardwood resources, better information is needed on the current use of reusable pallets by the grocery and related products industry. A spatial model of pallet use in the grocery distribution system that identifies the locational aspects of grocery pallet production and distribution, determines how these...

  12. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    PubMed

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists of minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that the GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method are proposed, using a time horizon analysis, a distance-based scoring, and consideration of different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
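
    A minimal GA for this kind of placement problem can be sketched as follows, assuming a binarized leak-sensitivity matrix from hydraulic simulation: the fitness of a sensor set is the number of leak pairs whose signatures it cannot distinguish, and crossover/mutation operate on sets of node indices. The matrix, population sizes, and rates below are hypothetical, and the paper's isolability criteria are richer than this toy fitness.

```python
import numpy as np

rng = np.random.default_rng(1)
N_NODES, N_LEAKS, N_SENSORS = 30, 40, 5

# Hypothetical binarized leak-sensitivity matrix: S[l, n] = 1 if a leak at
# candidate location l produces a detectable residual at node n.
S = (rng.random((N_LEAKS, N_NODES)) < 0.3).astype(int)

def non_isolable_pairs(sensors):
    """Count leak pairs sharing an identical signature on the chosen sensors."""
    _, counts = np.unique(S[:, sensors], axis=0, return_counts=True)
    return int(sum(c * (c - 1) // 2 for c in counts))

pop = [rng.choice(N_NODES, N_SENSORS, replace=False) for _ in range(40)]
for _ in range(200):                      # generations
    pop.sort(key=non_isolable_pairs)      # lower fitness is better
    survivors = pop[:20]
    children = []
    while len(children) < 20:
        a, b = rng.choice(20, 2, replace=False)
        genes = np.union1d(survivors[a], survivors[b])   # crossover pool
        child = rng.choice(genes, N_SENSORS, replace=False)
        if rng.random() < 0.2:            # mutation: swap in a random node
            child[rng.integers(N_SENSORS)] = rng.integers(N_NODES)
        if len(np.unique(child)) == N_SENSORS:  # discard duplicated nodes
            children.append(child)
    pop = survivors + children

best = min(pop, key=non_isolable_pairs)
print(sorted(best.tolist()), non_isolable_pairs(best), "non-isolable pairs")
```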

  13. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists of minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that the GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method are proposed, using a time horizon analysis, a distance-based scoring, and consideration of different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  14. Influence of Interhemispheric Asymmetry in Volcanic Forcing on ITCZ Location and Oxygen Isotope Distribution

    NASA Astrophysics Data System (ADS)

    Colose, C.; LeGrande, A. N.; Vuille, M. F.

    2014-12-01

    Volcanic eruptions are a dominant source of natural forced variability during the Common Era. Although transient, eruptions strongly cool the planet through the liberation of sulfur gases that enter the stratosphere (converting to sulfate aerosol) and scatter sunlight. In particular, such events produce the largest-amplitude radiative forcings that perturb the terrestrial climate during the Last Millennium. Previous studies have highlighted the global climate impact of large volcanic events, including the role of the latitude and time of year of a given eruption. Here, we focus on the influence of hemispheric asymmetry in Aerosol Optical Depth (AOD) and its projection onto the tropical hydrologic cycle. This is assessed using a suite of simulations from a fully coupled isotope-enabled General Circulation Model (NASA GISS ModelE2-R) run from 850-2005 CE. This study builds upon prior work demonstrating the role of inter-hemispheric forcing gradients on Intertropical Convergence Zone (ITCZ) location. In addition to unveiling the physical mechanisms that alter tropical hydroclimate, we highlight the anticipated tropical oxygen isotope distribution following large eruptions. Thus, through the vehicle of an isotope-enabled model, we formulate a potentially falsifiable prediction for how volcanic forcing may manifest itself in high-resolution proxies across the tropics.

  15. Aerosol number size distributions over a coastal semi urban location: Seasonal changes and ultrafine particle bursts.

    PubMed

    Babu, S Suresh; Kompalli, Sobhan Kumar; Moorthy, K Krishna

    2016-09-01

    Number-size distribution is one of the important microphysical properties of atmospheric aerosols that influence the aerosol life cycle, aerosol-radiation interaction, and aerosol-cloud interactions. Making use of year-long measurements of aerosol particle number-size distributions (PNSD) over a broad size spectrum (~15-15,000 nm) from a tropical coastal semi-urban location, Trivandrum (Thiruvananthapuram), the size characteristics, their seasonality, and their response to mesoscale and synoptic-scale meteorology are examined. While the accumulation mode contributed mostly to the annual mean concentration, ultrafine particles (having diameter <100 nm) contributed as much as 45% to the total concentration, and thus constitute a strong reservoir that would add to the larger particles through size transformation. The size distributions were, in general, bimodal with well-defined modes in the accumulation and coarse regimes, with mode diameters lying in the ranges 141 to 167 nm and 1150 to 1760 nm, respectively, in different seasons. Although the contribution of the coarse-sized particles to the total number concentration was meager, they contributed significantly to the surface area and volume, especially during transport of marine air mass, highlighting the role of synoptic air mass changes. Significant diurnal variation occurred in the number concentrations and geometric mean diameters, mostly attributed to the dynamics of the local coastal atmospheric boundary layer and the effect of the mesoscale land/sea breeze circulation. Bursts of ultrafine particles (UFP) occurred quite frequently, apparently during periods of land-sea breeze transitions, caused by the strong mixing of precursor-rich urban air mass with the cleaner marine air mass; the resulting turbulence, along with boundary layer dynamics, aided the nucleation. These ex-situ particles were observed at the surface due to the transport associated with boundary layer dynamics. The particle growth rates from
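
    The growth rates reported in such studies are typically obtained by tracking the nucleation-mode diameter through a burst event and fitting a line; the sketch below does exactly that with made-up numbers, so the ~6 nm/h result is purely illustrative.

```python
import numpy as np

# Hypothetical time series of the nucleation-mode geometric mean diameter
# during a burst event; the growth rate is the slope of a linear fit (nm/h).
hours = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
mode_diam_nm = np.array([16.0, 19.1, 21.8, 25.2, 27.9, 31.0])

growth_rate, intercept = np.polyfit(hours, mode_diam_nm, deg=1)
print(f"growth rate ~ {growth_rate:.1f} nm/h")   # ~6 nm/h for these numbers
```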

  16. Lexical distributional cues, but not situational cues, are readily used to learn abstract locative verb-structure associations.

    PubMed

    Twomey, Katherine E; Chang, Franklin; Ambridge, Ben

    2016-08-01

    Children must learn the structural biases of locative verbs in order to avoid making overgeneralisation errors (e.g., *I filled water into the glass). It is thought that they use linguistic and situational information to learn verb classes that encode structural biases. In addition to situational cues, we examined whether children and adults could use the lexical distribution of nouns in the post-verbal noun phrase of transitive utterances to assign novel verbs to locative classes. In Experiment 1, children and adults used lexical distributional cues to assign verb classes, but were unable to use situational cues appropriately. In Experiment 2, adults generalised distributionally-learned classes to novel verb arguments, demonstrating that distributional information can cue abstract verb classes. Taken together, these studies show that human language learners can use a lexical distributional mechanism that is similar to that used by computational linguistic systems that use large unlabelled corpora to learn verb meaning.

  17. Probability distributions for locations of calling animals, receivers, sound speeds, winds, and data from travel time differences.

    PubMed

    Spiesberger, John L

    2005-09-01

    A new nonlinear sequential Monte Carlo technique is used to estimate posterior probability distributions for the location of a calling animal, the locations of acoustic receivers, sound speeds, winds, and the differences in sonic travel time between pairs of receivers from measurements of those differences, while adopting realistic prior distributions of the variables. Other algorithms in the literature appear to be too inefficient to yield distributions for this large number of variables (up to 41) without recourse to a linear approximation. The new technique overcomes the computational inefficiency of other algorithms because it does not sequentially propagate the joint probability distribution of the variables between adjacent data. Instead, the lower and upper bounds of the distributions are propagated. The technique is applied to commonly encountered problems that were previously intractable such as estimating how accurately sound speed and poorly known initial locations of receivers can be estimated from the differences in sonic travel time from calling animals, while explicitly modeling distributions of all the variables in the problem. In both cases, the new technique yields one or two orders of magnitude improvements compared with initial uncertainties. The technique is suitable for accurately estimating receiver locations from animal calls.
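
    The travel-time-difference geometry at the core of this problem is easy to reproduce: with receiver positions and a sound speed, each candidate source location predicts a set of pairwise delay differences, and the estimate is where the predictions best match the measurements. The grid-search sketch below is a crude stand-in for the paper's sequential Monte Carlo scheme (it treats receiver locations and sound speed as known), with hypothetical coordinates and noise.

```python
import numpy as np

rng = np.random.default_rng(2)
receivers = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 130.0], [-40.0, 90.0]])
c = 343.0                      # nominal sound speed, m/s (assumed known here)
true_src = np.array([35.0, 55.0])

def tdoas(src):
    """Travel-time differences relative to receiver 0 for a source at src."""
    t = np.linalg.norm(receivers - src, axis=1) / c
    return t[1:] - t[0]

measured = tdoas(true_src) + rng.normal(0, 1e-4, size=3)  # ~0.1 ms noise

# Evaluate the misfit on a grid and take the minimum as the estimate.
xs, ys = np.meshgrid(np.linspace(-60, 160, 221), np.linspace(-60, 160, 221))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
misfit = np.array([np.sum((tdoas(p) - measured) ** 2) for p in grid])
est = grid[np.argmin(misfit)]
print(f"estimated source at {est}, true at {true_src}")
```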

  18. Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2015-12-01

    The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.
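
    The aggregation idea can be sketched numerically: draw seismic moments per subduction zone from a heavy-tailed distribution, map them to station amplitudes with a log-linear regression, and pool the samples in proportion to zone event rates. Every number below (Pareto shapes, rates, regression coefficients) is a hypothetical placeholder, not a fit to the observations used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-zone parameters: (Pareto shape, minimum moment, relative event rate,
# regression intercept for log10(amplitude)). All values are illustrative.
zones = {
    "zone_A": (0.9, 1e21, 0.6, -14.2),
    "zone_B": (1.2, 5e20, 0.4, -13.6),
}
b_slope = 0.7   # hypothetical slope of log10(amplitude) on log10(M0)

samples = []
for shape, m_min, rate, intercept in zones.values():
    n = int(20000 * rate)
    m0 = m_min * (1 + rng.pareto(shape, size=n))          # seismic moments
    log_amp = intercept + b_slope * np.log10(m0) + rng.normal(0, 0.3, n)
    samples.append(10 ** log_amp)

amps = np.concatenate(samples)   # the mixture distribution at the station
for q in (0.5, 0.9, 0.99):
    print(f"{q:.0%} quantile amplitude: {np.quantile(amps, q):.2f} m")
```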

  19. Spatial distribution of soil water repellency in a grassland located in Lithuania

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Novara, Agata

    2014-05-01

    Soil water repellency (SWR) is recognized to be very heterogeneous in time and space, depending on soil type, climate, land use, vegetation and season (Doerr et al., 2002). It prevents or reduces water infiltration, with important impacts on soil hydrology, influencing the mobilization and transport of substances into the soil profile. The reduced infiltration increases surface runoff and soil erosion. SWR also reduces seed emergence and plant growth due to the reduced amount of water in the root zone. Positive aspects of SWR are the increase of soil aggregate stability, organic carbon sequestration and reduction of water evaporation (Mataix-Solera and Doerr, 2004; Diehl, 2013). SWR depends on the soil aggregate size. In fire-affected areas it was found that SWR was more persistent in small-size aggregates (Mataix-Solera and Doerr, 2004; Jordan et al., 2011). However, little information is available about the spatial distribution of SWR according to soil aggregate size. The aim of this work is to study the spatial distribution of SWR in fine earth (<2 mm) and different aggregate sizes: 2-1 mm, 1-0.5 mm, 0.5-0.25 mm and <0.25 mm. The studied area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08' E, 158 masl. A plot of 400 m2 (20 x 20 m, with 5 m spacing between sampling points) was established, and 25 soil samples were collected from the top soil (0-5 cm) and taken to the laboratory. Prior to SWR assessment, the samples were air dried. The persistence of SWR was analysed according to the Water Drop Penetration Time method, which involves placing three drops of distilled water onto the soil surface and registering the time in seconds (s) required for complete penetration of the drop (Wessel, 1988). The data did not follow a Gaussian distribution; thus, in order to meet normality requirements, they were log-transformed. Spatial interpolations were carried out using Ordinary Kriging. The results showed that the average SWR in fine earth was 2.88 s (coefficient of variation (CV%) = 44.62), 2

  20. Responses of European precipitation distributions and regimes to different blocking locations

    NASA Astrophysics Data System (ADS)

    Sousa, Pedro M.; Trigo, Ricardo M.; Barriopedro, David; Soares, Pedro M. M.; Ramos, Alexandre M.; Liberato, Margarida L. R.

    2017-02-01

    In this work we analyse the impacts of blocking episodes on seasonal and annual European precipitation and the associated physical mechanisms. Distinct domains were considered in detail, taking into account different blocking center positions spanning the region between the Atlantic and western Russia. Significant positive precipitation anomalies are found for the southernmost areas, while generalized negative anomalies (up to 75% in some areas) occur in large areas of central and northern Europe. This dipole of anomalies is reversed when compared to that observed during episodes of strong zonal flow conditions. We illustrate that the location of the maximum precipitation anomalies follows quite well the longitudinal positioning of the blocking centers, and we discuss regional and seasonal differences in the precipitation responses. To better understand the precipitation anomalies, we explore the blocking influence on cyclonic activity. The results indicate a split of the storm-tracks north and south of blocking systems, leading to an almost complete reduction of cyclonic centers in northern and central Europe and increases in southern areas, where cyclone frequency doubles during blocking episodes. However, the underlying processes conducive to the precipitation anomalies differ between northern and southern European regions, with a significant role of atmospheric instability in southern Europe, and moisture availability as the major driver at higher latitudes. This distinction in underlying processes is consistent with the characteristic patterns of latent heat release from the ocean associated with blocked and strong zonal flow patterns. We also analyzed changes in the full range of the precipitation distribution of several regional sectors during blocked and zonal days. Results show that precipitation reductions in the areas under direct blocking influence are driven by a substantial drop in the frequency of moderate rainfall classes. Contrarily, southwards of

  1. Approaching hydrate and free gas distribution at the SUGAR-Site location in the Danube Delta

    NASA Astrophysics Data System (ADS)

    Bialas, Joerg; Dannowski, Anke; Zander, Timo; Klaeschen, Dirk; Klaucke, Ingo

    2017-04-01

    Gas hydrates have received a lot of attention over the last decades owing to their potential to serve as a source for methane production. Among other worldwide programs, the German SUGAR project set out to investigate the entire chain from exploration to production in Europe. Research within the scope of the SUGAR project therefore targeted a site in the European EEZ for detailed studies of hydrate and gas distribution in a permeable sediment matrix. Among other aims, the project seeks to provide in situ samples of natural methane hydrate for further investigation by MEBO drilling. The Danube paleo-delta, with its ancient canyon and levee systems, was chosen as a possible candidate for hydrate formation within the available drilling range of 200 m below the seafloor. In order to decide on the best drilling location, cruise MSM34 (Bialas et al., 2014) of the German RV MARIA S. MERIAN set out to acquire geophysical, geological and geochemical datasets for assessment of the hydrate content within the Danube paleo-delta, Black Sea. The Black Sea is well known for a significant gas content in the sedimentary column. Reports of observations of bottom simulating reflectors (BSRs) by Popescu et al. (2007) and others indicate that free gas and hydrate occurrence can be expected within the ancient passive channel levee systems. A variety of inverted reflection events within the gas hydrate stability zone (GHSZ) were observed within the drilling range of MEBO and chosen for further investigation. Here we report on combined seismic investigations comprising high-resolution 2D & 3D multichannel seismic (MCS) acquisition accompanied by four-component ocean-bottom seismometer (OBS) observations. P- and converted S-wave arrivals within the OBS datasets were analysed to provide overall velocity-depth models. Due to the limited length of the profiles, the majority of OBS events are caused by near-vertical reflections. While P-wave events have a significant lateral

  2. Spatiotemporal distribution of location and object effects in reach-to-grasp kinematics

    PubMed Central

    Rouse, Adam G.

    2015-01-01

    In reaching to grasp an object, the arm transports the hand to the intended location as the hand shapes to grasp the object. Prior studies that tracked arm endpoint and grip aperture have shown that reaching and grasping, while proceeding in parallel, are interdependent to some degree. Other studies of reaching and grasping that have examined the joint angles of all five digits as the hand shapes to grasp various objects have not tracked the joint angles of the arm as well. We, therefore, examined 22 joint angles from the shoulder to the five digits as monkeys reached, grasped, and manipulated in a task that dissociated location and object. We quantified the extent to which each angle varied depending on location, on object, and on their interaction, all as a function of time. Although joint angles varied depending on both location and object beginning early in the movement, an early phase of location effects in joint angles from the shoulder to the digits was followed by a later phase in which object effects predominated at all joint angles distal to the shoulder. Interaction effects were relatively small throughout the reach-to-grasp. Whereas reach trajectory was influenced substantially by the object, grasp shape was comparatively invariant to location. Our observations suggest that neural control of reach-to-grasp may occur largely in two sequential phases: the first determining the location to which the arm transports the hand, and the second shaping the entire upper extremity to grasp and manipulate the object. PMID:26445870

  3. Tests of Variance Equality When Distributions Differ in Form, Scale and Location.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    Sampling distributions for ten tests for comparing population variances in a two group design were generated for several combinations of equal and unequal sample sizes, population means, and group variances when distributional forms differed. The ten procedures included: (1) O'Brien's (OB); (2) O'Brien's with adjusted degrees of freedom; (3)…

  4. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    ERIC Educational Resources Information Center

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al, 2008; Woods…

  5. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    ERIC Educational Resources Information Center

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al, 2008; Woods…

  6. A robust confidence interval for location for symmetric, long-tailed distributions.

    PubMed

    Gross, A M

    1973-07-01

    A procedure called the wave-interval is presented for obtaining a 95% confidence interval for the center (mean, median) of a symmetric distribution that is not only highly efficient when the data have a Normal distribution but also performs well when some or all of the data come from a long-tailed distribution such as the Cauchy. Use of the wave-interval greatly reduces the risk of asserting much less than one's data will support. The only table required is the usual t-table. The wave-interval procedure is definitely recommended for samples of ten or more, and appears satisfactory for samples of nine or eight.

  7. Pathological fracture of the patella due to an atypically located aneurysmal bone cyst: verification by means of ultrasound-guided biopsy.

    PubMed

    Plaikner, Michaela; Gruber, Hannes; Henninger, Benjamin; Gruber, Leonhard; Kosiol, Juana; Loizides, Alexander

    2016-03-01

    We report on a rare case of an atypically located aneurysmal bone cyst (ABC) in the patella presenting with pathological fracture after trauma. Using all available diagnostic modalities and ultrasound-guided core-needle biopsy, an unclear, suspected pathologically fractured cystic bone lesion in the patella of a young man could be further clarified. The acquired images suggested the diagnosis of a pathologically fractured aneurysmal bone cyst after mild trauma. However, due to the extraordinary location and clinical presentation, the diagnosis was secured by means of ultrasound-guided biopsy through a small cortical gap. As shown in this rare case of an atypical aneurysmal bone cyst of the patella, the seldom-used but sometimes feasible ultrasound-guided biopsy of intraosseous lesions can help to achieve diagnostic clarification and should be considered as a non-standard procedure.

  8. Where exactly am I? Self-location judgements distribute between head and torso.

    PubMed

    Alsmith, Adrian J T; Longo, Matthew R

    2014-02-01

    I am clearly located where my body is located. But is there one particular place inside my body where I am? Recent results have provided apparently contradictory findings about this question. Here, we addressed this issue using a more direct approach than has been used in previous studies. Using a simple pointing task, we asked participants to point directly at themselves, either by manual manipulation of the pointer whilst blindfolded or by visually discerning when the pointer was in the correct position. Self-location judgements in haptic and visual modalities were highly similar, and were clearly modulated by the starting location of the pointer. Participants most frequently chose to point to one of two likely regions, the upper face or the upper torso, according to which they reached first. These results suggest that while the experienced self is not spread out homogeneously across the entire body, nor is it localised in any single point. Rather, two distinct regions, the upper face and upper torso, appear to be judged as where "I" am.

  9. On intra-supply chain system with an improved distribution plan, multiple sales locations and quality assurance.

    PubMed

    Chiu, Singa Wang; Huang, Chao-Chih; Chiang, Kuo-Wei; Wu, Mei-Fang

    2015-01-01

    Transnational companies, operating in extremely competitive global markets, always seek to lower operating costs, such as inventory holding costs, in their intra-supply chain systems. This paper incorporates a cost-reducing product distribution policy into an intra-supply chain system with multiple sales locations and quality assurance studied by [Chiu et al., Expert Syst Appl, 40:2669-2676, (2013)]. Under the proposed cost-reducing distribution policy, an added initial delivery of end items is distributed to multiple sales locations to meet their demand during the production unit's uptime and rework time. After rework, when the remaining production lot has passed quality assurance, n fixed-quantity installments of finished items are transported to the sales locations at fixed time intervals. Mathematical modeling and optimization techniques are used to derive closed-form optimal operating policies for the proposed system. Furthermore, the study demonstrates significant savings in stock holding costs for both the production unit and the sales locations. The alternative of outsourcing the product delivery task to an external distributor is analyzed to assist managerial decision making on potential outsourcing issues and to facilitate further reduction in operating costs.

  10. The hemodynamic effects of the LVAD outflow cannula location on the thrombi distribution in the aorta: A primary numerical study.

    PubMed

    Zhang, Yage; Gao, Bin; Yu, Chang

    2016-09-01

    Although a growing number of patients undergo LVAD implantation for heart failure treatment, thrombi remain a devastating complication for patients with an LVAD. The LVAD outflow cannula location and the source of thrombus generation were hypothesized to affect the thrombi distribution in the aorta. To test this hypothesis, numerical studies were conducted using computational fluid dynamics (CFD) theory. Two anastomotic configurations, in which the LVAD outflow cannula is anastomosed to the anterior and the lateral ascending aortic wall (named anterior configuration and lateral configuration, respectively), were designed. Particles whose sizes are the same as those of thrombi were released at the LVAD outflow cannula and at the aortic valve (named thrombiP and thrombiL, respectively) to calculate the distribution of thrombi. The simulation results demonstrate that the thrombi distribution in the aorta is significantly affected by the LVAD outflow cannula location. In the anterior configuration, the probability of thrombi entering the three branches is 23.60%, while in the lateral configuration it is 36.68%. Similarly, in the anterior configuration, the probabilities of thrombi entering the brachiocephalic artery, the left common carotid artery and the left subclavian artery are 8.51%, 9.64% and 5.45%, respectively, while in the lateral configuration they are 11.39%, 3.09% and 22.20%, respectively. Moreover, the origins of thrombi affect their distributions in the aorta. In the anterior configuration, thrombiP have a lower probability of entering the three branches than thrombiL (12% vs. 25%). In contrast, in the lateral configuration, thrombiP have a higher probability of entering the three branches than thrombiL (47% vs. 35%). In brief, the LVAD outflow cannula location significantly affects the distribution of thrombi in the aorta. Thus, in clinical practice, the selection of the LVAD outflow location and the risk of thrombi formed in the left ventricle should be paid more

  11. Collaborative Planning in Network-Enabled Co-Located and Distributed Environments

    DTIC Science & Technology

    2008-03-01

    ...the supply, transportation, and security teams. The REPSS simulation is delivered on networked computers, with one computer devoted to each position. Kimble, Thompson, and Garloch (1997) suggest that computer-mediated communications associated with distributed group collaboration can offer both

  12. Assessing cadmium distribution applying neutron radiography in moss trophic levels, located in Szarvasko, Hungary.

    PubMed

    Varga, János; Korösi, Ferenc; Balaskó, Márton; Naár, Zoltán

    2004-10-01

    The measuring station of the 10 MW VVR-SM research reactor at the Budapest Neutron Centre (Hungary) was used to perform dynamic neutron radiography (DNR) in a Tortella tortuosa biotope, to the best of our knowledge for the first time. In the study, two trophic levels, the moss and the spider Thomisidae sp. juv., were examined. Cadmium penetration routes, distribution and accumulation zones were visualized in the leafy gametophyte life cycle of Tortella tortuosa and in the organs of the spider.

  13. Redshift distributions of galaxies in the Dark Energy Survey Science Verification shear catalogue and implications for weak lensing

    DOE PAGES

    Bonnett, C.; Troxel, M. A.; Hartley, W.; ...

    2016-08-30

    Here we present photometric redshift estimates for galaxies used in the weak lensing analysis of the Dark Energy Survey Science Verification (DES SV) data. Four model- or machine learning-based photometric redshift methods—annz2, bpz calibrated against BCC-Ufig simulations, skynet, and tpz—are analyzed. For training, calibration, and testing of these methods, we construct a catalogue of spectroscopically confirmed galaxies matched against DES SV data. The performance of the methods is evaluated against the matched spectroscopic catalogue, focusing on metrics relevant for weak lensing analyses, with additional validation against COSMOS photo-z's. From the galaxies in the DES SV shear catalogue, which have mean redshift 0.72±0.01 over the range 0.38 of approximately 3%. This shift is within the one sigma statistical errors on σ8 for the DES SV shear catalogue. We further study the potential impact of systematic differences on the critical surface density, Σcrit, finding levels of bias safely less than the statistical power of DES SV data. In conclusion, we recommend a final Gaussian prior for the photo-z bias in the mean of n(z) of width 0.05 for each of the three tomographic bins, and show that this is a sufficient bias model for the corresponding cosmology analysis.

  14. Hydraulic head estimation at unobserved locations: Approximating the distribution of the absolute error based on geologic interpretations

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini

    2017-04-01

    Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: 1) dimensional analysis, and 2) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.
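
    A stripped-down Monte Carlo version of this idea fits in a few lines: simulate many one-dimensional aquifers with random lognormal conductivity, compute the exact steady-state head profile, estimate heads between two boundary "wells" by linear interpolation, and bin the absolute error by standardized distance from the nearest well. The 1D geometry and the lognormal-K field are illustrative assumptions standing in for the paper's pulse-based structures.

```python
import numpy as np

rng = np.random.default_rng(4)
nx, n_fields = 200, 500
h_left, h_right = 10.0, 0.0   # fixed heads at the two observation wells

errors, distances = [], []
for _ in range(n_fields):
    K = np.exp(rng.normal(0.0, 1.0, nx))      # lognormal conductivity field
    # steady 1D flow: flux is constant, so head drop per cell scales as 1/K
    resistance = np.cumsum(1.0 / K)
    h = h_left + (h_right - h_left) * resistance / resistance[-1]
    # estimate between the wells by plain linear interpolation
    x = np.arange(nx)
    h_est = h_left + (h_right - h_left) * x / (nx - 1)
    errors.extend(np.abs(h - h_est))
    distances.extend(np.minimum(x, nx - 1 - x) / (nx - 1))  # standardized

errors, distances = np.array(errors), np.array(distances)
for lo, hi in [(0.0, 0.1), (0.1, 0.3), (0.3, 0.5)]:
    m = (distances >= lo) & (distances < hi)
    print(f"distance {lo:.1f}-{hi:.1f}: mean |error| = {errors[m].mean():.3f} m")
```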

  15. Distributed fiber optic sensor employing a phase-generated carrier for disturbance detection and location

    NASA Astrophysics Data System (ADS)

    Xu, Haiyan; Wu, Hongyan; Zhang, Xuewu; Zhang, Zhuo; Li, Min

    2015-05-01

    The distributed optical fiber sensor is a new type of system that can be used for monitoring and inspection over long distances and in strong-EMI conditions. A method of external modulation with a phase modulator is proposed in this paper to improve the positioning accuracy of a disturbance in a distributed optic-fiber sensor. We construct a distributed disturbance detection system based on a Michelson interferometer, with a phase modulator attached to the fiber sensor in front of the Faraday rotation mirror (FRM), to shift the signal produced by the interference of the two beams reflected by the FRM to a high frequency, while other signals remain at low frequency. Through a high-pass filter and a phase retrieval circuit, a signal proportional to the external disturbance is acquired. The accuracy of disturbance positioning with this signal can be largely improved. The method is quite simple and easy to implement. Theoretical analysis and experimental results show that this method can effectively improve the positioning accuracy.
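
    The demodulation chain implied here (carrier shift, high-pass selection, phase retrieval) can be imitated in software with quadrature mixing: multiply the carried signal by cosine and sine references, low-pass both arms, and take the unwrapped arctangent. The sketch below uses a hypothetical carrier, sample rate, and disturbance; it is a generic phase-carrier demodulation, not the paper's circuit.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc = 100_000.0, 10_000.0          # sample rate and carrier, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
disturbance = 2.0 * np.sin(2 * np.pi * 40 * t)     # "external" phase signal
signal = np.cos(2 * np.pi * fc * t + disturbance)  # phase-modulated carrier

# quadrature mixing followed by low-pass filtering of both arms
b, a = butter(4, 2_000.0, btype="low", fs=fs)
i_arm = filtfilt(b, a, signal * np.cos(2 * np.pi * fc * t))
q_arm = filtfilt(b, a, signal * -np.sin(2 * np.pi * fc * t))

recovered = np.unwrap(np.arctan2(q_arm, i_arm))
core = slice(1000, -1000)             # avoid filter edge effects
err = np.max(np.abs(recovered[core] - disturbance[core]))
print(f"max |phase error| in the core: {err:.3f} rad")
```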

  16. Syringe filtration methods for examining dissolved and colloidal trace element distributions in remote field locations

    NASA Technical Reports Server (NTRS)

    Shiller, Alan M.

    2003-01-01

    It is well-established that sampling and sample processing can easily introduce contamination into dissolved trace element samples if precautions are not taken. However, work in remote locations sometimes precludes bringing bulky clean lab equipment into the field and likewise may make timely transport of samples to the lab for processing impossible. Straightforward syringe filtration methods are described here for collecting small quantities (15 mL) of 0.45- and 0.02-microm filtered river water in an uncontaminated manner. These filtration methods take advantage of recent advances in analytical capabilities that require only small amounts of water for analysis of a suite of dissolved trace elements. Filter clogging and solute rejection artifacts appear to be minimal, although some adsorption of metals and organics does affect the first approximately 10 mL of water passing through the filters. Overall the methods are clean, easy to use, and provide reproducible representations of the dissolved and colloidal fractions of trace elements in river waters. Furthermore, sample processing materials can be prepared well in advance in a clean lab and transported cleanly and compactly to the field. Application of these methods is illustrated with data from remote locations in the Rocky Mountains and along the Yukon River. Evidence from field flow fractionation suggests that the 0.02-microm filters may provide a practical cutoff to distinguish metals associated with small inorganic and organic complexes from those associated with silicate and oxide colloids.

  17. Syringe filtration methods for examining dissolved and colloidal trace element distributions in remote field locations

    NASA Technical Reports Server (NTRS)

    Shiller, Alan M.

    2003-01-01

    It is well-established that sampling and sample processing can easily introduce contamination into dissolved trace element samples if precautions are not taken. However, work in remote locations sometimes precludes bringing bulky clean lab equipment into the field and likewise may make timely transport of samples to the lab for processing impossible. Straightforward syringe filtration methods are described here for collecting small quantities (15 mL) of 0.45- and 0.02-microm filtered river water in an uncontaminated manner. These filtration methods take advantage of recent advances in analytical capabilities that require only small amounts of water for analysis of a suite of dissolved trace elements. Filter clogging and solute rejection artifacts appear to be minimal, although some adsorption of metals and organics does affect the first approximately 10 mL of water passing through the filters. Overall the methods are clean, easy to use, and provide reproducible representations of the dissolved and colloidal fractions of trace elements in river waters. Furthermore, sample processing materials can be prepared well in advance in a clean lab and transported cleanly and compactly to the field. Application of these methods is illustrated with data from remote locations in the Rocky Mountains and along the Yukon River. Evidence from field flow fractionation suggests that the 0.02-microm filters may provide a practical cutoff to distinguish metals associated with small inorganic and organic complexes from those associated with silicate and oxide colloids.

  18. MPL-Net Measurements of Aerosol and Cloud Vertical Distributions at Co-Located AERONET Sites

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Berkoff, Timothy A.; Spinhirne, James D.; Tsay, Si-Chee; Holben, Brent; Starr, David OC. (Technical Monitor)

    2002-01-01

    In the early 1990s, the first small, eye-safe, and autonomous lidar system was developed, the Micropulse Lidar (MPL). The MPL acquires signal profiles of backscattered laser light from aerosols and clouds. The signals are analyzed to yield multiple layer heights, optical depths of each layer, average extinction-to-backscatter ratios for each layer, and profiles of extinction in each layer. In 2000, several MPL sites were organized into a coordinated network, called MPL-Net, by the Cloud and Aerosol Lidar Group at NASA Goddard Space Flight Center (GSFC) using funding provided by the NASA Earth Observing System. In addition to the funding provided by NASA EOS, the NASA CERES Ground Validation Group supplied four MPL systems to the project, and the NASA TOMS group contributed their MPL for work at GSFC. The Atmospheric Radiation Measurement Program (ARM) also agreed to make their data available to the MPL-Net project for processing. In addition to the initial NASA and ARM operated sites, several other independent research groups have also expressed interest in joining the network using their own instruments. Finally, a limited amount of EOS funding was set aside to participate in various field experiments each year. The NASA Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) project also provides funds to deploy their MPL during ocean research cruises. All together, the MPL-Net project has participated in four major field experiments since 2000. Most MPL-Net sites and field experiment locations are also co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Therefore, at these locations data are collected on both aerosol and cloud vertical structure as well as column optical depth and sky radiance. Real-time data products are now available from most MPL-Net sites. Our real-time products are generated at times of AERONET aerosol optical depth (AOD) measurements. The AERONET AOD is used as input to our

  19. MPL-Net Measurements of Aerosol and Cloud Vertical Distributions at Co-Located AERONET Sites

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Berkoff, Timothy A.; Spinhirne, James D.; Tsay, Si-Chee; Holben, Brent; Starr, David OC. (Technical Monitor)

    2002-01-01

    In the early 1990s, the first small, eye-safe, and autonomous lidar system was developed, the Micropulse Lidar (MPL). The MPL acquires signal profiles of backscattered laser light from aerosols and clouds. The signals are analyzed to yield multiple layer heights, optical depths of each layer, average extinction-to-backscatter ratios for each layer, and profiles of extinction in each layer. In 2000, several MPL sites were organized into a coordinated network, called MPL-Net, by the Cloud and Aerosol Lidar Group at NASA Goddard Space Flight Center (GSFC) using funding provided by the NASA Earth Observing System. In addition to the funding provided by NASA EOS, the NASA CERES Ground Validation Group supplied four MPL systems to the project, and the NASA TOMS group contributed their MPL for work at GSFC. The Atmospheric Radiation Measurement Program (ARM) also agreed to make their data available to the MPL-Net project for processing. In addition to the initial NASA and ARM operated sites, several other independent research groups have also expressed interest in joining the network using their own instruments. Finally, a limited amount of EOS funding was set aside to participate in various field experiments each year. The NASA Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) project also provides funds to deploy their MPL during ocean research cruises. All together, the MPL-Net project has participated in four major field experiments since 2000. Most MPL-Net sites and field experiment locations are also co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Therefore, at these locations data are collected on both aerosol and cloud vertical structure as well as column optical depth and sky radiance. Real-time data products are now available from most MPL-Net sites. Our real-time products are generated at times of AERONET aerosol optical depth (AOD) measurements. The AERONET AOD is used as input to our

  20. Circumferential distribution and location of Mallory-Weiss tears: recent trends

    PubMed Central

    Okada, Mayumi; Ishimura, Norihisa; Shimura, Shino; Mikami, Hironobu; Okimoto, Eiko; Aimi, Masahito; Uno, Goichi; Oshima, Naoki; Yuki, Takafumi; Ishihara, Shunji; Kinoshita, Yoshikazu

    2015-01-01

    Background and study aims: Mallory-Weiss tears (MWTs) are not only a common cause of acute nonvariceal gastrointestinal bleeding but also an iatrogenic adverse event related to endoscopic procedures. However, changes in the clinical characteristics and endoscopic features of MWTs over the past decade have not been reported. The aim of this study was to investigate recent trends in the etiology and endoscopic features of MWTs. Patients and methods: We retrospectively reviewed the medical records of patients with a diagnosis of MWT at our university hospital between August 2003 and September 2013. The information regarding etiology, clinical parameters, endoscopic findings, therapeutic interventions, and outcome was reviewed. Results: A total of 190 patients with MWTs were evaluated. More than half (n = 100) of the cases occurred during endoscopic procedures; cases related to alcohol consumption were less frequent (n = 13). MWTs were most frequently located in the lesser curvature of the stomach and the right lateral wall (2- to 4-o'clock position) of the esophagus, irrespective of the cause. The condition of more than 90% of the patients (n = 179) improved with conservative or endoscopic treatment, whereas 11 patients (5.8%) required blood transfusion. Risk factors for blood transfusion were a longer laceration (odds ratio [OR] 2.3) and a location extending from the esophagus to the stomach (OR 5.3). Conclusions: MWTs were frequently found on the right lateral wall (2- to 4-o'clock position) of the esophagus aligned with the lesser curvature of the stomach, irrespective of etiology. Longer lacerations extending from the esophagus to the gastric cardia were associated with an elevated risk for bleeding and requirement for blood transfusion. PMID:26528495

  1. FIBWR: a steady-state core flow distribution code for boiling water reactors code verification and qualification report. Final report

    SciTech Connect

    Ansari, A.F.; Gay, R.R.; Gitnick, B.J.

    1981-07-01

    A steady-state core flow distribution code (FIBWR) is described. The ability of the recommended models to predict various pressure drop components and void distribution is shown by comparison to the experimental data. Application of the FIBWR code to the Vermont Yankee Nuclear Power Station is shown by comparison to the plant measured data.

  2. Exomars Mission Verification Approach

    NASA Astrophysics Data System (ADS)

    Cassi, Carlo; Gilardi, Franco; Bethge, Boris

    According to the long-term cooperation plan established by ESA and NASA in June 2009, the ExoMars project now consists of two missions: a first mission will be launched in 2016 under ESA lead, with the objectives to demonstrate the European capability to safely land a surface package on Mars, to perform Mars atmosphere investigation, and to provide communication capability for present and future ESA/NASA missions. For this mission ESA provides a spacecraft composite, made up of an "Entry Descent & Landing Demonstrator Module (EDM)" and a Mars Orbiter Module (OM); NASA provides the launch vehicle and the scientific instruments located on the Orbiter for Mars atmosphere characterisation. A second mission, with its launch foreseen in 2018, is led by NASA, who provides the spacecraft and launcher, the EDL system, and a rover. ESA contributes the ExoMars Rover Module (RM) to provide surface mobility. It includes a drill system allowing drilling down to 2 meters, collecting samples, investigating them for signs of past and present life with exobiological experiments, and investigating the Mars water/geochemical environment. In this scenario, Thales Alenia Space Italia, as ESA prime industrial contractor, is in charge of the design, manufacturing, integration and verification of the ESA ExoMars modules, i.e. the Spacecraft Composite (OM + EDM) for the 2016 mission, the RM for the 2018 mission and the Rover Operations Control Centre, which will be located at Altec-Turin (Italy). The verification process of the above products is quite complex and will include some peculiarities with limited or no heritage in Europe. Furthermore, the verification approach has to be optimised to allow full verification despite significant schedule and budget constraints. The paper presents the verification philosophy tailored for the ExoMars mission in line with the above considerations, starting from the model philosophy, showing the verification activities flow and the sharing of tests

  3. Optimal Location through Distributed Algorithm to Avoid Energy Hole in Mobile Sink WSNs

    PubMed Central

    Qing-hua, Li; Wei-hua, Gui; Zhi-gang, Chen

    2014-01-01

    In a multihop data collection sensor network, nodes near the sink must relay remote nodes' data and thus have a much faster energy dissipation rate, suffering premature death. This phenomenon causes an energy hole near the sink, seriously damaging network performance. In this paper, we first compute the energy consumption of each node when the sink is set at any point in the network through theoretical analysis; then we propose an online distributed algorithm that adjusts the sink position adaptively based on the actual energy consumption of each node to achieve the actual maximum lifetime. Theoretical analysis and experimental results show that the proposed algorithms significantly improve the lifetime of the wireless sensor network, lowering the residual network energy at network death by more than 30%. Moreover, the cost of moving the sink is relatively small. PMID:24895668
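
    The energy-hole intuition can be captured with a toy objective: nodes within one hop of the sink relay everyone else's traffic, so a good sink position has many first-hop neighbours to share that load. The sketch below grid-searches sink positions under that assumption; the topology, hop radius, and burden metric are illustrative, not the paper's energy model or distributed algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
nodes = rng.uniform(0, 100, size=(60, 2))   # hypothetical sensor field
HOP_RADIUS = 15.0                           # assumed one-hop range, m

def relay_burden(sink):
    """Packets each first-hop neighbour must relay per collection round."""
    d = np.linalg.norm(nodes - sink, axis=1)
    relays = int((d <= HOP_RADIUS).sum())   # first-hop neighbours
    if relays == 0:
        return np.inf
    return (len(nodes) - relays) / relays

# coarse grid search over candidate sink positions
cands = np.stack(np.meshgrid(np.linspace(0, 100, 41),
                             np.linspace(0, 100, 41)), axis=-1).reshape(-1, 2)
best = min(cands, key=relay_burden)
print(f"sink at {best}: {relay_burden(best):.1f} packets relayed per neighbour")
```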

  4. Distribution of Brazilian dermatologists according to geographic location, population and HDI of municipalities: an ecological study*

    PubMed Central

    Schmitt, Juliano Vilaverde; Miot, Hélio Amante

    2014-01-01

    This study investigated the geographic distribution of dermatologists in Brazilian municipalities in relation to population, region of the country and human development index (HDI). We conducted an ecological study based on data from the 2010 census, the 2010 human development index, and the records of the Brazilian Society of Dermatology. 5,565 municipalities and 6,718 dermatologists were surveyed. Only 504 (9.1%) municipalities had dermatologists, and these accounted for 56.2% of the Brazilian population. The population size and HDI cutoffs that best discriminated municipalities without dermatologists were 28,000 inhabitants and 0.71, respectively. The average density of dermatologists in cities was 1 per 23,000 inhabitants, and variations were independently associated with the HDI, the population of the municipality and the region of the country. PMID:25387516

  5. A mitochondrial location for haemoglobins--dynamic distribution in ageing and Parkinson's disease.

    PubMed

    Shephard, Freya; Greville-Heygate, Oliver; Marsh, Oliver; Anderson, Susan; Chakrabarti, Lisa

    2014-01-01

    Haemoglobins are iron-containing proteins that transport oxygen in the blood of most vertebrates. The mitochondrion is the cellular organelle which consumes oxygen in order to synthesise ATP. Mitochondrial dysfunction is implicated in neurodegeneration and ageing. We find that α and β haemoglobin (Hba and Hbb) proteins are altered in their distribution in mitochondrial fractions from degenerating brain. We demonstrate that both Hba and Hbb are co-localised with the mitochondrion in mammalian brain. The precise localisation of the Hbs is within the inner membrane space and associated with the inner mitochondrial membrane. Relative mitochondrial-to-cytoplasmic ratios of Hba and Hbb show changing distributions of these proteins during the process of neurodegeneration in the pcd(5j) mouse brain. A significant difference in Hba and Hbb content in the mitochondrial fraction is seen at 31 days after birth, which corresponds to the stage when dynamic neuronal loss is measured to be greatest in the Purkinje Cell Degeneration mouse. We also report changes in mitochondrial Hba and Hbb levels in ageing brain and muscle. Significant differences in mitochondrial Hba and Hbb can be seen when comparing aged brain to muscle, suggesting tissue-specific functions of these proteins in the mitochondrion. In muscle there are significant differences between Hba levels in old and young mitochondria. To understand whether the changes detected in mitochondrial Hbs are of clinical significance, we examined Parkinson's disease (PD) brain; immunohistochemistry studies suggest that cell bodies in the substantia nigra accumulate mitochondrial Hb. However, western blotting of mitochondrial fractions from PD and control brains indicates significantly less Hb in PD brain mitochondria. One explanation could be a specific loss of cells containing mitochondria loaded with Hb proteins. Our study opens the door to an examination of the role of Hb function, within the context of the mitochondrion

  6. A mitochondrial location for haemoglobins—Dynamic distribution in ageing and Parkinson's disease

    PubMed Central

    Shephard, Freya; Greville-Heygate, Oliver; Marsh, Oliver; Anderson, Susan; Chakrabarti, Lisa

    2014-01-01

    Haemoglobins are iron-containing proteins that transport oxygen in the blood of most vertebrates. The mitochondrion is the cellular organelle which consumes oxygen in order to synthesise ATP. Mitochondrial dysfunction is implicated in neurodegeneration and ageing. We find that α and β haemoglobin (Hba and Hbb) proteins are altered in their distribution in mitochondrial fractions from degenerating brain. We demonstrate that both Hba and Hbb are co-localised with the mitochondrion in mammalian brain. The precise localisation of the Hbs is within the inner membrane space and associated with the inner mitochondrial membrane. Relative mitochondrial-to-cytoplasmic ratios of Hba and Hbb show changing distributions of these proteins during the process of neurodegeneration in the pcd5j mouse brain. A significant difference in Hba and Hbb content in the mitochondrial fraction is seen at 31 days after birth, which corresponds to the stage when dynamic neuronal loss is measured to be greatest in the Purkinje Cell Degeneration mouse. We also report changes in mitochondrial Hba and Hbb levels in ageing brain and muscle. Significant differences in mitochondrial Hba and Hbb can be seen when comparing aged brain to muscle, suggesting tissue-specific functions of these proteins in the mitochondrion. In muscle there are significant differences between Hba levels in old and young mitochondria. To understand whether the changes detected in mitochondrial Hbs are of clinical significance, we examined Parkinson's disease (PD) brain; immunohistochemistry studies suggest that cell bodies in the substantia nigra accumulate mitochondrial Hb. However, western blotting of mitochondrial fractions from PD and control brains indicates significantly less Hb in PD brain mitochondria. One explanation could be a specific loss of cells containing mitochondria loaded with Hb proteins. Our study opens the door to an examination of the role of Hb function, within the context of the mitochondrion

  7. Arsenic distribution in soils and rye plants of a cropland located in an abandoned mining area.

    PubMed

    Álvarez-Ayuso, Esther; Abad-Valle, Patricia; Murciego, Ascensión; Villar-Alonso, Pedro

    2016-01-15

    A mining impacted cropland was studied in order to assess its As pollution level and the derived environmental and health risks. Profile soil samples (0-50 cm) and rye plant samples were collected at different distances (0-150 m) from the nearby mine dump and analyzed for their As content and distribution. These cropland soils were sandy, acidic and poor in organic matter and Fe/Al oxides. The soil total As concentrations (38-177 mg kg(-1)) and, especially, the soil soluble As concentrations (0.48-4.1 mg kg(-1)) greatly exceeded the safe limits for agricultural use of soils. Moreover, the soil As fractions prone to mobilization reached 25-69% of total As levels, as determined using (NH4)2SO4, NH4H2PO4 and (NH4)2C2O4·H2O as sequential extractants. Arsenic in rye plants was primarily distributed in roots (3.4-18.8 mg kg(-1)), with restricted translocation to shoots (TF=0.05-0.26) and grains (TF=<0.02-0.14). The mechanism behind this excluder behavior is likely arsenate reduction to arsenite in roots, followed by complexation with thiols, as suggested by the high arsenite level in rye roots (up to 95% of the total As content) and the negative correlation between thiol concentrations in rye roots and As concentrations in rye shoots (|R|=0.770; p<0.01). Accordingly, in spite of the high mobile and mobilizable As contents in soils, As concentrations in rye above-ground tissues comply with the European regulation on undesirable substances in animal feed. Likewise, rye grain As concentrations were below the maximum tolerable concentration in cereals established by international legislation. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Verification of patient-specific dose distributions in proton therapy using a commercial two-dimensional ion chamber array

    SciTech Connect

    Arjomandy, Bijan; Sahoo, Narayan; Ciangaru, George; Zhu, Ronald; Song Xiaofei; Gillin, Michael

    2010-11-15

    Purpose: The purpose of this study was to determine whether a two-dimensional (2D) ion chamber array detector quickly and accurately measures patient-specific dose distributions in treatment with passively scattered and spot scanning proton beams. Methods: The 2D ion chamber array detector MatriXX was used to measure the dose distributions in a plastic water phantom from passively scattered and spot scanning proton beam fields planned for patient treatment. Planar dose distributions were measured using MatriXX, and the distributions were compared to those calculated using a treatment-planning system. The dose distributions generated by the treatment-planning system and a film dosimetry system were similarly compared. Results: For passively scattered proton beams, the gamma index for the dose-distribution comparison for treatment fields for three patients with prostate cancer and for one patient with lung cancer was less than 1.0 for 99% and 100% of pixels, respectively, for a 3% dose tolerance and 3 mm distance-to-agreement. For spot scanning beams, the mean (± standard deviation) percentages of pixels with gamma indices meeting the passing criteria were 97.1% ± 1.4% and 98.8% ± 1.4% for MatriXX and film dosimetry, respectively, for 20 fields used to treat patients with prostate cancer. Conclusions: Unlike film dosimetry, MatriXX provides not only 2D dose-distribution information but also absolute dosimetry in fractions of minutes with acceptable accuracy. The results of this study indicate that MatriXX can be used to verify patient-field-specific dose distributions in proton therapy.
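
    The gamma comparison used above is a standard, well-defined computation. A brute-force sketch for 2D planes (Python/numpy, global normalisation, 3%/3 mm by default; the function and argument names are mine, not from the paper) — a pixel passes when gamma <= 1, and reported pass rates are the fraction of passing pixels, usually restricted to pixels above a low-dose threshold:

      import numpy as np

      def gamma_map(dose_eval, dose_ref, spacing_mm, dose_tol=0.03, dta_mm=3.0):
          # Global gamma: for every reference pixel, minimise the combined
          # dose-difference / distance-to-agreement metric over all
          # evaluated pixels. O(N^2) brute force, fine for small planes.
          ny, nx = dose_ref.shape
          yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
          dmax = dose_ref.max()                    # global normalisation
          gamma = np.empty_like(dose_ref, dtype=float)
          for j in range(ny):
              for i in range(nx):
                  r2 = ((yy - j) ** 2 + (xx - i) ** 2) * spacing_mm ** 2
                  dd2 = ((dose_eval - dose_ref[j, i]) / (dose_tol * dmax)) ** 2
                  gamma[j, i] = np.sqrt((r2 / dta_mm ** 2 + dd2).min())
          return gamma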

  9. Experimental Verification of Application of Looped System and Centralized Voltage Control in a Distribution System with Renewable Energy Sources

    NASA Astrophysics Data System (ADS)

    Hanai, Yuji; Hayashi, Yasuhiro; Matsuki, Junya

    The line voltage control in a distribution network is one of the most important issues for the penetration of renewable energy sources (RES). A loop distribution network configuration is an effective solution to the voltage and distribution loss issues associated with the penetration of RES. In this paper, for a loop distribution network, the authors propose a voltage control method based on tap change control of the load ratio control transformer (LRT) and active/reactive power control of RES. Tap change control of the LRT plays the major role in the proposed voltage control, and the active/reactive power control of RES supports it when deviation beyond the upper or lower voltage limit is otherwise unavoidable. The proposed method adopts a SCADA system based on data measured by IT switches, i.e. sectionalizing switches with sensors installed in the distribution feeder. In order to check the validity of the proposed voltage control method, experimental simulations using a distribution system analog simulator “ANSWER” are carried out, evaluating the voltage maintenance capability under both normal and emergency conditions.
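
    A minimal sketch of the division of labour described above (Python; entirely illustrative — it assumes, as a simplification, that a tap change shifts every node voltage by the same per-unit amount): the LRT tap is chosen first, and RES active/reactive power control would only be invoked if no tap removes all limit violations.

      def choose_tap(node_voltages_pu, tap_now, tap_step_pu=0.0125,
                     tap_range=(-8, 8), v_min=0.95, v_max=1.05):
          # Count limit violations for each candidate tap and pick the tap
          # with the fewest violations, preferring small tap movements.
          def violations(shift):
              return sum(1 for v in node_voltages_pu
                         if not v_min <= v + shift <= v_max)
          return min(range(tap_range[0], tap_range[1] + 1),
                     key=lambda t: (violations((t - tap_now) * tap_step_pu),
                                    abs(t - tap_now)))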

  10. Impacts to the chest of PMHSs - Influence of impact location and load distribution on chest response.

    PubMed

    Holmqvist, Kristian; Svensson, Mats Y; Davidsson, Johan; Gutsche, Andreas; Tomasch, Ernst; Darok, Mario; Ravnik, Dean

    2016-02-01

    The chest response of the human body has been studied for several load conditions, but is not well known for steering wheel rim-to-chest impacts in heavy goods vehicle frontal collisions. The aim of this study was to determine the response of the human chest in a set of simulated steering wheel impacts. PMHS tests were carried out and analysed. The steering wheel load pattern was represented by a rigid pendulum with a straight bar-shaped front; a crash test dummy chest calibration pendulum was utilised for comparison. In this study, a set of rigid bar impacts was directed at various heights of the chest, spanning approximately 120 mm around the fourth intercostal space. The impact energy was set below the level estimated to cause rib fracture. The analysed results consist of responses evaluated with respect to differences in impactor shape and impact height on compression and viscous-criterion chest injury responses. The results showed that the bar impacts consistently produced lower scaled chest compressions than the hub; the middle bar responses were around 90% of the hub responses. A superior bar impact produced less chest compression, with an average response of 86% of the middle bar response; for inferior bar impacts, the chest compression response was 116% of that at the middle position. The damping properties of the chest caused the compression in high-speed bar impacts to decrease to 88% of that in low-speed impacts. From the analysis it could be concluded that the bar impact shape produces lower chest criteria responses than the hub, and that the bar responses depend on the impact location on the chest. Inertial and viscous effects of the upper body affect the responses. The results can be used to assess the responses of human substitutes such as anthropomorphic test devices and finite element human body models, which will benefit the development of heavy goods vehicle safety systems.

  11. SU-E-T-798: Verification of 3DVH Dose Distribution Before Clinical Implementation for Patient-Specific IMRT QA

    SciTech Connect

    McFadden, D

    2015-06-15

    Purpose: In recent years patient-specific IMRT QA has transitioned from film and chamber measurements to beam-by-beam 2D array measurements. 3DVH takes this transition a step further by estimating the delivered 3D dose distribution from 2D per-beam diode array measurements. In this study, the 3D dose distribution generated by 3DVH is compared to film and chamber measurements, and the accuracy of ROI volumes and of error detection is investigated. Methods: Composite film and ion chamber measurements in a solid water phantom were performed for 9 IMRT PINNACLE patient plans covering 4 treatment sites. The film and chamber measurements were compared to the dose distribution predicted by 3DVH from MAPCHECK2 per-beam measurements. The absolute point dose on the central axis (CAX) was extracted from the predicted 3DVH and PINNACLE dose distributions and compared by taking the ratio of measured to predicted doses. The dose distribution measured with film was compared to the corresponding plane (AX, SAG, COR) extracted from the dose distributions predicted by 3DVH and PINNACLE using a 2D gamma analysis with 2% dose tolerance, 2 mm DTA, 20% threshold, and global normalization. In addition, the percent difference between 3DVH and PINNACLE ROI volumes was calculated. Results: The average ratio of the measured point dose to the 3DVH-predicted dose was 1.017 (σ=0.011). The average gamma passing rate for measured vs 3DVH dose distributions was 95.1% (σ=2.53%). The average percent difference of 3DVH vs PINNACLE ROI volumes was 2.29% (σ=2.5%). Conclusion: The dose distributions predicted by 3DVH from MAPCHECK2 measurements match those that would have been obtained using film and chamber. The ROI volumes used in 3DVH are not an exact match to those in PINNACLE; this effect requires more investigation, and the accuracy of error detection by 3DVH is currently being investigated.

  12. Spatiotemporal Distribution of Location and Object Effects in Primary Motor Cortex Neurons during Reach-to-Grasp

    PubMed Central

    Rouse, Adam G.

    2016-01-01

    Reaching and grasping typically are considered to be spatially separate processes that proceed concurrently in the arm and the hand, respectively. The proximal representation in the primary motor cortex (M1) controls the arm for reaching, while the distal representation controls the hand for grasping. Many studies of M1 activity therefore have focused either on reaching to various locations without grasping different objects, or else on grasping different objects all at the same location. Here, we recorded M1 neurons in the anterior bank and lip of the central sulcus as monkeys performed more naturalistic movements, reaching toward, grasping, and manipulating four different objects in up to eight different locations. We quantified the extent to which variation in firing rates depended on location, on object, and on their interaction, all as a function of time. Activity proceeded largely in two sequential phases: the first related predominantly to the location to which the upper extremity reached, and the second related to the object about to be grasped. Both phases involved activity distributed widely throughout the sampled territory, spanning both the proximal and the distal upper extremity representation in caudal M1. Our findings indicate that naturalistic reaching and grasping, rather than being spatially segregated processes that proceed concurrently, are each spatially distributed processes controlled by caudal M1 largely sequentially. Rather than neuromuscular processes separated in space but not time, reaching and grasping are separated more in time than in space. SIGNIFICANCE STATEMENT Reaching and grasping typically are viewed as processes that proceed concurrently in the arm and hand, respectively. The arm region in the primary motor cortex (M1) is assumed to control reaching, while the hand region controls grasping. During naturalistic reach–grasp–manipulate movements, we found, however, that neuron activity proceeds largely in two sequential

  13. Drop size distributions and related properties of fog for five locations measured from aircraft

    NASA Technical Reports Server (NTRS)

    Zak, J. Allen

    1994-01-01

    Fog drop size distributions were collected from aircraft as part of the Synthetic Vision Technology Demonstration Program. Three west coast marine advection fogs, one frontal fog, and a radiation fog were sampled from the top of the cloud to the bottom as the aircraft descended on a 3-degree glideslope. Drop size versus altitude versus concentration is shown in three-dimensional plots for each 10-meter altitude interval from 1-minute samples; median volume radius and liquid water content are also shown. Advection fogs contained the largest drops, with median volume radii of 5-8 micrometers, although the drop sizes in the radiation fog were also large just above the runway surface. Liquid water content increased with height, and the total number of drops generally increased with time. Multimodal variations in number density and particle size were noted in most samples, with a peak concentration of small drops (2-5 micrometers) at low altitudes, a mid-altitude peak of drops of 5-11 micrometers, and a high-altitude peak of larger drops (11-15 micrometers and above). These observations are compared with others and corroborate previous results on gross fog properties, although there is considerable variation with time and altitude even within the same type of fog.
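
    The two bulk quantities reported above follow directly from a binned drop size distribution. A sketch of the standard definitions (Python/numpy; function names and units are mine, not from the report): the median volume radius is the radius below which half of the liquid water volume lies, and liquid water content sums the per-bin water volume assuming a water density of 1 g/cm3.

      import numpy as np

      def fog_bulk_properties(radius_um, number_per_cm3):
          # Per-bin liquid volume (cm3 of water per cm3 of air).
          r_cm = radius_um * 1e-4
          vol = number_per_cm3 * (4.0 / 3.0) * np.pi * r_cm ** 3
          lwc_g_m3 = vol.sum() * 1.0e6          # 1 g/cm3 water, 1e6 cm3/m3
          cum = np.cumsum(vol) / vol.sum()      # cumulative volume fraction
          mvr_um = np.interp(0.5, cum, radius_um)
          return mvr_um, lwc_g_m3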

  14. Locating ethics in data science: responsibility and accountability in global and distributed knowledge production systems.

    PubMed

    Leonelli, Sabina

    2016-12-28

    The distributed and global nature of data science creates challenges for evaluating the quality, import and potential impact of the data and knowledge claims being produced. This has significant consequences for the management and oversight of responsibilities and accountabilities in data science. In particular, it makes it difficult to determine who is responsible for what output, and how such responsibilities relate to each other; what 'participation' means and which accountabilities it involves, with regard to data ownership, donation and sharing as well as data analysis, re-use and authorship; and whether the trust placed on automated tools for data mining and interpretation is warranted (especially as data processing strategies and tools are often developed separately from the situations of data use where ethical concerns typically emerge). To address these challenges, this paper advocates a participative, reflexive management of data practices. Regulatory structures should encourage data scientists to examine the historical lineages and ethical implications of their work at regular intervals. They should also foster awareness of the multitude of skills and perspectives involved in data science, highlighting how each perspective is partial and in need of confrontation with others. This approach has the potential to improve not only the ethical oversight for data science initiatives, but also the quality and reliability of research outputs. This article is part of the themed issue 'The ethical impact of data science'.

  15. Locating ethics in data science: responsibility and accountability in global and distributed knowledge production systems

    PubMed Central

    2016-01-01

    The distributed and global nature of data science creates challenges for evaluating the quality, import and potential impact of the data and knowledge claims being produced. This has significant consequences for the management and oversight of responsibilities and accountabilities in data science. In particular, it makes it difficult to determine who is responsible for what output, and how such responsibilities relate to each other; what ‘participation’ means and which accountabilities it involves, with regard to data ownership, donation and sharing as well as data analysis, re-use and authorship; and whether the trust placed on automated tools for data mining and interpretation is warranted (especially as data processing strategies and tools are often developed separately from the situations of data use where ethical concerns typically emerge). To address these challenges, this paper advocates a participative, reflexive management of data practices. Regulatory structures should encourage data scientists to examine the historical lineages and ethical implications of their work at regular intervals. They should also foster awareness of the multitude of skills and perspectives involved in data science, highlighting how each perspective is partial and in need of confrontation with others. This approach has the potential to improve not only the ethical oversight for data science initiatives, but also the quality and reliability of research outputs. This article is part of the themed issue ‘The ethical impact of data science’. PMID:28336799

  16. Feasibility of Locating Leakages in Sewage Pressure Pipes Using the Distributed Temperature Sensing Technology.

    PubMed

    Apperl, Benjamin; Pressl, Alexander; Schulz, Karsten

    2017-01-01

    The cost-effective maintenance of underwater pressure pipes for sewage disposal in Austria requires the detection and localization of leakages. Extrusion of wastewater into lakes can heavily affect the water and bathing quality of surrounding waters. The Distributed Temperature Sensing (DTS) technology is a widely used technique for oil and gas pipeline leakage detection. While in pipeline leakage detection fiber optic cables are installed permanently on the outside of the pipe or within its protective sheathing, this paper aims at testing the feasibility of detecting leakages with a temporarily introduced fiber optic cable inside the pipe. The detection and localization were tested in a laboratory experiment. The intrusion of water from leakages into the pipe, producing a local temperature drop, served as the indicator for leakages. Measurements were taken under varying conditions, including the number of leakages and the positioning of the fiber optic cable. The experiments showed that leakages could be detected accurately with the proposed methodology when the measuring resolution, temperature gradient and measurement time were properly selected. Despite the successful application of DTS for leakage detection in this lab environment, challenges in real system applications may arise from temperature gradients within the pipe system over longer distances and from the placement of the cable into the real pipe system.
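
    A minimal sketch of the detection principle (Python/numpy; the threshold and all names are assumptions, not from the paper): a leak shows up as a local drop of the measured DTS trace below a pre-leak baseline trace.

      import numpy as np

      def find_leaks(temp_c, baseline_c, position_m, drop_threshold_c=0.5):
          # Positive anomaly where intruding water has locally cooled the
          # fibre relative to the baseline recorded before the leak.
          anomaly = baseline_c - temp_c
          idx = np.where(anomaly > drop_threshold_c)[0]
          return position_m[idx], anomaly[idx]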

  17. SU-D-BRF-02: In Situ Verification of Radiation Therapy Dose Distributions From High-Energy X-Rays Using PET Imaging

    SciTech Connect

    Zhang, Q; Kai, L; Wang, X; Hua, B; Chui, L; Wang, Q; Ma, C

    2014-06-01

    Purpose: To study the possibility of in situ verification of radiation therapy dose distributions using PET imaging, based on the activity distribution of ¹¹C and ¹⁵O produced via photonuclear reactions in a patient irradiated by 45 MV x-rays. Methods: The method is based on photonuclear reactions in ¹²C and ¹⁶O, the most abundant elemental constituents of body tissue, irradiated by bremsstrahlung photons with energies up to 45 MeV, resulting primarily in ¹¹C and ¹⁵O, which are positron-emitting nuclei. The induced positron activity distributions were obtained with a PET scanner in the same room as a LA45 accelerator (Top Grade Medical, Beijing, China). The experiments were performed with a brain phantom using realistic treatment plans. The phantom was scanned at 20 min and 2-5 min after irradiation for ¹¹C and ¹⁵O, respectively, with an interval of 20 minutes between the two scans. The activity distributions of ¹¹C and ¹⁵O within the irradiated volume can be separated from each other because their half-lives are 20 min and 2 min, respectively. Three x-ray energies were used: 10 MV, 25 MV and 45 MV. The radiation dose ranged from 1.0 Gy to 10.0 Gy per treatment. Results: It was confirmed that no activity was detected at 10 MV beam energy, which is far below the energy threshold for photonuclear reactions. At 25 MV, activity distribution images were observed on PET but required a much higher radiation dose to obtain good quality. For 45 MV photon beams, good quality activation images were obtained with a 2-3 Gy radiation dose, which is the typical daily dose for radiation therapy. Conclusion: The activity distributions of ¹⁵O and ¹¹C could be used to derive the dose distribution of 45 MV x-rays at the regular daily dose level. This method can potentially be used to verify in situ dose distributions of patients treated on the LA45 accelerator.
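
    The two-scan protocol works because the two activities decay at very different rates. A sketch of the separation arithmetic (Python/numpy, using the half-lives quoted above; function and variable names are assumptions): two total-activity measurements at different times give a 2x2 linear system for the initial ¹¹C and ¹⁵O activities.

      import numpy as np

      LAMBDA_C11 = np.log(2) / 20.0   # 11C half-life ~20 min (per the abstract)
      LAMBDA_O15 = np.log(2) / 2.0    # 15O half-life ~2 min (per the abstract)

      def separate_isotopes(t1, a1, t2, a2):
          # Model: A(t) = A_C*exp(-lc*t) + A_O*exp(-lo*t), t in minutes.
          # A late scan is essentially pure 11C; an early one mixes both.
          m = np.array([[np.exp(-LAMBDA_C11 * t1), np.exp(-LAMBDA_O15 * t1)],
                        [np.exp(-LAMBDA_C11 * t2), np.exp(-LAMBDA_O15 * t2)]])
          return np.linalg.solve(m, np.array([a1, a2]))  # (A_C0, A_O0)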

  18. Estimation of hydrothermal deposits location from magnetization distribution and magnetic properties in the North Fiji Basin

    NASA Astrophysics Data System (ADS)

    Choi, S.; Kim, C.; Park, C.; Kim, H.

    2013-12-01

    The North Fiji Basin is one of the youngest back-arc basins in the southwest Pacific (opening from 12 Ma ago). We performed a marine magnetic and bathymetry survey in the North Fiji Basin in April 2012 to search for submarine hydrothermal deposits. Magnetic and bathymetry datasets were acquired using the Multi-Beam Echo Sounder EM120 (Kongsberg Co.) and the Overhauser Proton Magnetometer SeaSPY (Marine Magnetics Co.). The data were processed to obtain detailed seabed topography, the magnetic anomaly, reduction to the pole (RTP), the analytic signal and the magnetization. The study comprises two areas on the Central Spreading Ridge (CSR), KF-1 (longitude 173.5-173.7, latitude -16.2 to -16.5) and KF-3 (longitude 173.4-173.6, latitude -18.7 to -19.1), and one area at the Triple Junction (TJ), KF-2 (longitude 173.7-174, latitude -16.8 to -17.2). The seabed topography of KF-1 shows a thin horst between two grabens trending NW-SE. The magnetic properties of KF-1 show high magnetic anomalies in the central part and a magnetic lineament structure trending E-W. In the magnetization distribution of KF-1, a low-magnetization zone matches well with a strong analytic signal in the northeastern part. KF-2 contains the triple junction; its seabed topography forms a Y-shape with a high feature at the center of the TJ. The magnetic properties of KF-2 display high magnetic anomalies at the center of the N-S spreading ridge and in the northwestern part. In the magnetization distribution of KF-2, a low-magnetization zone matches well with a strong analytic signal in the northeastern part. The seabed topography of KF-3 presents a flat, high, dome-like structure at the center axis, with some seamounts scattered around the axis. The magnetic properties of KF-3 show high magnetic anomalies at the center of the N-S spreading ridge. In the magnetization distribution of KF-3, the low-magnetization zone does not match the strong analytic signal in this area. The difference of KF-3

  19. Screening for the Location of RNA using the Chloride Ion Distribution in Simulations of Virus Capsids.

    PubMed

    Larsson, Daniel S D; van der Spoel, David

    2012-07-10

    The complete structure of the genomic material inside a virus capsid remains elusive, although a limited amount of symmetric nucleic acid can be resolved in the crystal structures of 17 icosahedral viruses. The negatively charged sugar-phosphate backbone of RNA and DNA as well as the large positive charge of the interior surface of the virus capsids suggest that electrostatic complementarity is an important factor in the packaging of the genomes in these viruses. To test how much packing information is encoded by the electrostatic and steric envelope of the capsid interior, we performed extensive all-atom molecular dynamics (MD) simulations of virus capsids with explicit water molecules and solvent ions. The model systems were two small plant viruses in which significant amounts of RNA have been observed by X-ray crystallography: satellite tobacco mosaic virus (STMV, 62% RNA visible) and satellite tobacco necrosis virus (STNV, 34% RNA visible). Simulations of half-capsids of these viruses with no RNA present revealed that the binding sites of RNA correlated well with regions populated by chloride ions, suggesting that it is possible to screen for the binding sites of nucleic acids by determining the equilibrium distribution of negative ions. By including the crystallographically resolved RNA in addition to ions, we predicted the localization of the unresolved RNA in the viruses. Both viruses showed a hot spot for RNA binding at the 5-fold symmetry axis. The MD simulations were compared to predictions of the chloride density based on nonlinear Poisson-Boltzmann equation (PBE) calculations with mobile ions. Although the predictions are superficially similar, the PBE calculations overestimate the ion concentration close to the capsid surface and underestimate it far away, mainly because protein dynamics is not taken into account. Density maps from chloride screening can be used to aid in building atomic models of packaged virus genomes. Knowledge of the principles of

  20. Distribution of incident rainfall through vegetation in a watershed located in southern Spain

    NASA Astrophysics Data System (ADS)

    Moreno Perez, Maria Fatima; Roldan Cañas, Jose; Perez Arellano, Rafael; Cienfuegos, Ignacio

    2013-04-01

    The rainfall interception by the vegetation canopy is one of the main factors involved in soil moisture and runoff, because a large proportion of rainfall returns to the atmosphere as evaporation. Evaporation losses can amount to between 20 and 40% of rainfall, so interception should be taken into account in basin water balances, especially in arid and semi-arid regions with scanty rainfall. The purpose of this study was to determine the distribution of rainwater through the canopy of the trees and shrubs present in the watershed of "The Cabril" (Cordoba, Spain). The incident precipitation, throughfall and cortical (stem) flow were quantified for two agricultural years, 2010/11 and 2011/12, in the predominant vegetation, rockrose (Cistus ladanifer) and pine trees (Pinus pinea), in order to determine the volume of precipitation intercepted and the influence of rainfall intensity and duration on interception. A total of 1134.4 mm of rain was collected over 102 storms. In the pines, 31.4% was intercepted and evaporated into the atmosphere; in the rockrose, 19%. Cortical flow represented 0.3% in pine and 17.7% in rockrose, and throughfall represented 68.3% in pine and 63.3% in rockrose. Although numerical differences exist between the vegetation covers, the results indicate significant correlations of throughfall, cortical flow and interception with precipitation in both pine and rockrose. The amount of water needed to saturate the tops of the pines varied between 1.6 and 9.5 mm; in rockrose the variation was 1.8 to 3.9 mm, depending on the intensity of rainfall. Interception reached its highest values with less intense rainfall, decreasing considerably as rainfall duration and intensity increase. Precipitation events exceeding 20 mm cause a greater increase of moisture beneath the pine canopy than outside it, while the opposite occurs for events of less than 20 mm; this can be explained by the very high interception during small events.
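
    The interception figures above are the residual of the event water balance I = P - TF - SF (P gross rainfall, TF throughfall, SF cortical/stem flow). A worked check against the pine fractions reported in the abstract (Python; only the fractions are taken from the study):

      def interception_fraction(gross_mm, throughfall_mm, stemflow_mm):
          # Interception as the water-balance residual.
          return (gross_mm - throughfall_mm - stemflow_mm) / gross_mm

      p = 1134.4                                   # two-year gross rainfall, mm
      print(interception_fraction(p, 0.683 * p, 0.003 * p))   # ~0.314 for pine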

  1. What influences national and foreign physicians' geographic distribution? An analysis of medical doctors' residence location in Portugal.

    PubMed

    Russo, Giuliano; Ferrinho, Paulo; de Sousa, Bruno; Conceição, Cláudia

    2012-07-02

    The debate over physicians' geographical distribution has attracted the attention of the economic and public health literature over the last forty years. Nonetheless, it remains unclear what influences physicians' location, and whether foreign physicians contribute to filling the geographical gaps left by national doctors in any given country. The present research sets out to investigate the current distribution of national and international physicians in Portugal, with the objective of understanding its determinants and providing an evidence base for policy-makers to identify policies to influence it. A cross-sectional study of physicians currently registered in Portugal was conducted to describe the population and explore the association of physician residence patterns with relevant personal and municipality characteristics. Data from the Portuguese Medical Council on physicians' residence and characteristics were analysed, as well as data from the National Institute of Statistics on municipalities' population, living standards and health care network. Descriptive statistics, chi-square tests, negative binomial and logistic regression modelling were applied to determine: (a) municipality characteristics predicting Portuguese and International physicians' geographical distribution, and (b) doctors' characteristics that could increase the odds of residing outside the country's metropolitan areas. There were 39,473 physicians in Portugal in 2008, 51.1% of whom were male and 40.2% between 41 and 55 years of age. They were predominantly Portuguese (90.5%), with Spanish, Brazilian and African nationalities also represented. Population, Population's Purchasing Power, Nurses per capita and Municipality Development Index (MDI) were the municipality characteristics displaying the strongest association with national physicians' location. For foreign physicians, the MDI was not statistically significant, while municipalities' foreign population applying for residence

  2. What influences national and foreign physicians’ geographic distribution? An analysis of medical doctors’ residence location in Portugal

    PubMed Central

    2012-01-01

    Background The debate over physicians’ geographical distribution has attracted the attention of the economic and public health literature over the last forty years. Nonetheless, it remains unclear what influences physicians’ location, and whether foreign physicians contribute to filling the geographical gaps left by national doctors in any given country. The present research sets out to investigate the current distribution of national and international physicians in Portugal, with the objective of understanding its determinants and providing an evidence base for policy-makers to identify policies to influence it. Methods A cross-sectional study of physicians currently registered in Portugal was conducted to describe the population and explore the association of physician residence patterns with relevant personal and municipality characteristics. Data from the Portuguese Medical Council on physicians’ residence and characteristics were analysed, as well as data from the National Institute of Statistics on municipalities’ population, living standards and health care network. Descriptive statistics, chi-square tests, negative binomial and logistic regression modelling were applied to determine: (a) municipality characteristics predicting Portuguese and International physicians’ geographical distribution, and (b) doctors’ characteristics that could increase the odds of residing outside the country’s metropolitan areas. Results There were 39,473 physicians in Portugal in 2008, 51.1% of whom were male and 40.2% between 41 and 55 years of age. They were predominantly Portuguese (90.5%), with Spanish, Brazilian and African nationalities also represented. Population, Population’s Purchasing Power, Nurses per capita and Municipality Development Index (MDI) were the municipality characteristics displaying the strongest association with national physicians’ location. For foreign physicians, the MDI was not statistically significant, while municipalities

  3. TESTING AND VERIFICATION OF REAL-TIME WATER QUALITY MONITORING SENSORS IN A DISTRIBUTION SYSTEM AGAINST INTRODUCED CONTAMINATION

    EPA Science Inventory

    Drinking water distribution systems reach the majority of American homes, business and civic areas, and are therefore an attractive target for terrorist attack via direct contamination, or backflow events. Instrumental monitoring of such systems may be used to signal the prese...

  4. Experimental verification of improved depth-dose distribution using hyper-thermal neutron incidence in neutron capture therapy.

    PubMed

    Sakurai, Y; Kobayashi, T

    2001-01-01

    We have proposed the utilization of 'hyper-thermal neutrons' for neutron capture therapy (NCT) from the viewpoint of improving the dose distribution in the human body. In order to verify the improved depth-dose distribution due to hyper-thermal neutron incidence, two experiments were carried out using a test-type hyper-thermal neutron generator at a thermal neutron irradiation field of the Kyoto University Reactor (KUR) that is actually used for NCT clinical irradiation. The free-in-air experiment on the spectrum-shift characteristics confirmed that hyper-thermal neutrons of approximately 860 K at maximum could be obtained with the generator. The phantom experiment confirmed the improvement effect and the controllability of the depth-dose distribution: for example, the relative neutron depth-dose distribution was improved by about 1 cm with 860 K hyper-thermal neutron incidence, compared to normal thermal neutron incidence.

  5. TESTING AND VERIFICATION OF REAL-TIME WATER QUALITY MONITORING SENSORS IN A DISTRIBUTION SYSTEM AGAINST INTRODUCED CONTAMINATION

    EPA Science Inventory

    Drinking water distribution systems reach the majority of American homes, business and civic areas, and are therefore an attractive target for terrorist attack via direct contamination, or backflow events. Instrumental monitoring of such systems may be used to signal the prese...

  6. Dip distribution of Oita-Kumamoto Tectonic Line located in central Kyushu, Japan, estimated by eigenvectors of gravity gradient tensor

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2016-09-01

    We estimated the dip distribution of the Oita-Kumamoto Tectonic Line, located in central Kyushu, Japan, by using the dip of the maximum eigenvector of the gravity gradient tensor. A series of earthquakes in Kumamoto and Oita beginning on 14 April 2016 occurred along this tectonic line, the largest of which was M = 7.3. Because a gravity gradiometry survey has not been conducted in the study area, we calculated the gravity gradient tensor from the Bouguer gravity anomaly and employed it in the analysis. The general dip of the Oita-Kumamoto Tectonic Line was found to be about 65° and tends to be higher towards its eastern end. In addition, we estimated the dip around the largest earthquake to be about 60° from the gravity gradient tensor. This result agrees with the dip of the earthquake source fault obtained from Global Navigation Satellite System data analysis.
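
    A minimal sketch of the eigenvector step (Python/numpy; how the tensor is built from the Bouguer anomaly is not reproduced here): take the eigenvector belonging to the largest eigenvalue of the symmetric 3x3 gravity gradient tensor and measure its angle from the horizontal.

      import numpy as np

      def max_eigenvector_dip(tensor):
          # tensor: symmetric 3x3 gravity gradient tensor, axes (x, y, z)
          # with z vertical. Returns the dip in degrees from horizontal.
          vals, vecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
          v = vecs[:, np.argmax(vals)]          # maximum eigenvector
          return np.degrees(np.arctan2(abs(v[2]), np.hypot(v[0], v[1])))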

  7. Do as I say, not as I do: a lexical distributional account of English locative verb class acquisition.

    PubMed

    Twomey, Katherine E; Chang, Franklin; Ambridge, Ben

    2014-09-01

    Children overgeneralise verbs to ungrammatical structures early in acquisition, but retreat from these overgeneralisations as they learn semantic verb classes. In a large corpus of English locative utterances (e.g., the woman sprayed water onto the wall/wall with water), we found structural biases which changed over development and which could explain overgeneralisation behaviour. Children and adults had similar verb classes and a correspondence analysis suggested that lexical distributional regularities in the adult input could help to explain the acquisition of these classes. A connectionist model provided an explicit account of how structural biases could be learned over development and how these biases could be reduced by learning verb classes from distributional regularities.

  8. Genus Distribution of Bacteria and Fungi Associated with Keratitis in a Large Eye Center Located in Southern China.

    PubMed

    Lin, Lixia; Lan, Weizhong; Lou, Bingsheng; Ke, Hongmin; Yang, Yuanzhe; Lin, Xiaofeng; Liang, Lingyi

    2017-04-01

    To investigate the genus distribution of bacteria and fungi associated with keratitis in a large eye center located in Southern China and to compare the results with existing data from other areas in China. All results of corneal microbiological examinations from 2009 to 2013 of patients who had been clinically diagnosed with bacterial or fungal keratitis were obtained chronologically and anonymously from the microbiology database at Zhongshan Ophthalmic Center. Smear/culture data were reviewed and analyzed. Antibiotic resistance of the harvested bacteria was also evaluated. Of 2973 samples, the microbial detection rate was 46.05%; in which 759 eyes (25.5%) were positive for bacteria, 796 eyes (26.8%) were positive for fungi, and 186 eyes (6.3%) were co-infected with both fungi and bacteria. The most common type of bacteria isolated was Staphylococcus epidermidis (31.9%), followed by Pseudomonas aeruginosa (12.4%). The most common type of fungus was Fusarium species (29.3%), followed by Aspergillus species (24.1%). For the bacteria harvested, mean antibiotic resistance was chloromycetin (34.6%), cephalosporins (20.0%), fluoroquinolones (18.6%), and aminoglycosides (10.5%). The genus distribution of organisms detected in keratitis cases in the largest eye center located in Southern China differs from those in other areas in China. In Southern China during the time period studied, S. epidermidis and Fusarium sp. were the most common pathogens of infectious keratitis. Monitoring the changing trend of pathogens as well as antibiotic resistance are warranted.

  9. Levels and spatial distribution of airborne chemical elements in a heavy industrial area located in the north of Spain.

    PubMed

    Lage, J; Almeida, S M; Reis, M A; Chaves, P C; Ribeiro, T; Garcia, S; Faria, J P; Fernández, B G; Wolterbeek, H T

    2014-01-01

    The adverse health effects of airborne particles have been subjected to intense investigation in recent years; however, more studies on the chemical characterization of particles from pollution emissions are needed to (1) identify emission sources, (2) better understand the relative toxicity of particles, and (3) pinpoint more targeted emission control strategies and regulations. The main objective of this study was to assess the levels and spatial distribution of airborne chemical elements in a heavy industrial area located in the north of Spain. Instrumental and biomonitoring techniques were integrated, and analytical methods based on k0 instrumental neutron activation analysis and particle-induced x-ray emission were used to determine element content in aerosol filters and lichens. Results indicated that, in general, local industry contributed to the emissions of As, Sb, Cu, V, and Ni, which are associated with combustion processes. In addition, the steelworks emitted significant quantities of Fe and Mn, and the cement factory was associated with Ca emissions. The spatial distribution of Zn and Al also indicated an important contribution from two industries located outside the studied area.

  10. KAT-7 SCIENCE VERIFICATION: USING H I OBSERVATIONS OF NGC 3109 TO UNDERSTAND ITS KINEMATICS AND MASS DISTRIBUTION

    SciTech Connect

    Carignan, C.; Frank, B. S.; Hess, K. M.; Lucero, D. M.; Randriamampandry, T. H.; Goedhart, S.; Passmoor, S. S.

    2013-09-15

    H I observations of the Magellanic-type spiral NGC 3109, obtained with the seven dish Karoo Array Telescope (KAT-7), are used to analyze its mass distribution. Our results are compared to those obtained using Very Large Array (VLA) data. KAT-7 is a pathfinder of the Square Kilometer Array precursor MeerKAT, which is under construction. The short baselines and low system temperature of the telescope make it sensitive to large-scale, low surface brightness emission. The new observations with KAT-7 allow the measurement of the rotation curve (RC) of NGC 3109 out to 32', doubling the angular extent of existing measurements. A total H I mass of 4.6 × 10⁸ M⊙ is derived, 40% more than what is detected by the VLA observations. The observationally motivated pseudo-isothermal dark matter (DM) halo model can reproduce the observed RC very well, but the cosmologically motivated Navarro-Frenk-White DM model gives a much poorer fit to the data. While having a more accurate gas distribution has reduced the discrepancy between the observed RC and the MOdified Newtonian Dynamics (MOND) models, this is done at the expense of having to use unrealistic mass-to-light ratios for the stellar disk and/or very large values for the MOND universal constant a₀. Different distances or H I contents cannot reconcile MOND with the observed kinematics, in view of the small errors on these two quantities. As with many slowly rotating gas-rich galaxies studied recently, the present result for NGC 3109 continues to pose a serious challenge to the MOND theory.
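
    For reference, the pseudo-isothermal halo fitted above has the standard textbook form rho(r) = rho0 / (1 + (r/rc)^2), with circular velocity v^2(r) = 4*pi*G*rho0*rc^2*(1 - (rc/r)*arctan(r/rc)). A sketch (Python/numpy; parameter values are placeholders, not the fitted NGC 3109 values):

      import numpy as np

      G = 4.30091e-6  # gravitational constant, kpc (km/s)^2 / Msun

      def v_pseudo_iso(r_kpc, rho0_msun_kpc3, rc_kpc):
          # Circular velocity (km/s) of a pseudo-isothermal sphere; r > 0.
          x = np.asarray(r_kpc, dtype=float) / rc_kpc
          v2 = 4 * np.pi * G * rho0_msun_kpc3 * rc_kpc ** 2 \
               * (1 - np.arctan(x) / x)
          return np.sqrt(v2)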

  11. KAT-7 Science Verification: Using H I Observations of NGC 3109 to Understand its Kinematics and Mass Distribution

    NASA Astrophysics Data System (ADS)

    Carignan, C.; Frank, B. S.; Hess, K. M.; Lucero, D. M.; Randriamampandry, T. H.; Goedhart, S.; Passmoor, S. S.

    2013-09-01

    H I observations of the Magellanic-type spiral NGC 3109, obtained with the seven dish Karoo Array Telescope (KAT-7), are used to analyze its mass distribution. Our results are compared to those obtained using Very Large Array (VLA) data. KAT-7 is a pathfinder of the Square Kilometer Array precursor MeerKAT, which is under construction. The short baselines and low system temperature of the telescope make it sensitive to large-scale, low surface brightness emission. The new observations with KAT-7 allow the measurement of the rotation curve (RC) of NGC 3109 out to 32', doubling the angular extent of existing measurements. A total H I mass of 4.6 × 10⁸ M⊙ is derived, 40% more than what is detected by the VLA observations. The observationally motivated pseudo-isothermal dark matter (DM) halo model can reproduce the observed RC very well, but the cosmologically motivated Navarro-Frenk-White DM model gives a much poorer fit to the data. While having a more accurate gas distribution has reduced the discrepancy between the observed RC and the MOdified Newtonian Dynamics (MOND) models, this is done at the expense of having to use unrealistic mass-to-light ratios for the stellar disk and/or very large values for the MOND universal constant a₀. Different distances or H I contents cannot reconcile MOND with the observed kinematics, in view of the small errors on these two quantities. As with many slowly rotating gas-rich galaxies studied recently, the present result for NGC 3109 continues to pose a serious challenge to the MOND theory.

  12. KAT-7 Science Verification: Using HI Observations of NGC 3109 to Understand its Kinematics and Mass Distribution

    NASA Astrophysics Data System (ADS)

    Lucero, Danielle M.; Carignan, C.; Hess, K. M.; Frank, B. S.; Randriamampandry, T. H.; Goedhart, S.; Passmoor, S. S.

    2014-01-01

    HI observations of the Magellanic-type spiral NGC 3109, obtained with the seven dish Karoo Array Telescope (KAT-7), are used to analyze its mass distribution. Our results are compared to those obtained using Very Large Array (VLA) data. KAT-7 is a pathfinder of the Square Kilometer Array precursor MeerKAT, which is under construction. The short baselines and low system temperature of the telescope make it sensitive to large-scale, low surface brightness emission. The new observations with KAT-7 allow the measurement of the rotation curve (RC) of NGC 3109 out to 32', doubling the angular extent of existing measurements. A total HI mass of 4.6 × 10⁸ M⊙ is derived, 40% more than what is detected by the VLA observations. The observationally motivated pseudo-isothermal dark matter halo model can reproduce the observed RC very well, but the cosmologically motivated Navarro-Frenk-White DM model gives a much poorer fit to the data. While having a more accurate gas distribution has reduced the discrepancy between the observed RC and the MOdified Newtonian Dynamics (MOND) models, this is done at the expense of having to use unrealistic mass-to-light ratios for the stellar disk and/or very large values for the MOND universal constant a₀. Different distances or HI contents cannot reconcile MOND with the observed kinematics, in view of the small errors on these two quantities. As with many slowly rotating gas-rich galaxies studied recently, the present result for NGC 3109 continues to pose a serious challenge to the MOND theory.

  13. Verification of Anderson Superexchange in MnO via Magnetic Pair Distribution Function Analysis and ab initio Theory

    NASA Astrophysics Data System (ADS)

    Frandsen, Benjamin A.; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.

    2016-05-01

    We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ˜1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.

  14. Verification of Anderson superexchange in MnO via magnetic pair distribution function analysis and ab initio theory

    DOE PAGES

    Frandsen, Benjamin A.; Brunelli, Michela; Page, Katharine; ...

    2016-05-11

    Here, we present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ~1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experiment and confirmed by ab initio theory.

  15. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar database. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long-term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
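
    The extrapolation step is the law of total probability: conditional fade statistics P(A >= a | R) measured at the reference site are weighted by the target site's long-term rain-rate distribution P(R). A sketch (Python/numpy; array shapes and names are hypothetical):

      import numpy as np

      def fade_exceedance(p_fade_given_rate, p_rate):
          # p_fade_given_rate: (n_rate_classes, n_fade_levels) conditional
          # exceedance statistics measured at the reference site.
          # p_rate: (n_rate_classes,) long-term rain-rate probabilities at
          # the target site. Returns P(A >= a) for each fade level.
          return p_rate @ p_fade_given_rate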

  16. Probing the location and distribution of paramagnetic centers in alkali metal-loaded zeolites through (7)Li MAS NMR.

    PubMed

    Terskikh, Victor V; Ratcliffe, Christopher I; Ripmeester, John A; Reinhold, Catherine J; Anderson, Paul A; Edwards, Peter P

    2004-09-15

    The nature and surroundings of lithium cations in lithium-exchanged X and A zeolites following loading with the alkali metals Na, K, Rb, and Cs have been studied through (7)Li solid-state NMR spectroscopy. It is demonstrated that the lithium in these zeolites is stable with respect to reduction by the other alkali metals. Even though the lithium cations are not directly involved in chemical interactions with the excess electrons introduced in the doping process, the corresponding (7)Li NMR spectra are extremely sensitive to paramagnetic species that are located inside the zeolite cavities. This sensitivity makes (7)Li NMR a useful probe to study the formation, distribution, and transformation of such species.

  17. Swarm Verification

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex

    2008-01-01

    Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial "search the needle in a haystack" type. Such problems can often be parallelized easily. Alas, none of the usual divide and conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.
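
    A minimal sketch of the swarm idea (Python; 'model.initial', 'model.successors' and 'model.is_error' are placeholder APIs, not a real model checker's interface): many independent, cheaply diversified searches run in parallel, and the first one to find an error stops the rest.

      import multiprocessing as mp
      import random

      def member(seed, model, bug_found):
          # One swarm member: a depth-first search that visits successors
          # in a seed-dependent random order, so each member tends to
          # cover a different region of the state space.
          rng = random.Random(seed)
          stack, seen = [model.initial], {model.initial}
          while stack and not bug_found.is_set():
              state = stack.pop()
              if model.is_error(state):
                  bug_found.set()          # first counterexample stops all
                  return
              succ = list(model.successors(state))
              rng.shuffle(succ)            # the diversification step
              for s in succ:
                  if s not in seen:
                      seen.add(s)
                      stack.append(s)

      def swarm_verify(model, n_members=8):
          bug_found = mp.Event()
          members = [mp.Process(target=member, args=(i, model, bug_found))
                     for i in range(n_members)]
          for m in members:
              m.start()
          for m in members:
              m.join()
          return bug_found.is_set()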

  18. A Novel Method to Incorporate the Spatial Location of the Lung Dose Distribution into Predictive Radiation Pneumonitis Modeling

    SciTech Connect

    Vinogradskiy, Yevgeniy; Tucker, Susan L.; Liao, Zhongxing; Martel, Mary K.

    2012-03-15

    Purpose: Studies have proposed that patients who receive radiation therapy to the base of the lung are more susceptible to radiation pneumonitis than patients who receive therapy to the apex of the lung. The primary purpose of the present study was to develop a novel method to incorporate the spatial information of the lung dose into a predictive radiation pneumonitis model. A secondary goal was to apply the method to a database of 547 lung cancer patients to determine whether including the spatial information could improve the fit of our model. Methods and Materials: The three-dimensional dose distribution of each patient was mapped onto one common coordinate system, whose boundaries were defined by the extreme points of each individual patient's lungs. Once all dose distributions were mapped onto the common coordinate system, the spatial information was incorporated into a Lyman-Kutcher-Burman predictive radiation pneumonitis model. Specifically, the lung dose voxels were weighted using a user-defined spatial weighting matrix. We investigated spatial weighting matrices that linearly scaled each dose voxel along the following orientations: superior-inferior, anterior-posterior, medial-lateral, left-right, and radial. The model parameters were fit to our patient cohort with the endpoint of severe radiation pneumonitis. The spatial dose model was compared against a conventional dose-volume model to determine whether adding a spatial component improved the fit of the model. Results: Of the 547 patients analyzed, 111 (20.3%) experienced severe radiation pneumonitis. Adding a spatial parameter did not significantly increase the accuracy of the model for any of the weighting schemes. Conclusions: A novel method was developed to investigate the relationship between the location of the deposited lung dose and the pneumonitis rate. The method was applied to a patient database, and we found that for our patient cohort, the spatial location does not influence
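
    A sketch of the spatially weighted Lyman-Kutcher-Burman reduction described above (Python/scipy; parameter values and the example weighting ramp are placeholders, not the paper's fitted values): each lung dose voxel is scaled by a position-dependent weight before the generalised-EUD reduction and the probit NTCP step.

      import numpy as np
      from scipy.stats import norm

      def lkb_ntcp(dose_gy, weights, n=1.0, td50=30.0, m=0.35):
          # Spatially weighted LKB: weight the voxel doses, reduce them to
          # a generalised EUD (equal voxel volumes assumed), then map to
          # an NTCP via the probit function.
          d = dose_gy * weights
          geud = np.mean(d ** (1.0 / n)) ** n
          return norm.cdf((geud - td50) / (m * td50))

      # e.g. a linear superior->inferior ramp (hypothetical weighting):
      # z = np.linspace(0.0, 1.0, dose_gy.size); weights = 0.5 + z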

  19. Implementation of a novel double-side technique for partial discharge detection and location in covered conductor overhead distribution networks

    NASA Astrophysics Data System (ADS)

    He, Weisheng; Li, Hongjie; Liang, Deliang; Sun, Haojie; Yang, Chenbo; Wei, Jinqu; Yuan, Zhijian

    2015-12-01

    Partial discharge (PD) detection has proven to be one of the most accepted techniques for on-line condition monitoring and predictive maintenance of power apparatus. A powerful tool for detecting PD in covered-conductor (CC) lines is urgently needed to improve the asset management of CC overhead distribution lines. In this paper, an appropriate, portable and simple system designed to detect PD activity in CC lines and ultimately pinpoint the PD source is developed and tested. The system is based on a novel double-side synchronised PD measurement technique driven by pulse injection. Emphasis is placed on the proposed PD-location mechanism and hardware structure, with descriptions of the pulse-injection process, detection device, synchronisation principle and PD-location algorithm. The system is simulated using ATP-EMTP, and the simulation results are consistent with the designed layout. For further validation, the capability of the system is tested in a high-voltage laboratory experiment using a 10-kV CC line with cross-linked polyethylene insulation.
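
    The double-side principle reduces, at its core, to a two-ended arrival-time difference. A minimal sketch under stated assumptions (the propagation velocity below is an illustrative placeholder, not a value from the paper):

        # Two-ended (double-side) PD location from synchronized arrival times.
        def locate_pd(t_a_us, t_b_us, length_m, v_m_per_us=180.0):
            """Distance (m) of the PD source from end A.
            t_a_us, t_b_us: pulse arrival times (microseconds) at ends A, B;
            v_m_per_us: pulse propagation velocity (illustrative value)."""
            # t_a = x/v and t_b = (L - x)/v  =>  x = (L + v*(t_a - t_b)) / 2
            return 0.5 * (length_m + v_m_per_us * (t_a_us - t_b_us))

        # A pulse arriving 1 us earlier at end A lies closer to A: 410.0 m.
        print(locate_pd(t_a_us=1.0, t_b_us=2.0, length_m=1000.0))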

  20. Pediatricians' practice location choice-Evaluating the effect of Japan's 2004 postgraduate training program on the spatial distribution of pediatricians.

    PubMed

    Sakai, Rie; Fink, Günther; Kawachi, Ichiro

    2014-01-01

    To explore the determinants of change in pediatrician supply in Japan, and to examine the impact of the 2004 reform of postgraduate medical education on pediatricians' practice location choice. Data were compiled from secondary data sources. The dependent variable was the change in the number of pediatricians at the municipality ("secondary tier of medical care" [STM]) level. To analyze the determinants of pediatrician location choices, we considered the following predictors: initial ratio of pediatricians per 1000 children under five years of age (pediatrician density) and under-5 mortality as measures of local area need, as well as measures of residential quality. Ordinary least-squares regression models were used to estimate the associations. A coefficient equality test was performed to examine differences in predictors before and after 2004. Basic comparisons of pediatrician coverage in the top and bottom 10% of STMs were conducted to assess inequality in pediatrician supply. Increased supply was inversely associated with baseline pediatrician density in both the pre- and post-periods. The estimated impact of pediatrician density declined over time (P = 0.026), while opposite trends were observed for measures of residential quality. More specifically, urban centers and the SES composite index were positively associated with pediatrician supply in the post-period, but no such associations were found for the pre-period. Inequality in pediatrician distribution increased substantially after the reform, with the best-served 10% of communities benefitting from five times the pediatrician coverage of the least-served 10%. After the reform, practice location choice increasingly reflected residential quality rather than public health needs. New placement schemes should be developed to achieve more equity in access to pediatric care.
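
    A sketch of the kind of pre/post coefficient-equality (Chow-type) test the study describes, using hypothetical column and file names; the formula terms stand in for the predictors listed above.

        # Chow-type test: are the OLS coefficients equal across the pre- and
        # post-2004 periods? Data file and column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf
        from scipy import stats

        df = pd.read_csv("stm_panel.csv")   # hypothetical municipality data
        formula = ("delta_pediatricians ~ baseline_density"
                   " + under5_mortality + urban_center")

        pooled = smf.ols(formula, data=df).fit()
        pre = smf.ols(formula, data=df[df.post2004 == 0]).fit()
        post = smf.ols(formula, data=df[df.post2004 == 1]).fit()

        k = len(pooled.params)                       # parameters per model
        num = (pooled.ssr - (pre.ssr + post.ssr)) / k
        den = (pre.ssr + post.ssr) / (pre.nobs + post.nobs - 2 * k)
        F = num / den
        p = stats.f.sf(F, k, int(pre.nobs + post.nobs - 2 * k))
        print("Chow F =", F, "p =", p)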

  1. Phase Velocity and Full-Waveform Analysis of Co-located Distributed Acoustic Sensing (DAS) Channels and Geophone Sensor

    NASA Astrophysics Data System (ADS)

    Parker, L.; Mellors, R. J.; Thurber, C. H.; Wang, H. F.; Zeng, X.

    2015-12-01

    A 762-meter Distributed Acoustic Sensing (DAS) array with a channel spacing of one meter was deployed at the Garner Valley Downhole Array in Southern California. The array was approximately rectangular with dimensions of 180 meters by 80 meters. The array also included two subdiagonals within the rectangle along which three-component geophones were co-located. Several active sources were deployed, including a 45-kN, swept-frequency, shear-mass shaker, which produced strong Rayleigh waves across the array. Both DAS and geophone traces were filtered in 2-Hz steps between 4 and 20 Hz to obtain phase velocities as a function of frequency from fitting the moveout of travel times over distances of 35 meters or longer. As an alternative to this traditional means of finding phase velocity, it is theoretically possible to find the Rayleigh-wave phase velocity at each point of co-location as the ratio of DAS and geophone responses, because DAS is sensitive to ground strain and geophones are sensitive to ground velocity, after suitable corrections for instrument response (Mikumo & Aki, 1964). The concept was tested in WPP, a seismic wave propagation program, by first validating and then using a 3D synthetic, full-waveform seismic model to simulate the effect of increased levels of noise and uncertainty as data go from ideal to more realistic. The results obtained from this study provide a better understanding of the DAS response and its potential for being combined with traditional seismometers for obtaining phase velocity at a single location. This analysis is part of the PoroTomo project (Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology, http://geoscience.wisc.edu/feigl/porotomo).
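
    A minimal sketch of the spectral-ratio estimate being tested: for a plane wave, axial strain equals minus particle velocity divided by phase velocity, so c(f) can be estimated as |V(f)/E(f)| from co-located records. The traces below are toy noise, and instrument-response corrections are assumed to have been applied already.

        # Phase velocity from the strain/velocity spectral ratio of a
        # co-located DAS channel and geophone (transfer-function estimate).
        import numpy as np
        from scipy.signal import csd, welch

        fs = 500.0                                  # sampling rate (Hz), toy value
        rng = np.random.default_rng(0)
        geophone_v = rng.standard_normal(2**14)     # particle velocity (m/s), toy
        das_strain = rng.standard_normal(2**14)     # axial strain, toy

        f, s_ev = csd(das_strain, geophone_v, fs=fs, nperseg=2048)
        f, s_ee = welch(das_strain, fs=fs, nperseg=2048)
        c = np.abs(s_ev / s_ee)                     # |V/E| = phase velocity (m/s)
        band = (f >= 4) & (f <= 20)                 # band analyzed in the study
        print(np.column_stack([f[band], c[band]]))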

  2. Frequency Distribution of Second Solid Cancer Locations in Relation to the Irradiated Volume Among 115 Patients Treated for Childhood Cancer

    SciTech Connect

    Diallo, Ibrahima; Haddy, Nadia; Adjadj, Elisabeth; Samand, Akhtar; Quiniou, Eric; Chavaudra, Jean; Alziar, Iannis; Perret, Nathalie; Guerin, Sylvie; Lefkopoulos, Dimitri; Vathaire, Florent de

    2009-07-01

    Purpose: To provide better estimates of the frequency distribution of second malignant neoplasm (SMN) sites in relation to previous irradiated volumes, and better estimates of the doses delivered to these sites during radiotherapy (RT) of the first malignant neoplasm (FMN). Methods and Materials: The study focused on 115 patients who developed a solid SMN among a cohort of 4581 individuals. The homemade software package Dos_EG was used to estimate the radiation doses delivered to SMN sites during RT of the FMN. Three-dimensional geometry was used to evaluate the distances between the irradiated volume, for RT delivered to each FMN, and the site of the subsequent SMN. Results: The spatial distribution of SMN relative to the irradiated volumes in our cohort was as follows: 12% in the central area of the irradiated volume, which corresponds to the planning target volume (PTV), 66% in the beam-bordering region (i.e., the area surrounding the PTV), and 22% in regions located more than 5 cm from the irradiated volume. At the SMN site, all dose levels ranging from almost zero to >75 Gy were represented. A peak SMN frequency of approximately 31% was identified in volumes that received <2.5 Gy. Conclusion: A greater volume of tissues receives low or intermediate doses in regions bordering the irradiated volume with modern multiple-beam RT arrangements. These results should be considered for risk-benefit evaluations of RT.

  3. Modified Kolmogorov-Smirnov, Anderson-Darling, and Cramer-Von Mises Tests for the Pareto Distribution with Unknown Location and Scale Parameters.

    DTIC Science & Technology

    1985-12-01

    This thesis develops modified Kolmogorov-Smirnov, Anderson-Darling, and Cramér-von Mises goodness-of-fit tests for the Pareto distribution with unknown location and scale parameters. The distribution is named after Vilfredo Pareto (1848-1923), a Swiss professor of economics who conducted the first extensive studies of income distribution. Thesis by James E. Porter III, Captain, USAF (AFIT/GSO/MA/85D-6), presented to the Faculty of the School of Engineering of the Air Force Institute of Technology; approved for public release.
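
    A sketch of why modified tables are needed and how comparable critical values can be reproduced today: when location and scale are estimated from the sample, the standard Kolmogorov-Smirnov tables are invalid, so the null distribution of the statistic is generated here by parametric bootstrap. The shape parameter and sample sizes are illustrative.

        # KS test for a Pareto null with estimated location and scale;
        # p-value by parametric bootstrap (Lilliefors-style correction).
        import numpy as np
        from scipy import stats

        def ks_pareto(x, b=2.0, n_boot=500, seed=0):
            """KS test of H0: x ~ Pareto(known shape b, unknown loc/scale)."""
            rng = np.random.default_rng(seed)
            _, loc, scale = stats.pareto.fit(x, fb=b)     # estimate loc, scale
            d_obs = stats.kstest(x, stats.pareto(b, loc, scale).cdf).statistic
            d_boot = np.empty(n_boot)
            for i in range(n_boot):                       # re-fit each resample
                y = stats.pareto.rvs(b, loc, scale, size=len(x), random_state=rng)
                _, lo_i, sc_i = stats.pareto.fit(y, fb=b)
                d_boot[i] = stats.kstest(y, stats.pareto(b, lo_i, sc_i).cdf).statistic
            return d_obs, float(np.mean(d_boot >= d_obs))

        x = stats.pareto.rvs(2.0, loc=1.0, scale=3.0, size=200, random_state=42)
        print(ks_pareto(x))   # (KS statistic, bootstrap p-value)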

  4. Physician Location Selection and Distribution. A Bibliography of Relevant Articles, Reports and Data Sources. Health Manpower Policy Discussion Paper Series No. D3.

    ERIC Educational Resources Information Center

    Crane, Stephen C.; Reynolds, Juanita

    This bibliography provides background material on two general issues of how physicians are distributed geographically and how physicians choose a practice location. The report is divided into five major categories of information: overview summary of annotated articles, reference key to location decision factors, reference key to public policy…

  5. Atmospheric aerosols size distribution properties in winter and pre-monsoon over western Indian Thar Desert location

    NASA Astrophysics Data System (ADS)

    Panwar, Chhagan; Vyas, B. M.

    2016-05-01

    The first-ever experimental results over the Indian Thar Desert region concerning the height-integrated aerosol size distribution function for particle sizes ranging between 0.09 and 2 µm, such as the aerosol columnar size distribution (CSD), effective radius (Reff), integrated content of total aerosols (Nt), and columnar content of accumulation- and coarse-mode aerosol particle concentrations (Na) (size < 0.5 µm) and (Nc) (size between 0.5 and 2 µm), are described, specifically during winter (a period of stable weather conditions and intense anthropogenic pollution activity) and the pre-monsoon (a period of intense dust storms of natural mineral aerosols and unstable atmospheric weather conditions) at Jaisalmer (26.90°N, 69.90°E, 220 m above sea level (asl)), located in the central Thar Desert in the vicinity of the western Indian site. The CSD and the other derived aerosol size parameters are retrieved from the average spectral characteristics of the Aerosol Optical Thickness (AOT) over the UV to infrared wavelength spectrum, measured with a Multi-Wavelength solar Radiometer (MWR). The CSD is, in general, bimodal in character rather than uniformly distributed or following a power-law distribution. The observed primary peaks in the CSD plots are around 10¹³ m² μm⁻¹ in the radius range 0.09-0.20 µm during both seasons. In the winter months, secondary peaks of relatively lower CSD values of 10¹⁰ to 10¹¹ m² μm⁻¹ occur within a lower radius size range of 0.4 to 0.6 µm. In contrast, in the dust-dominated hot season, a dominant secondary maximum of higher CSD of about 10¹² m² μm⁻³ is found for bigger aerosol particles in the range of 0.6 to 1.0 µm, clearly demonstrating a higher loading of larger aerosols in the summer months relative to the lower loading of smaller aerosol particles (0.4 to 0.6 µm) in the cold months. Several other interesting features of the changing nature of the monthly spectral AOT

  6. Atmospheric aerosols size distribution properties in winter and pre-monsoon over western Indian Thar Desert location

    SciTech Connect

    Panwar, Chhagan; Vyas, B. M.

    2016-05-06

    The first-ever experimental results over the Indian Thar Desert region concerning the height-integrated aerosol size distribution function for particle sizes ranging between 0.09 and 2 µm, such as the aerosol columnar size distribution (CSD), effective radius (Reff), integrated content of total aerosols (Nt), and columnar content of accumulation- and coarse-mode aerosol particle concentrations (Na) (size < 0.5 µm) and (Nc) (size between 0.5 and 2 µm), are described, specifically during winter (a period of stable weather conditions and intense anthropogenic pollution activity) and the pre-monsoon (a period of intense dust storms of natural mineral aerosols and unstable atmospheric weather conditions) at Jaisalmer (26.90°N, 69.90°E, 220 m above sea level (asl)), located in the central Thar Desert in the vicinity of the western Indian site. The CSD and the other derived aerosol size parameters are retrieved from the average spectral characteristics of the Aerosol Optical Thickness (AOT) over the UV to infrared wavelength spectrum, measured with a Multi-Wavelength solar Radiometer (MWR). The CSD is, in general, bimodal in character rather than uniformly distributed or following a power-law distribution. The observed primary peaks in the CSD plots are around 10¹³ m² μm⁻¹ in the radius range 0.09-0.20 µm during both seasons. In the winter months, secondary peaks of relatively lower CSD values of 10¹⁰ to 10¹¹ m² μm⁻¹ occur within a lower radius size range of 0.4 to 0.6 µm. In contrast, in the dust-dominated hot season, a dominant secondary maximum of higher CSD of about 10¹² m² μm⁻³ is found for bigger aerosol particles in the range of 0.6 to 1.0 µm, clearly demonstrating a higher loading of larger aerosols in the summer months relative to the lower loading of smaller aerosol particles (0.4 to 0.6 µm) in the cold months.

  7. Simple Syringe Filtration Methods for Reliably Examining Dissolved and Colloidal Trace Element Distributions in Remote Field Locations

    NASA Astrophysics Data System (ADS)

    Shiller, A. M.

    2002-12-01

    Methods for obtaining reliable dissolved trace element samples frequently utilize clean labs, portable laminar flow benches, or other equipment not readily transportable to remote locations. In some cases unfiltered samples can be obtained in a remote location and transported back to a lab for filtration. However, this may not always be possible or desirable. Methods for obtaining information on colloidal composition are likewise frequently too cumbersome for remote locations, as well as time-consuming. For that reason I have examined clean methods for collecting samples filtered through 0.45 and 0.02 micron syringe filters. With this methodology, only small samples are collected (typically 15 mL). However, with the introduction of the latest generation of ICP-MSs and microflow nebulizers, sample requirements for elemental analysis are much lower than just a few years ago. Thus, a determination of a suite of first-row transition elements is frequently readily obtainable with samples of less than 1 mL. To examine the "traditional" (<0.45 micron) dissolved phase, 25 mm diameter polypropylene syringe filters and all polyethylene/polypropylene syringes are utilized. Filters are pre-cleaned in the lab using 40 mL of approx. 1 M HCl followed by a clean water rinse. Syringes are pre-cleaned by leaching with hot 1 M HCl followed by a clean water rinse. Sample kits are packed in polyethylene bags for transport to the field. Results are similar to those obtained using 0.4 micron polycarbonate screen filters, though concentrations may differ somewhat depending on the extent of sample pre-rinsing of the filter. Using this method, a multi-year time series of dissolved metals in a remote Rocky Mountain stream has been obtained. To examine the effect of colloidal material on dissolved metal concentrations, 0.02 micron alumina syringe filters have been utilized. Other workers have previously used these filters for examining colloidal Fe distributions in lake

  8. NW Indian Ocean crustal thickness, micro-continent distribution and ocean-continent transition location from satellite gravity inversion

    NASA Astrophysics Data System (ADS)

    Kusznir, N. J.; Tymms, V.

    2009-04-01

    Satellite gravity anomaly inversion incorporating a lithosphere thermal gravity anomaly correction has been used to determine Moho depth, crustal thickness and lithosphere thinning factor for the NW Indian Ocean and to map ocean-continent transition location (OCT) and micro-continent distribution. Input data is satellite gravity (Sandwell & Smith 1997) and digital bathymetry (Gebco 2003). Crustal thicknesses predicted by gravity inversion under the Seychelles and Mascarenes are in excess of 30 km and form a single micro-continent extending southwards towards Mauritius. Thick crust (> 25 km) offshore SW India is predicted to extend oceanwards under the Lacadive and Maldive Islands and southwards under the Chagos Archipelago. Superposition of illuminated satellite gravity data onto crustal thickness maps from gravity inversion clearly shows pre-separation conjugacy of the thick crust underlying the Chagos and Mascarene Islands. Maps of crustal thickness from gravity inversion show a pronounced discontinuity in crustal thickness between Mauritius-Reunion and the Mascarene Basin which is of Late Cretaceous age and pre-dates recent plume volcanism. Gravity inversion to determine Moho depth and crustal thickness variation is carried out in the 3D spectral domain and incorporates a lithosphere thermal gravity anomaly correction for both oceanic and continental margin lithosphere (Chappell & Kusznir 2008). Failure to incorporate a lithosphere thermal gravity anomaly correction gives a substantial over-estimate of crustal thickness predicted by gravity inversion. The lithosphere thermal model used to predict the lithosphere thermal gravity anomaly correction may be conditioned using magnetic isochron data to provide the age of oceanic lithosphere (Mueller et al. 1997). The resulting crustal thickness determination and the location of the OCT are sensitive to errors in the magnetic isochron data. An alternative method of inverting satellite gravity to give crustal thickness
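
    A minimal sketch of a first-order spectral-domain Moho inversion of the general Oldenburg/Parker type; the density contrast, reference depth and low-pass cutoff are illustrative, and the lithosphere thermal gravity anomaly correction described above is assumed to have been subtracted from the input anomaly beforehand.

        # First-order spectral Moho inversion: downward-continue the residual
        # anomaly to a reference depth z0 and divide by 2*pi*G*drho.
        import numpy as np

        G = 6.674e-11                           # gravitational constant (SI)

        def invert_moho(grav, dx, z0=33e3, drho=500.0, kmax=2e-4):
            """grav: 2-D residual gravity anomaly (m/s^2); dx: grid step (m).
            Returns Moho relief (m) about the reference depth z0."""
            ny, nx = grav.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
            k = np.hypot(*np.meshgrid(kx, ky))
            H = np.fft.fft2(grav) * np.exp(k * z0) / (2 * np.pi * G * drho)
            H[k > kmax] = 0.0                   # low-pass: exp(k*z0) amplifies noise
            return np.real(np.fft.ifft2(H))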

  9. Verification Games: Crowd-Sourced Formal Verification

    DTIC Science & Technology

    2016-03-01

    Under this crowd-sourced formal verification effort, the verification tools developed by the Programming Languages and Software Engineering group were improved, and a series of games was produced. The growing dependence on software makes it imperative to find more effective and efficient mechanisms for improving software reliability. Formal verification is an important part of this effort, since it is the only way to be certain that a given piece of software is free of (certain types of) errors.

  10. Generic Verification Protocol for Verification of Online Turbidimeters

    EPA Science Inventory

    This protocol provides generic procedures for implementing a verification test for the performance of online turbidimeters. The verification tests described in this document will be conducted under the Environmental Technology Verification (ETV) Program. Verification tests will...

  11. The prevalence and distribution of gastrointestinal parasites of stray and refuge dogs in four locations in India.

    PubMed

    Traub, Rebecca J; Pednekar, Riddhi P; Cuttell, Leigh; Porter, Ronald B; Abd Megat Rani, Puteri Azaziah; Gatne, Mukulesh L

    2014-09-15

    A gastrointestinal parasite survey of 411 stray and refuge dogs sampled from four geographical and climatically distinct locations in India revealed these animals to represent a significant source of environmental contamination for parasites that pose a zoonotic risk to the public. Hookworms were the most commonly identified parasite in dogs in Sikkim (71.3%), Mumbai (48.8%) and Delhi (39.1%). In Ladakh, which experiences harsh extremes in climate, a competitive advantage was observed for parasites such as Sarcocystis spp. (44.2%), Taenia hydatigena (30.3%) and Echinococcus granulosus (2.3%) that utilise intermediate hosts for the completion of their life cycle. PCR identified Ancylostoma ceylanicum and Ancylostoma caninum to occur sympatrically, either as single or mixed infections in Sikkim (Northeast) and Mumbai (West). In Delhi, A. caninum was the only species identified in dogs, probably owing to its ability to evade unfavourable climatic conditions by undergoing arrested development in host tissue. The expansion of the known distribution of A. ceylanicum to the west, as far as Mumbai, justifies the renewed interest in this emerging zoonosis and advocates for its surveillance in future human parasite surveys. Of interest was the absence of Trichuris vulpis in dogs, in support of previous canine surveys in India. This study advocates the continuation of birth control programmes in stray dogs that will undoubtedly have spill-over effects on reducing the levels of environmental contamination with parasite stages. In particular, owners of pet animals exposed to these environments must be extra vigilant in ensuring their animals are regularly dewormed and maintaining strict standards of household and personal hygiene. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Columbus pressurized module verification

    NASA Technical Reports Server (NTRS)

    Messidoro, Piero; Comandatore, Emanuele

    1986-01-01

    The baseline verification approach of the COLUMBUS Pressurized Module was defined during the A and B1 project phases. Peculiarities of the verification program are the testing requirements derived from the permanent manned presence in space. The model philosophy and the test program have been developed in line with the overall verification concept. Such critical areas as meteoroid protections, heat pipe radiators and module seals are identified and tested. Verification problem areas are identified and recommendations for the next development are proposed.

  13. Verification of VLSI designs

    NASA Technical Reports Server (NTRS)

    Windley, P. J.

    1991-01-01

    In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic, as opposed to low-level boolean equivalence verification such as that done using BDDs and model checking. Specification and verification, sometimes called formal methods, is one tool for increasing computer dependability in the face of an exponentially increasing testing effort.
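
    For contrast with the logic-based approach, a toy illustration of low-level boolean equivalence checking: two invented gate-level implementations are compared over all input assignments. Real equivalence checkers use BDDs or SAT precisely because exhaustive enumeration only works for a handful of inputs.

        # Toy combinational equivalence check by exhaustive enumeration.
        from itertools import product

        def spec(a, b, c):          # specification: 1-bit majority function
            return (a & b) | (b & c) | (a & c)

        def impl(a, b, c):          # candidate gate-level implementation
            return (a & (b | c)) | (b & c)

        assert all(spec(*v) == impl(*v) for v in product((0, 1), repeat=3))
        print("equivalent on all 8 input vectors")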

  14. Verification of VENTSAR

    SciTech Connect

    Simpkins, A.A.

    1995-01-01

    The VENTSAR code is an upgraded and improved version of the VENTX code, which estimates concentrations on or near a building from a release at a nearby location. The code calculates the concentrations either for a given meteorological exceedance probability or for a given stability and wind speed combination. A single building can be modeled which lies in the path of the plume, or a penthouse can be added to the top of the building. Plume rise may also be considered. Release types can be either chemical or radioactive. Downwind concentrations are determined at user-specified incremental distances. This verification report was prepared to demonstrate that VENTSAR is properly executing all algorithms and transferring data. Hand calculations were also performed to ensure proper application of methodologies.
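
    A sketch of the kind of hand calculation used in such a verification: the standard ground-level Gaussian plume formula with ground reflection. The dispersion coefficients here are supplied directly as illustrative constants; this is not VENTSAR's building-wake or exceedance-probability model.

        # Ground-level centerline Gaussian plume concentration (with ground
        # reflection), the classic hand-check formula for dispersion codes.
        import numpy as np

        def plume_conc(Q, u, sigma_y, sigma_z, H=0.0):
            """chi = Q / (pi * u * sy * sz) * exp(-H^2 / (2 sz^2));
            Q: release rate (e.g., Bq/s), u: wind speed (m/s), H: release
            height (m); sy, sz are taken at the downwind distance of interest."""
            return Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-H**2 / (2 * sigma_z**2))

        # Example hand check: 1e6 Bq/s release, 3 m/s wind, 10 m release height.
        print(plume_conc(Q=1.0e6, u=3.0, sigma_y=8.0, sigma_z=5.0, H=10.0))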

  15. Requirements, Verification, and Compliance (RVC) Database Tool

    NASA Technical Reports Server (NTRS)

    Rainwater, Neil E., II; McDuffee, Patrick B.; Thomas, L. Dale

    2001-01-01

    This paper describes the development, design, and implementation of the Requirements, Verification, and Compliance (RVC) database used on the International Space Welding Experiment (ISWE) project managed at Marshall Space Flight Center. The RVC is a systems engineer's tool for automating and managing the following information: requirements; requirements traceability; verification requirements; verification planning; verification success criteria; and compliance status. This information normally contained within documents (e.g. specifications, plans) is contained in an electronic database that allows the project team members to access, query, and status the requirements, verification, and compliance information from their individual desktop computers. Using commercial-off-the-shelf (COTS) database software that contains networking capabilities, the RVC was developed not only with cost savings in mind but primarily for the purpose of providing a more efficient and effective automated method of maintaining and distributing the systems engineering information. In addition, the RVC approach provides the systems engineer the capability to develop and tailor various reports containing the requirements, verification, and compliance information that meets the needs of the project team members. The automated approach of the RVC for capturing and distributing the information improves the productivity of the systems engineer by allowing that person to concentrate more on the job of developing good requirements and verification programs and not on the effort of being a "document developer".
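
    A minimal sketch of the core tables such an RVC database might contain; the schema and the sample rows are hypothetical, inferred from the capabilities listed above rather than taken from the ISWE implementation.

        # Hypothetical RVC core schema: requirements with parent links for
        # traceability, plus verification entries carrying method, success
        # criteria, and compliance status.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE requirement (
            req_id    TEXT PRIMARY KEY,
            text      TEXT NOT NULL,
            parent_id TEXT REFERENCES requirement(req_id)   -- traceability
        );
        CREATE TABLE verification (
            ver_id   TEXT PRIMARY KEY,
            req_id   TEXT NOT NULL REFERENCES requirement(req_id),
            method   TEXT CHECK (method IN ('test','analysis','inspection','demo')),
            criteria TEXT,                                    -- success criteria
            status   TEXT DEFAULT 'open'                      -- compliance status
        );
        """)
        con.execute("INSERT INTO requirement VALUES ('REQ-1', 'example requirement', NULL)")
        con.execute("INSERT INTO verification VALUES ('V-1', 'REQ-1', 'test', 'example criteria', 'open')")
        for row in con.execute("SELECT r.req_id, v.method, v.status "
                               "FROM requirement r JOIN verification v USING (req_id)"):
            print(row)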

  16. Verification of Adaptive Systems

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui; Vassev, Emil; Hinchey, Mike; Rouff, Christopher; Buskens, Richard

    2012-01-01

    Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance for them. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach.

  17. Candida parapsilosis (sensu lato) isolated from hospitals located in the Southeast of Brazil: Species distribution, antifungal susceptibility and virulence attributes.

    PubMed

    Ziccardi, Mariangela; Souza, Lucieri O P; Gandra, Rafael M; Galdino, Anna Clara M; Baptista, Andréa R S; Nunes, Ana Paula F; Ribeiro, Mariceli A; Branquinha, Marta H; Santos, André L S

    2015-12-01

    Candida parapsilosis (sensu lato), which represents a fungal complex composed of three genetically related species - Candida parapsilosis sensu stricto, Candida orthopsilosis and Candida metapsilosis, has emerged as an important yeast causing fungemia worldwide. The goal of the present work was to assess the prevalence, antifungal susceptibility and production of virulence traits in 53 clinical isolates previously identified as C. parapsilosis (sensu lato) obtained from hospitals located in the Southeast of Brazil. Species forming this fungal complex are physiologically/morphologically indistinguishable; however, polymerase chain reaction followed by restriction fragment length polymorphism of FKS1 gene has solved the identification inaccuracy, revealing that 43 (81.1%) isolates were identified as C. parapsilosis sensu stricto and 10 (18.9%) as C. orthopsilosis. No C. metapsilosis was found. The geographic distribution of these Candida species was uniform among the studied Brazilian States (São Paulo, Rio de Janeiro and Espírito Santo). All C. orthopsilosis and almost all C. parapsilosis sensu stricto (95.3%) isolates were susceptible to amphotericin B, fluconazole, itraconazole, voriconazole and caspofungin. Nevertheless, one C. parapsilosis sensu stricto isolate was resistant to fluconazole and another one was resistant to caspofungin. C. parapsilosis sensu stricto isolates exhibited higher MIC mean values to amphotericin B, fluconazole and caspofungin than those of C. orthopsilosis, while C. orthopsilosis isolates displayed higher MIC mean to itraconazole compared to C. parapsilosis sensu stricto. Identical MIC mean values to voriconazole were measured for these Candida species. All the isolates of both species were able to form biofilm on polystyrene surface. Impressively, biofilm-growing cells of C. parapsilosis sensu stricto and C. orthopsilosis exhibited a considerable resistance to all antifungal agents tested. Pseudohyphae were observed in 67.4% and 80

  18. Shift Verification and Validation

    SciTech Connect

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G; Johnson, Seth R.; Godfrey, Andrew T.

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures, including prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.

  19. Spatiotemporal distribution of location and object effects in the electromyographic activity of upper extremity muscles during reach-to-grasp.

    PubMed

    Rouse, Adam G; Schieber, Marc H

    2016-06-01

    In reaching to grasp an object, proximal muscles that act on the shoulder and elbow classically have been viewed as transporting the hand to the intended location, while distal muscles that act on the fingers simultaneously shape the hand to grasp the object. Prior studies of electromyographic (EMG) activity in upper extremity muscles therefore have focused, by and large, either on proximal muscle activity during reaching to different locations or on distal muscle activity as the subject grasps various objects. Here, we examined the EMG activity of muscles from the shoulder to the hand, as monkeys reached and grasped in a task that dissociated location and object. We quantified the extent to which variation in the EMG activity of each muscle depended on location, on object, and on their interaction-all as a function of time. Although EMG variation depended on both location and object beginning early in the movement, an early phase of substantial location effects in muscles from proximal to distal was followed by a later phase in which object effects predominated throughout the extremity. Interaction effects remained relatively small. Our findings indicate that neural control of reach-to-grasp may occur largely in two sequential phases: the first, serving to project the entire upper extremity toward the intended location, and the second, acting predominantly to shape the entire extremity for grasping the object. Copyright © 2016 the American Physiological Society.
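
    A sketch of the time-resolved variance partitioning described, assuming a hypothetical trial table with columns emg, location, object and time bin t; a two-way ANOVA is fit per time bin and the sums of squares are expressed as variance fractions.

        # Per-time-bin two-way ANOVA: partition EMG variance into location,
        # object, interaction, and residual components. File and column
        # names are hypothetical.
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("emg_trials.csv")   # columns: emg, location, object, t
        for t, d in df.groupby("t"):         # one two-way ANOVA per time bin
            fit = smf.ols("emg ~ C(location) * C(object)", data=d).fit()
            ss = sm.stats.anova_lm(fit, typ=2)["sum_sq"]
            print(t, (ss / ss.sum()).round(3).to_dict())  # variance fractions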

  20. Spatiotemporal distribution of location and object effects in the electromyographic activity of upper extremity muscles during reach-to-grasp

    PubMed Central

    Rouse, Adam G.

    2016-01-01

    In reaching to grasp an object, proximal muscles that act on the shoulder and elbow classically have been viewed as transporting the hand to the intended location, while distal muscles that act on the fingers simultaneously shape the hand to grasp the object. Prior studies of electromyographic (EMG) activity in upper extremity muscles therefore have focused, by and large, either on proximal muscle activity during reaching to different locations or on distal muscle activity as the subject grasps various objects. Here, we examined the EMG activity of muscles from the shoulder to the hand, as monkeys reached and grasped in a task that dissociated location and object. We quantified the extent to which variation in the EMG activity of each muscle depended on location, on object, and on their interaction—all as a function of time. Although EMG variation depended on both location and object beginning early in the movement, an early phase of substantial location effects in muscles from proximal to distal was followed by a later phase in which object effects predominated throughout the extremity. Interaction effects remained relatively small. Our findings indicate that neural control of reach-to-grasp may occur largely in two sequential phases: the first, serving to project the entire upper extremity toward the intended location, and the second, acting predominantly to shape the entire extremity for grasping the object. PMID:27009156

  1. Structural System Identification Technology Verification

    DTIC Science & Technology

    1981-11-01

    USAAVRADCOM-TR-81-D-28 (AD-A109181), Structural System Identification Technology Verification, by N. Giansante, A. Berman, W. O. Flannelly, et al. Approved for public release; distribution unlimited. Prepared for the Applied Technology Laboratory, U.S. Army Research and Technology Laboratories (AVRADCOM), Fort Eustis, Va. 23604. Applied Technology Laboratory position statement: the Applied Technology Laboratory has been involved in the development of the structural system identification technology.

  2. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2017-07-01

    Nowadays, organizations have to compete with different competitors at the regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system which can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Different cost scenarios were also designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS failed to reach an optimal solution even within much longer times.
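
    A minimal sketch of a two-part chromosome of the kind proposed: one segment encodes the transport mode on each leg of the multimodal route, the other a permutation of retailers to be split into delivery tours. The encoding and operators are illustrative, not the authors' exact design.

        # Two-part chromosome: mode choices per leg + retailer tour permutation.
        import random

        N_LEGS, N_MODES, N_RETAILERS = 4, 3, 10   # toy problem dimensions

        def random_chromosome():
            modes = [random.randrange(N_MODES) for _ in range(N_LEGS)]  # part 1
            tour = random.sample(range(N_RETAILERS), N_RETAILERS)       # part 2
            return modes, tour

        def crossover(p1, p2):
            cut = random.randrange(1, N_LEGS)
            modes = p1[0][:cut] + p2[0][cut:]        # one-point crossover, part 1
            head = p1[1][:N_RETAILERS // 2]          # order crossover, part 2
            tour = head + [r for r in p2[1] if r not in head]
            return modes, tour

        print(crossover(random_chromosome(), random_chromosome()))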

  3. Characteristic location and growth patterns of functioning pituitary adenomas: correlation with histological distribution of hormone-secreting cells in the pituitary gland.

    PubMed

    Baik, Jun Seung; Lee, Mi Hyun; Ahn, Kook-Jin; Choi, Hyun Seok; Jung, So Lyung; Kim, Bum-Soo; Jeun, Sin Soo; Hong, Yong-Kil

    2015-01-01

    To evaluate the correlation between the magnetic resonance imaging findings of functional pituitary adenomas and the histological distribution of hormone-secreting cells in the pituitary gland. Forty-nine patients with pathologically confirmed functional micro- and macro-pituitary adenomas were retrospectively reviewed for tumor location and growth direction. Micro-prolactin, micro-adrenocorticotropic hormone (ACTH), and micro-growth hormone (GH) producing adenomas showed specific locations (P < .01). Macro-GH and macro-thyroid-stimulating hormone producing adenomas showed specific growth directions (P < .05), whereas macro-prolactin and macro-ACTH producing adenomas did not. The location of functional pituitary microadenomas and the growth pattern of macroadenomas correlate well with the histological distribution of hormone-secreting cells in the pituitary gland. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Environmental Technology Verification Report - Electric Power and Heat Production Using Renewable Biogas at Patterson Farms

    EPA Science Inventory

    The U.S. EPA operates the Environmental Technology Verification program to facilitate the deployment of innovative technologies through performance verification and information dissemination. A technology area of interest is distributed electrical power generation, particularly w...

  6. Light dose verification for pleural PDT.

    PubMed

    Sandell, Julia L; Liang, Xing; Zhu, Timothy

    2012-02-13

    The ability to deliver a uniform light dose in photodynamic therapy (PDT) is critical to treatment efficacy. The current protocol in pleural photodynamic therapy uses 7 isotropic detectors placed at discrete locations within the pleural cavity to monitor light dose throughout treatment. While effort is made to place the detectors uniformly through the cavity, the measurements do not provide an overall assessment of the delivered dose. A real-time infrared (IR) tracking camera is in development to better deliver and monitor a more uniform light distribution during treatment. It has been shown previously that there is good agreement between fluence calculated using IR tracking data and isotropic detector measurements for direct-light phantom experiments. This work presents the results of an extensive phantom study that uses variable, patient-like geometries and optical properties (both absorption and scattering). Position data are collected from the IR navigation system while light distribution measurements are made concurrently using the aforementioned isotropic detectors. These measurements are compared to fluence calculations made using data from the IR navigation system to verify that our light distribution model is correct and applicable in patient-like settings. The verification of this treatment planning technique is an important step in bringing real-time fluence monitoring into the clinic for more effective treatment.
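
    A minimal sketch of the direct-light fluence calculation against which detector readings can be compared: integrate the inverse-square contribution of the tracked point source over treatment time. Scattered light, which matters in a real cavity, is deliberately omitted, and the tracking data are toy values.

        # Cumulative direct-light fluence at a fixed detector from a tracked
        # moving point source (inverse-square law only; no scattering).
        import numpy as np

        def direct_fluence(src_xyz_cm, det_xyz_cm, power_w, dt_s):
            """src_xyz_cm: (T, 3) tracked source positions (cm);
            det_xyz_cm: (3,) detector position; returns fluence in J/cm^2."""
            r2 = np.sum((src_xyz_cm - det_xyz_cm) ** 2, axis=1)   # cm^2
            return float(np.sum(power_w / (4.0 * np.pi * r2) * dt_s))

        track = np.random.uniform(-5, 5, (1000, 3))   # toy IR tracking positions
        print(direct_fluence(track, np.array([6.0, 0.0, 0.0]), power_w=2.0, dt_s=0.1))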

  7. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  8. Verification in Interlibrary Loan: A Key to Success?

    ERIC Educational Resources Information Center

    Everett, David

    1987-01-01

    A study of requests for photocopies of journal articles through interlibrary loan at Colgate University over a period of one month suggests that, if a borrowing library wants quick service with a high fill rate and little imposition upon the lender, location verification is more important than citation verification. (MES)

  9. Software verification and testing

    NASA Technical Reports Server (NTRS)

    1985-01-01

    General procedures for software verification and validation are provided as a guide for managers, programmers, and analysts involved in software development. The verification and validation procedures described are based primarily on testing techniques. Testing refers to the execution of all or part of a software system for the purpose of detecting errors. Planning, execution, and analysis of tests are outlined in this document. Code reading and static analysis techniques for software verification are also described.

  10. Gate-Level Commercial Microelectronics Verification with Standard Cell Recognition

    DTIC Science & Technology

    2015-03-26

    Gate-Level Commercial Microelectronics Verification with Standard Cell Recognition. Thesis by Leleia A. Hsia, Second Lieutenant, USAF (AFIT-ENG-MS-15-M-069). The work is that of the U.S. Government and is not subject to copyright protection in the United States. Approved for public release; distribution unlimited.

  11. Reliable Entanglement Verification

    NASA Astrophysics Data System (ADS)

    Arrazola, Juan; Gittsovich, Oleg; Donohue, John; Lavoie, Jonathan; Resch, Kevin; Lütkenhaus, Norbert

    2013-05-01

    Entanglement plays a central role in quantum protocols. It is therefore important to be able to verify the presence of entanglement in physical systems from experimental data. In the evaluation of these data, the proper treatment of statistical effects requires special attention, as one can never claim to have verified the presence of entanglement with certainty. Recently, increased attention has been paid to the development of proper frameworks to pose and to answer these types of questions. In this work, we apply recent results by Christandl and Renner on reliable quantum state tomography to construct a reliable entanglement verification procedure based on the concept of confidence regions. The statements made do not require the specification of a prior distribution nor the assumption of an independent and identically distributed (i.i.d.) source of states. Moreover, we develop efficient numerical tools that are necessary to employ this approach in practice, rendering the procedure ready to be employed in current experiments. We demonstrate this fact by analyzing the data of an experiment where photonic entangled two-photon states were generated and whose entanglement is verified with the use of an accessible nonlinear witness.

  12. Distribution of Foraminifera in the Core Samples of Kollidam and Marakanam Mangrove Locations, Tamil Nadu, Southeast Coast of India

    NASA Astrophysics Data System (ADS)

    Nowshath, M.

    2013-05-01

    In order to study the distribution of Foraminifera in the subsurface sediments of a mangrove environment, two core samples were collected with a PVC corer: i) near the boating house at Pitchavaram, from the Kollidam estuary (C1), and ii) from the backwaters of Marakanam (C2). A total of 25 samples from both cores were obtained, and they were subjected to standard micropaleontological and sedimentological analyses for the evaluation of different sediment characteristics. Core C1 (Pitchavaram) yielded only foraminifera; for core C2 (Marakanam), only the down-core distribution of foraminifera is discussed. The widely utilized classification proposed by Loeblich and Tappan (1987) has been followed in the present study for foraminiferal taxonomy, and accordingly 23 foraminiferal species belonging to 18 genera, 10 families, 8 superfamilies and 4 suborders have been reported and illustrated. The foraminiferal species recorded are characteristic of shallow inner-shelf to marginal marine settings and are tropical in nature. Sedimentological parameters such as CaCO3, organic matter and the sand-silt-clay ratio were estimated, and their down-core distribution is discussed. An attempt has been made to evaluate the most favourable substrate for foraminiferal population abundance in the present area of study. From the overall distribution of foraminifera in the different samples of the Kollidam estuary (Pitchavaram area) and the Marakanam estuary, it is observed that silty sand and sandy silt, respectively, are the most accommodative substrates for foraminiferal populations. The distribution of foraminifera in the core samples indicates that the sediments were deposited under normal, oxygenated environmental conditions.

  13. Earthquake Forecasting, Validation and Verification

    NASA Astrophysics Data System (ADS)

    Rundle, J.; Holliday, J.; Turcotte, D.; Donnellan, A.; Tiampo, K.; Klein, B.

    2009-05-01

    Techniques for earthquake forecasting are in development using both seismicity data mining methods and numerical simulations. The former rely on the development of methods to recognize patterns in data, while the latter rely on the use of dynamical models that attempt to faithfully replicate the actual fault systems. Testing such forecasts is necessary not only to determine forecast quality, but also to improve forecasts. A large number of techniques to validate and verify forecasts have been developed for weather and financial applications, and many of these have been elaborated in publicly accessible locations. Typically, the goal is to test for forecast resolution, reliability and sharpness. A good forecast is characterized by consistency, quality and value. Most, if not all, of these forecast verification procedures can be readily applied to earthquake forecasts as well. In this talk, we discuss both methods of forecasting, as well as validation and verification using a number of these standard methods. We show how these test methods might be useful for both fault-based forecasting, a group of forecast methods that includes the WGCEP and simulator-based renewal models, and grid-based forecasting, which includes the Relative Intensity, Pattern Informatics, and smoothed seismicity methods. We find that applying these standard methods of forecast verification is straightforward. Judgments about the quality of a given forecast method can often depend on the test applied, as well as on the preconceptions and biases of the persons conducting the tests.
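
    Two of the standard verification measures mentioned, reliability and overall probabilistic accuracy, are easy to state concretely for gridded forecasts; the sketch below computes a Brier score and a crude reliability table on toy data.

        # Brier score and reliability table for a gridded probabilistic
        # forecast of binary events (toy data throughout).
        import numpy as np

        p = np.random.uniform(0, 0.2, 5000)        # forecast probability per cell
        y = np.random.uniform(0, 1, 5000) < p      # toy "observed" occurrences

        print("Brier score:", np.mean((p - y) ** 2))   # lower is better

        bins = np.linspace(0, 0.2, 6)
        for lo, hi in zip(bins[:-1], bins[1:]):    # reliability: p vs frequency
            sel = (p >= lo) & (p < hi)
            if sel.any():
                print(f"[{lo:.2f},{hi:.2f}): forecast {p[sel].mean():.3f} "
                      f"observed {y[sel].mean():.3f}")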

  14. Consolidated Site (CS) 022 Verification Survey at Former McClellan AFB, Sacramento, California

    DTIC Science & Technology

    2015-03-31

    The USAFSAM/OEC Radiation Health Consulting Branch performed an independent radiological verification survey of the CS 022 hazardous waste site, located on former McClellan AFB, California, on 29-31 July 2014. In the late 1960s the site was used as a burial pit for industrial waste, solvents, and ash from an industrial incinerator located near the site.

  15. Deterministic walks with inverse-square power-law scaling are an emergent property of predators that use chemotaxis to locate randomly distributed prey.

    PubMed

    Reynolds, A M

    2008-07-01

    The results of numerical simulations indicate that deterministic walks with inverse-square power-law scaling are a robust emergent property of predators that use chemotaxis to locate randomly and sparsely distributed stationary prey items. It is suggested that chemotactic destructive foraging accounts for the apparent Lévy flight movement patterns of Oxyrrhis marina microzooplankton in still water containing prey items. This challenges the view that these organisms are executing an innate optimal Lévy flight searching strategy. Crucial for the emergence of inverse-square power-law scaling is the tendency of chemotaxis to occasionally cause predators to miss the nearest prey item, an occurrence which would not arise if prey were located through the employment of a reliable cognitive map or if prey location were visually cued and perfect.

  16. Proton Therapy Verification with PET Imaging

    PubMed Central

    Zhu, Xuping; Fakhri, Georges El

    2013-01-01

    Proton therapy is very sensitive to uncertainties introduced during treatment planning and dose delivery. PET imaging of proton induced positron emitter distributions is the only practical approach for in vivo, in situ verification of proton therapy. This article reviews the current status of proton therapy verification with PET imaging. The different data detecting systems (in-beam, in-room and off-line PET), calculation methods for the prediction of proton induced PET activity distributions, and approaches for data evaluation are discussed. PMID:24312147

  17. Approaches to wind resource verification

    NASA Technical Reports Server (NTRS)

    Barchet, W. R.

    1982-01-01

    Verification of the regional wind energy resource assessments produced by the Pacific Northwest Laboratory addresses the question: Is the magnitude of the resource given in the assessments truly representative of the area of interest? Approaches using qualitative indicators of wind speed (tree deformation, eolian features), old and new data of opportunity not at sites specifically chosen for their exposure to the wind, and data by design from locations specifically selected to be good wind sites are described. Data requirements and evaluation procedures for verifying the resource are discussed.

  18. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  19. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  20. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  1. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  2. 10 CFR 72.79 - Facility information and verification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... DOC/NRC Form AP-A and associated forms; (b) Shall submit location information described in § 75.11 of this chapter on DOC/NRC Form AP-1 and associated forms; and (c) Shall permit verification thereof...

  3. Location and distribution of a receptor for the 987P pilus of Escherichia coli in small intestines.

    PubMed

    Dean, E A; Isaacson, R E

    1985-02-01

    Frozen sections of rabbit or pig small intestines were stained with fluorescein-labeled antibody specific for the 987P receptor isolated from adult rabbit small intestines. The 987P receptor was present along the entire villous surface and in goblet cells in adult rabbits, but only in goblet cells in infant rabbits. In adult rabbits, the receptor was distributed equally in the jejunum and the ileum. Material antigenically similar to the rabbit 987P receptor was demonstrated in goblet cells in neonatal piglet ileum.

  4. [Measurements of location of body fat distribution: an assessment of colinearity with body mass, adiposity and stature in female adolescents].

    PubMed

    Pereira, Patrícia Feliciano; Serrano, Hiara Miguel Stanciola; Carvalho, Gisele Queiroz; Ribeiro, Sônia Machado Rocha; Peluzio, Maria do Carmo Gouveia; Franceschini, Sylvia do Carmo Castro; Priore, Silvia Eloiza

    2015-01-01

    To verify the correlation of body fat location measurements with the body mass index (BMI), percentage of body fat (%BF) and stature, according to nutritional status, in female adolescents. A controlled cross-sectional study was carried out with 113 adolescents (G1: 38 eutrophic but with a high body fat level; G2: 40 eutrophic; G3: 35 overweight) from public schools in Viçosa-MG, Brazil. The following measures were assessed: weight, stature, waist circumference (WC), umbilical circumference (UC), hip circumference (HC), thigh circumference, waist-to-hip ratio (WHR), waist-to-stature ratio (WSR), waist-to-thigh ratio (WTR), conicity index (CI), sagittal abdominal diameter (SAD), coronal diameter (CD), and central (CS) and peripheral (PS) skinfolds. The %BF was assessed by tetrapolar electric bioimpedance. The increase of central fat, represented by WC, UC, WSR, SAD, CD and CS, and the increase of peripheral fat, indicated by HC and thigh circumference, were proportional to the increase of BMI and %BF. WC and especially UC showed the strongest correlations with adiposity. Weak correlations of WHR, WTR, CI and CS/PS with adiposity were observed. Stature showed correlation with almost all the fat location measures, the correlation with waist measures being regular or weak. The results indicate collinearity of body mass and total adiposity with central and peripheral adipose tissue. We recommend the use of UC for assessing the nutritional status of adolescents, because it showed the highest ability to predict adiposity in each group and presented only regular or weak correlation with stature. Copyright © 2014 Associação de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.

  5. Lunar Pickup Ions Observed by ARTEMIS: Spatial and Temporal Distribution and Constraints on Species and Source Locations

    NASA Technical Reports Server (NTRS)

    Halekas, Jasper S.; Poppe, A. R.; Delory, G. T.; Sarantos, M.; Farrell, W. M.; Angelopoulos, V.; McFadden, J. P.

    2012-01-01

    ARTEMIS observes pickup ions around the Moon, at distances of up to 20,000 km from the surface. The observed ions form a plume with a narrow spatial and angular extent, generally seen in a single energy/angle bin of the ESA instrument. Though ARTEMIS has no mass resolution capability, we can utilize the analytically describable characteristics of pickup ion trajectories to constrain the possible ion masses that can reach the spacecraft at the observation location in the correct energy/angle bin. We find that most of the observations are consistent with a mass range of approx. 20-45 amu, with a smaller fraction consistent with higher masses, and very few consistent with masses below 15 amu. With the assumption that the highest fluxes of pickup ions come from near the surface, the observations favor mass ranges of approx. 20-24 and approx. 36-40 amu. Although many of the observations have properties consistent with a surface or near-surface release of ions, some do not, suggesting that at least some of the observed ions have an exospheric source. Of all the proposed sources for ions and neutrals about the Moon, the pickup ion flux measured by ARTEMIS correlates best with the solar wind proton flux, indicating that sputtering plays a key role in either directly producing ions from the surface, or producing neutrals that subsequently become ionized.
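
    A minimal sketch of the trajectory reasoning used to constrain ion mass: integrate the Lorentz force on a test ion born at rest in the solar wind's motional electric field and examine where ions of a given mass can arrive. The field and solar wind values are illustrative placeholders.

        # Test-particle pickup ion trajectory (cycloid) for a given mass.
        import numpy as np

        def pickup_trajectory(m_amu, q=1.602e-19, t_end=200.0, dt=0.01):
            """Integrate an ion of mass m_amu born at rest at the origin."""
            m = m_amu * 1.661e-27                  # ion mass (kg)
            v_sw = np.array([-400e3, 0.0, 0.0])    # solar wind velocity (m/s), toy
            B = np.array([0.0, 0.0, 5e-9])         # IMF (T), toy
            E = -np.cross(v_sw, B)                 # motional electric field
            x, v = np.zeros(3), np.zeros(3)
            path = []
            for _ in range(int(t_end / dt)):
                a = (q / m) * (E + np.cross(v, B)) # Lorentz acceleration
                v = v + a * dt                     # simple Euler step (sketch only)
                x = x + v * dt
                path.append(x.copy())
            return np.array(path)                  # cycloid in the x-y plane

        # Position (km) reached after 200 s by an ion in the ~22 amu mass range.
        print(pickup_trajectory(22.0)[-1] / 1e3, "km")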

  6. Fuel Retrieval System Design Verification Report

    SciTech Connect

    GROTH, B.D.

    2000-04-11

    The Fuel Retrieval Subproject was established as part of the Spent Nuclear Fuel Project (SNF Project) to retrieve and repackage the SNF located in the K Basins. The Fuel Retrieval System (FRS) construction work is complete in the KW Basin, and start-up testing is underway. Design modifications and construction planning are also underway for the KE Basin. An independent review of the design verification process as applied to the K Basin projects was initiated in support of preparation for the SNF Project operational readiness review (ORR). A Design Verification Status Questionnaire (Table 1) is included, which addresses Corrective Action SNF-EG-MA-EG-20000060, Item No. 9 (Miller 2000).

  7. Investigation of Reflectance Distribution and Trend for the Double Ray Located in the Northwest of Tycho Crater

    NASA Astrophysics Data System (ADS)

    Yi, Eung Seok; Kim, Kyeong Ja; Choi, Yi Re; Kim, Yong Ha; Lee, Sung Soon; Lee, Seung Ryeol

    2015-06-01

    Analysis of lunar samples returned by the US Apollo missions revealed that the lunar highlands consist of anorthosite, plagioclase, pyroxene, and olivine, while the lunar maria are composed of materials such as basalt and ilmenite. More recently, the remote sensing approach has made it possible to investigate the entire lunar surface in far less time than the sample-return approach, to determine the existence of specific minerals, and to examine wide areas. In this paper, an investigation was performed on the reflectance distribution and its trend, and the results were applied to the example of the double ray stretched in parallel lines from the Tycho crater to the third quadrant of Mare Nubium. Basic research and background information for the investigation of lunar surface characteristics are also presented. This research used instruments aboard the SELenological and ENgineering Explorer (SELENE), a Japanese lunar probe, including the Multiband Imager (MI) of the Lunar Imager/Spectrometer (LISM) suite. The data from these instruments were processed with the image editing and analysis tool ENVI (Exelis Visual Information Solutions).

  8. A statistical study of the spatial distribution of Co-operative UK Twin Located Auroral Sounding System (CUTLASS) backscatter power during EISCAT heater beam-sweeping experiments

    NASA Astrophysics Data System (ADS)

    Shergill, H.; Robinson, T. R.; Dhillon, R. S.; Lester, M.; Milan, S. E.; Yeoman, T. K.

    2010-05-01

    High-power electromagnetic waves can excite a variety of plasma instabilities in Earth's ionosphere. These lead to the growth of plasma waves and plasma density irregularities within the heated volume, including patches of small-scale field-aligned electron density irregularities. This paper reports a statistical study of intensity distributions in patches of these irregularities excited by the European Incoherent Scatter (EISCAT) heater during beam-sweeping experiments. The irregularities were detected by the Co-operative UK Twin Located Auroral Sounding System (CUTLASS) coherent scatter radar located in Finland. During these experiments the heater beam direction is steadily changed from northward to southward pointing. Comparisons are made between statistical parameters of CUTLASS backscatter power distributions and modeled heater beam power distributions provided by the EZNEC version 4 software. In general, good agreement between the statistical parameters and the modeled beam is observed, clearly indicating the direct causal connection between the heater beam and the irregularities, despite the sometimes seemingly unpredictable nature of unaveraged results. The results also give compelling evidence in support of the upper hybrid theory of irregularity excitation.

  9. Perception of drinking water in the Quebec City region (Canada): the influence of water quality and consumer location in the distribution system.

    PubMed

    Turgeon, Steve; Rodriguez, Manuel J; Thériault, Marius; Levallois, Patrick

    2004-04-01

    The purpose of every water utility is to provide consumers with drinking water that is aesthetically acceptable and presents no risk to public health. Several studies have been carried out to analyze people's perception of and attitude about the drinking water coming from their water distribution systems. The goal of the present study is to investigate the influence of water quality and the geographic location of consumers within a distribution system on consumer perception of tap water. The study is based on data obtained from two surveys carried out in municipalities of the Quebec City area (Canada). Three perception variables were used to study consumer perception: general satisfaction, taste satisfaction and risk perception. Data analysis based on logistic regression indicates that water quality variations and geographic location in the distribution system have a significant impact on consumer perception. This impact appears to be strongly associated with residual chlorine levels. The study also confirms the influence of consumers' socio-economic characteristics on their perception of drinking water quality.
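
    As a purely illustrative sketch of the kind of analysis described (not the authors' actual model or data), a logistic regression of a binary satisfaction response on residual chlorine and a consumer's distance along the distribution system might look like the following; all variables and data are synthetic:

    ```python
    # Hypothetical logistic regression of taste satisfaction on residual
    # chlorine and distance from the treatment plant. Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    chlorine = rng.uniform(0.1, 1.2, n)      # residual chlorine, mg/L
    distance = rng.uniform(0.0, 15.0, n)     # km from treatment plant
    # Assumed relationship: higher chlorine lowers taste satisfaction.
    logit = 1.5 - 2.0 * chlorine + 0.05 * distance
    satisfied = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    model = LogisticRegression().fit(np.column_stack([chlorine, distance]), satisfied)
    print("coefficients (chlorine, distance):", model.coef_[0])
    print("odds ratios:", np.exp(model.coef_[0]))
    ```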

  10. Study of scattering from a sphere with an eccentrically located spherical inclusion by generalized Lorenz-Mie theory: internal and external field distribution.

    PubMed

    Wang, J J; Gouesbet, G; Han, Y P; Gréhan, G

    2011-01-01

    Based on the recent results in the generalized Lorenz-Mie theory, solutions for scattering problems of a sphere with an eccentrically located spherical inclusion illuminated by an arbitrary shaped electromagnetic beam in an arbitrary orientation are obtained. Particular attention is paid to the description and application of an arbitrary shaped beam in an arbitrary orientation to the scattering problem under study. The theoretical formalism is implemented in a homemade computer program written in FORTRAN. Numerical results concerning spatial distributions of both internal and external fields are presented in several formats in order to properly illustrate exemplifying results. More specifically, as an example, we consider the case of a focused fundamental Gaussian beam (TEM(00) mode) illuminating a glass sphere (having a real refractive index equal to 1.50) with an eccentrically located spherical water inclusion (having a real refractive index equal to 1.33). Displayed results are for various parameters of the incident electromagnetic beam (incident orientation, beam waist radius, location of the beam waist center) and of the scatterer system (location of the inclusion inside the host sphere and relative diameter of the inclusion to the host sphere).
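
    The eccentric-inclusion, shaped-beam problem solved in the paper is well beyond a short example, but the underlying Lorenz-Mie machinery can be sketched for the much simpler case of a homogeneous sphere under plane-wave illumination. The sketch below computes the classical Mie coefficients and the extinction efficiency; the refractive index and size parameter are illustrative, and a real (lossless) index is assumed:

    ```python
    # Classical Mie coefficients a_n, b_n and extinction efficiency Q_ext for a
    # homogeneous sphere, plane-wave illumination, real refractive index m,
    # size parameter x = 2*pi*a/lambda. A sketch, not the paper's formalism.
    import numpy as np
    from scipy.special import spherical_jn, spherical_yn

    def mie_qext(m, x, nmax=None):
        nmax = nmax or int(x + 4 * x ** (1 / 3) + 2)   # Wiscombe-style cutoff
        n = np.arange(1, nmax + 1)
        jx, yx = spherical_jn(n, x), spherical_yn(n, x)
        jmx = spherical_jn(n, m * x)
        jx1, yx1 = spherical_jn(n - 1, x), spherical_yn(n - 1, x)
        jmx1 = spherical_jn(n - 1, m * x)
        # Riccati-Bessel functions psi(z) = z j_n(z), xi(z) = z (j_n + i y_n),
        # with derivatives from the identity (z f_n)' = z f_{n-1} - n f_n.
        psi_x, dpsi_x = x * jx, x * jx1 - n * jx
        psi_mx, dpsi_mx = m * x * jmx, m * x * jmx1 - n * jmx
        xi_x = x * (jx + 1j * yx)
        dxi_x = x * (jx1 + 1j * yx1) - n * (jx + 1j * yx)
        a = (m * psi_mx * dpsi_x - psi_x * dpsi_mx) / (m * psi_mx * dxi_x - xi_x * dpsi_mx)
        b = (psi_mx * dpsi_x - m * psi_x * dpsi_mx) / (psi_mx * dxi_x - m * xi_x * dpsi_mx)
        return (2.0 / x**2) * np.sum((2 * n + 1) * (a + b).real)

    print("Qext for m=1.50, x=10:", round(mie_qext(1.50, 10.0), 4))
    ```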

  11. Computation and Analysis of the Global Distribution of the Radioxenon Isotope 133Xe based on Emissions from Nuclear Power Plants and Radioisotope Production Facilities and its Relevance for the Verification of the Nuclear-Test-Ban Treaty

    NASA Astrophysics Data System (ADS)

    Wotawa, Gerhard; Becker, Andreas; Kalinowski, Martin; Saey, Paul; Tuma, Matthias; Zähringer, Matthias

    2010-05-01

    Monitoring of radioactive noble gases, in particular xenon isotopes, is a crucial element of the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The capability of the noble gas network, which is currently under construction, to detect signals from a nuclear explosion critically depends on the background created by other sources. Therefore, the global distribution of these isotopes based on emissions and transport patterns needs to be understood. A significant xenon background exists in the reactor regions of North America, Europe and Asia. An emission inventory of the four relevant xenon isotopes has recently been created, which specifies source terms for each power plant. As the major emitters of xenon isotopes worldwide, a few medical radioisotope production facilities have been recently identified, in particular the facilities in Chalk River (Canada), Fleurus (Belgium), Pelindaba (South Africa) and Petten (Netherlands). Emissions from these sites are expected to exceed those of the other sources by orders of magnitude. In this study, emphasis is put on 133Xe, which is the most prevalent xenon isotope. First, based on the emissions known, the resulting 133Xe concentration levels at all noble gas stations of the final CTBT verification network were calculated and found to be consistent with observations. Second, it turned out that emissions from the radioisotope facilities can explain a number of observed peaks, meaning that atmospheric transport modelling is an important tool for the categorization of measurements. Third, it became evident that Nuclear Power Plant emissions are more difficult to treat in the models, since their temporal variation is high and not generally reported. Fourth, there are indications that the assumed annual emissions may be underestimated by factors of two to ten, while the general emission patterns seem to be well understood. Finally, it became evident that 133Xe sources mainly influence the sensitivity of the
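
    A toy version of the bookkeeping such a study rests on: combine an emission inventory with source-receptor sensitivities from an atmospheric transport model, and decay 133Xe (half-life about 5.25 days) over the travel time. All numbers below are invented for illustration:

    ```python
    # Station concentration as a sum of per-source contributions: emission rate
    # times a transport-model dilution factor, attenuated by Xe-133 decay over
    # the travel time. All values are made up.
    import numpy as np

    HALF_LIFE_S = 5.25 * 86400.0
    LAMBDA = np.log(2.0) / HALF_LIFE_S

    sources_bq_per_s = np.array([5e9, 1e8, 3e7])          # e.g. isotope plant, two NPPs
    dilution_s_per_m3 = np.array([2e-13, 8e-13, 5e-14])   # source-receptor sensitivities
    travel_time_s = np.array([3 * 86400.0, 1 * 86400.0, 6 * 86400.0])

    conc = sources_bq_per_s * dilution_s_per_m3 * np.exp(-LAMBDA * travel_time_s)
    print("per-source contribution (mBq/m^3):", 1e3 * conc)
    print("total at station (mBq/m^3):", 1e3 * conc.sum())
    ```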

  12. The Assembly, Integration, and Verification (AIV) team

    NASA Astrophysics Data System (ADS)

    2009-06-01

    Assembly, Integration, and Verification (AIV) is the process by which the software and hardware deliveries from the distributed ALMA partners (North America, South America, Europe, and East Asia) are assembled and integrated into a working system, and the initial technical capabilities tested to ensure that they will meet the observatory's exacting requirements for science.

  13. Voltage verification unit

    DOEpatents

    Martin, Edward J [Virginia Beach, VA

    2008-01-15

    A voltage verification unit and method for determining the absence of potentially dangerous potentials within a power supply enclosure without Mode 2 work is disclosed. With this device and method, a qualified worker follows a relatively simple protocol that involves a function test (hot, cold, hot) of the voltage verification unit before Lock Out/Tag Out; once the Lock Out/Tag Out is completed, testing or "trying" can be accomplished by simply reading a display on the voltage verification unit, without exposure of the operator to the interior of the voltage supply enclosure. According to a preferred embodiment, the voltage verification unit includes test leads to allow diagnostics with other meters, without the necessity of accessing potentially dangerous bus bars or the like.

  14. SU-E-J-58: Dosimetric Verification of Metal Artifact Effects: Comparison of Dose Distributions Affected by Patient Teeth and Implants

    SciTech Connect

    Lee, M; Kang, S; Lee, S; Suh, T; Lee, J; Park, J; Park, H; Lee, B

    2014-06-01

    Purpose: Implant-supported dentures seem particularly appropriate for the predicament of becoming edentulous, and cancer patients are no exception. As the number of people with dental implants increases across age groups, critical dosimetric verification of metal artifact effects is required for more accurate head and neck radiation therapy. The purpose of this study is to verify the theoretical analysis of metal (streak and dark) artifacts and to evaluate the dosimetric effects caused by dental implants in CT images, using a humanoid phantom with patient teeth and implants inserted. Methods: The phantom comprises a cylinder shaped to simulate the anatomical structures of a human head and neck. Various clinical cases were applied so that the phantom closely resembles a human. The developed phantom supports two configurations: (i) closed mouth and (ii) opened mouth. RapidArc plans for 4 cases were created in the Eclipse planning system. A total dose of 2000 cGy in 10 fractions was prescribed to the whole planning target volume (PTV) using 6 MV photon beams. The Acuros XB (AXB) advanced dose calculation algorithm, the Analytical Anisotropic Algorithm (AAA), and the progressive resolution optimizer were used in dose optimization and calculation. Results: In both the closed- and opened-mouth phantoms, because dark artifacts formed extensively around the metal implants, dose variation was relatively higher than that for streak artifacts. When the PTV was delineated on the dark regions or large streak artifact regions, a maximum 7.8% dose error and an average 3.2% difference were observed. The averaged minimum dose to the PTV predicted by AAA was about 5.6% higher, and OAR doses were also 5.2% higher, compared to AXB. Conclusion: The results of this study showed that AXB dose calculation involving high-density materials is more accurate than AAA calculation, and AXB was superior to AAA in dose predictions beyond the dark artifact/air cavity portion when compared against the measurements.
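
    As a simplified illustration of one way two such dose grids can be compared (a voxelwise dose-difference check, not the full gamma analysis or the authors' measurement protocol), consider the following sketch on synthetic data:

    ```python
    # Voxelwise percent dose difference between two dose grids (e.g. AXB vs.
    # AAA), relative to the prescription, plus the fraction of voxels within a
    # 3% tolerance. All data are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    prescription = 2000.0                                           # cGy
    dose_a = prescription * (0.95 + 0.10 * rng.random((50, 50)))    # reference grid
    dose_b = dose_a * (1.0 + 0.03 * rng.standard_normal((50, 50)))  # perturbed copy

    pct_diff = 100.0 * (dose_b - dose_a) / prescription
    print(f"max |difference|: {np.abs(pct_diff).max():.2f}% of prescription")
    print(f"mean difference : {pct_diff.mean():+.2f}%")
    print(f"voxels within 3%: {100.0 * np.mean(np.abs(pct_diff) <= 3.0):.1f}%")
    ```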

  15. Modular Machine Code Verification

    DTIC Science & Technology

    2007-05-01

    One approach to assembly-level program verification is to design a type system for assembly language, partly inspired by the Typed Intermediate Language (TIL) and designed to support direct verification of assembly programs with non-trivial properties not expressible in traditional types.

  16. ENVIRONMENTAL TECHNOLOGY VERIFICATION--FUELCELL ENERGY, INC.: DFC 300A MOLTEN CARBONATE FUEL CELL COMBINED HEAT AND POWER SYSTEM

    EPA Science Inventory

    The U.S. EPA operates the Environmental Technology Verification program to facilitate the deployment of innovative technologies through performance verification and information dissemination. A technology area of interest is distributed electrical power generation, particularly w...

  18. How do wetland type and location affect their hydrological services? - A distributed hydrological modelling study of the contribution of isolated and riparian wetlands

    NASA Astrophysics Data System (ADS)

    Fossey, Maxime; Rousseau, Alain N.; Savary, Stéphane; Royer, Alain

    2015-04-01

    Wetlands play a significant role on the hydrological cycle, reducing peak flows through water storage functions and sustaining low flows through slow release of water. However, their impacts on water resource availability and flood control are mainly driven by wetland types and locations within a watershed. So, despite the general agreement about these major hydrological functions, little is known about their spatial and typological influences. Consequently, assessing the quantitative impact of wetlands on hydrological regimes has become a relevant issue for both the scientific community and the decision-maker community. To investigate the hydrologic response at the watershed scale, mathematical modelling has been a well-accepted framework. Specific isolated and riparian wetland modules were implemented in the PHYSITEL/HYDROTEL distributed hydrological modelling platform to assess the impact of the spatial distribution of isolated and riparian wetlands on the stream flows of the Becancour River watershed, Quebec, Canada. More specifically, the focus was on assessing whether stream flow parameters, including peak flow and low flow, were related to: (i) geographic location of wetlands, (ii) typology of wetlands, and (iii) season of the year. Preliminary results suggest that isolated and riparian wetlands have individual space- and time-dependent impacts on the hydrologic response of the study watershed and provide relevant information for the design of wetland protection and restoration programs.

  19. Modular verification of concurrent systems

    SciTech Connect

    Sobel, A.E.K.

    1986-01-01

    During the last ten years, a number of authors have proposed verification techniques that allow one to prove properties of individual processes by using global assumptions about the behavior of the remaining processes in the distributed program. As a result, one must justify these global assumptions before drawing any conclusions regarding the correctness of the entire program. This justification is often the most difficult part of the proof and presents a serious obstacle to hierarchical program development. This thesis develops a new approach to the verification of concurrent systems. The approach is modular and supports compositional development of programs since the proofs of each individual process of a program are completely isolated from all others. The generality of this approach is illustrated by applying it to a representative set of contemporary concurrent programming languages, namely: CSP, ADA, Distributed Processes, and a shared variable language. In addition, it is also shown how the approach may be used to deal with a number of other constructs that have been proposed for inclusion in concurrent languages: FORK and JOIN primitives, nested monitor calls, path expressions, atomic transactions, and asynchronous message passing. These results support the argument that the approach is universal and can be used to design proof systems for any concurrent language.

  20. [Spatial Distribution of Type 2 Diabetes Mellitus in Berlin: Application of a Geographically Weighted Regression Analysis to Identify Location-Specific Risk Groups].

    PubMed

    Kauhl, Boris; Pieper, Jonas; Schweikart, Jürgen; Keste, Andrea; Moskwyn, Marita

    2017-02-16

    Understanding which population groups in which locations are at higher risk for type 2 diabetes mellitus (T2DM) allows efficient and cost-effective interventions targeting these risk populations in the specific locations where they are most needed. The goal of this study was to analyze the spatial distribution of T2DM and to identify the location-specific, population-based risk factors using global and local spatial regression models. To display the spatial heterogeneity of T2DM, bivariate kernel density estimation was applied. An ordinary least squares (OLS) regression model was applied to identify population-based risk factors of T2DM. A geographically weighted regression (GWR) model was then constructed to analyze the spatially varying association between the identified risk factors and T2DM. T2DM is especially concentrated in the east and on the outskirts of Berlin. The OLS model identified the proportions of persons aged 80 and older, persons without a migration background, long-term unemployment, and households with children as socio-demographic risk factors, along with a negative association with single-parent households. The results of the GWR model point out important local variations in the strength of association between the identified risk factors and T2DM. The risk factors for T2DM depend largely on the socio-demographic composition of the neighborhoods in Berlin and highlight that a one-size-fits-all approach is not appropriate for the prevention of T2DM. Future prevention strategies should be tailored to target location-specific risk groups. © Georg Thieme Verlag KG Stuttgart · New York.
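
    The contrast between the OLS and GWR models comes down to letting regression coefficients vary over space via kernel weights. A minimal hand-rolled GWR sketch on synthetic data, with an assumed Gaussian kernel and bandwidth (real analyses would select the bandwidth by cross-validation):

    ```python
    # Geographically weighted regression: at each location, fit weighted least
    # squares with Gaussian kernel weights, so the slope can vary over space.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 300
    coords = rng.uniform(0, 10, size=(n, 2))       # e.g. neighborhood centroids
    x = rng.random(n)                              # a single risk factor
    beta_true = 1.0 + 0.3 * coords[:, 0]           # effect strengthens eastward
    y = 2.0 + beta_true * x + 0.1 * rng.standard_normal(n)

    def gwr_coeffs(coords, X, y, bandwidth=2.0):
        Xd = np.column_stack([np.ones(len(y)), X])   # intercept + predictor
        out = np.empty((len(y), Xd.shape[1]))
        for i, c in enumerate(coords):
            d2 = np.sum((coords - c) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel weights
            W = np.sqrt(w)[:, None]
            out[i], *_ = np.linalg.lstsq(Xd * W, y * W[:, 0], rcond=None)
        return out

    coefs = gwr_coeffs(coords, x, y)
    print("local slope range:", coefs[:, 1].min().round(2), "to", coefs[:, 1].max().round(2))
    ```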

  1. Aerosol mass size distribution and black carbon over a high altitude location in Western Trans-Himalayas: Impact of a dust episode

    NASA Astrophysics Data System (ADS)

    Kompalli, Sobhan Kumar; Krishna Moorthy, K.; Suresh Babu, S.; Manoj, M. R.

    2014-12-01

    The information on the aerosol properties from remote locations provides insights into the background and natural conditions against which anthropogenic impacts could be compared. Measurements of the near surface aerosol mass size distribution from a high altitude remote site help us to understand the natural processes, such as the association between Aeolian and fluvial processes, which have a direct bearing on the mass concentrations, especially in the larger size ranges. In the present study, the total mass concentration and mass-size distribution of the near surface aerosols, measured using a 10-channel Quartz Crystal Microbalance (QCM) Impactor from a high altitude location, Hanle (32.78°N, 78.95°E, 4520 m asl) in the western Trans-Himalayas, have been used to characterize the composite aerosols. The impact of a highly localized, short-duration dust storm episode on the mass size distribution has also been examined. In general, though the total mass concentration (Mt) remained very low (∼0.75 ± 0.61 μg m⁻³), interestingly, coarse mode (super-micron) aerosols contributed almost 72 ± 6% to the total aerosol mass loading near the surface. The mass-size distribution showed three modes: a fine particle mode (∼0.2 μm), an accumulation mode at ∼0.5 μm, and a coarse mode at ∼3 μm. During a localized short duration dust storm episode, Mt reached as high as ∼13.5 μg m⁻³, with coarse mode aerosols contributing nearly 90% of it. The mass size distribution changed significantly, with a broad coarse mode so that the accumulation mode became inconspicuous. Concurrent measurements of aerosol black carbon (BC) using the twin-wavelength aethalometer showed an increase in the wavelength index of absorption from the normal value of ∼1 to 1.5, signifying enhanced absorption at the short wavelength (380 nm) by the dust.
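
    A mass-size distribution of this kind is commonly represented as a sum of lognormal modes. The sketch below uses the three mode diameters reported above (~0.2, ~0.5, and ~3 μm) and mass fractions chosen to mimic the reported ~72% coarse contribution; the geometric standard deviations are assumed, not taken from the paper:

    ```python
    # Trimodal lognormal mass-size distribution dM/dlogD. Mode diameters follow
    # the text; mass splits and geometric standard deviations are assumed.
    import numpy as np

    def dM_dlogD(D, modes):
        """modes: list of (mass_conc, median_diameter, geometric_std_dev)."""
        total = np.zeros_like(D)
        for M, Dg, sg in modes:
            total += (M / (np.sqrt(2 * np.pi) * np.log10(sg))) * \
                     np.exp(-(np.log10(D / Dg) ** 2) / (2 * np.log10(sg) ** 2))
        return total

    D = np.logspace(-2, 1.5, 200)   # diameter, micrometres
    modes = [(0.05, 0.2, 1.6), (0.16, 0.5, 1.7), (0.54, 3.0, 2.0)]  # ~0.75 ug/m3 total
    dist = dM_dlogD(D, modes)
    print("peak near D =", D[np.argmax(dist)].round(2), "um")
    ```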

  2. Explaining Verification Conditions

    NASA Technical Reports Server (NTRS)

    Deney, Ewen; Fischer, Bernd

    2006-01-01

    The Hoare approach to program verification relies on the construction and discharge of verification conditions (VCs) but offers no support to trace, analyze, and understand the VCs themselves. We describe a systematic extension of the Hoare rules by labels so that the calculus itself can be used to build up explanations of the VCs. The labels are maintained through the different processing steps and rendered as natural language explanations. The explanations can easily be customized and can capture different aspects of the VCs; here, we focus on their structure and purpose. The approach is fully declarative and the generated explanations are based only on an analysis of the labels rather than directly on the logical meaning of the underlying VCs or their proofs. Keywords: program verification, Hoare calculus, traceability.
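
    For a concrete toy instance of the kind of verification condition being explained, take the standard Hoare assignment rule applied to x := x + 1 with precondition x ≥ 0 and postcondition x ≥ 1; the generated VC is the implication below, to which a label would be attached and later rendered as a natural-language explanation (this example is ours, not taken from the paper):

    ```latex
    % Toy VC for {x >= 0} x := x + 1 {x >= 1}, via wp(x := e, Q) = Q[e/x].
    \[
      \underbrace{x \ge 0}_{\text{precondition}}
      \;\Longrightarrow\;
      \underbrace{x + 1 \ge 1}_{wp(x := x + 1,\ x \ge 1)}
    \]
    ```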

  3. Voice verification upgrade

    NASA Astrophysics Data System (ADS)

    Davis, R. L.; Sinnamon, J. T.; Cox, D. L.

    1982-06-01

    This contract had two major objectives. The first was to build, test, and deliver to the government an entry control system using speaker verification (voice authentication) as the mechanism for verifying the user's claimed identity. This system included a physical mantrap, with an integral weight scale to prevent more than one user from gaining access with one verification (tailgating). The speaker verification part of the entry control system contained all the updates and embellishments to the algorithm that was developed earlier for the BISS (Base and Installation Security System) system under contract with the Electronic Systems Division of the USAF. These updates were tested prior to and during the contract on an operational system used at Texas Instruments in Dallas, Texas, for controlling entry to the Corporate Information Center (CIC).

  4. Wind gust warning verification

    NASA Astrophysics Data System (ADS)

    Primo, Cristina

    2016-07-01

    Operational meteorological centres around the world increasingly include warnings as one of their regular forecast products. Warnings are issued to alert the public about extreme weather situations that might occur and lead to damage and losses. By forecasting these extreme events, meteorological centres help their users prevent the damage or losses they might suffer. However, verifying these warnings requires specific methods, not only because such events happen rarely, but also because a new temporal dimension is added when defining a warning, namely the time window of the forecasted event. This paper analyses the issues that can arise when dealing with warning verification and proposes some new verification approaches that can be applied to wind warnings. These new techniques are then applied to a real-life example, the verification of wind gust warnings at the German Meteorological Centre ("Deutscher Wetterdienst"), and the results are discussed.
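
    The time-window issue can be made concrete with a toy matching rule: a warning verifies as a hit if any observed gust above the threshold falls inside its validity interval, and an observed gust with no covering warning is a miss. A minimal sketch with invented events (not the paper's method or data):

    ```python
    # Warning verification with interval matching: hits, misses, false alarms,
    # then probability of detection (POD) and false-alarm ratio (FAR).
    from datetime import datetime

    warnings = [(datetime(2016, 7, 1, 6), datetime(2016, 7, 1, 18))]  # (start, end)
    events = [datetime(2016, 7, 1, 9), datetime(2016, 7, 2, 3)]       # observed gusts

    hits = sum(any(s <= e <= t for s, t in warnings) for e in events)
    misses = len(events) - hits
    false_alarms = sum(not any(s <= e <= t for e in events) for s, t in warnings)

    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / max(1, len(warnings))    # false-alarm ratio
    print(f"POD={pod:.2f}  FAR={far:.2f}")
    ```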

  5. Nuclear disarmament verification

    SciTech Connect

    DeVolpi, A.

    1993-12-31

    Arms control treaties, unilateral actions, and cooperative activities -- reflecting the defusing of East-West tensions -- are causing nuclear weapons to be disarmed and dismantled worldwide. In order to provide for future reductions and to build confidence in the permanency of this disarmament, verification procedures and technologies would play an important role. This paper outlines arms-control objectives, treaty organization, and actions that could be undertaken. For the purposes of this Workshop on Verification, nuclear disarmament has been divided into five topical subareas: Converting nuclear-weapons production complexes, Eliminating and monitoring nuclear-weapons delivery systems, Disabling and destroying nuclear warheads, Demilitarizing or non-military utilization of special nuclear materials, and Inhibiting nuclear arms in non-nuclear-weapons states. This paper concludes with an overview of potential methods for verification.

  6. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Verification ensures that the CFD code is solving the equations as intended (no errors in the implementation). This is typically done either through the method of manufactured solutions (MMS) or through careful step-by-step comparisons with other verified codes. After the verification step is concluded, validation is performed to document the ability of the turbulence model to represent different types of flow physics. Validation can involve a large number of test case comparisons with experiments, theory, or DNS. Organized workshops have proved to be valuable resources for the turbulence modeling community in its pursuit of turbulence modeling verification and validation. Workshop contributors using different CFD codes run the same cases, often according to strict guidelines, and compare results. Through these comparisons, it is often possible to (1) identify codes that have likely implementation errors, and (2) gain insight into the capabilities and shortcomings of different turbulence models to predict the flow physics associated with particular types of flows. These are valuable lessons because they help bring consistency to CFD codes by encouraging the correction of faulty programming and facilitating the adoption of better models. They also sometimes point to specific areas needed for improvement in the models. In this paper, several recent workshops are summarized primarily from the point of view of turbulence modeling verification and validation. Furthermore, the NASA Langley Turbulence Modeling Resource website is described. The purpose of this site is to provide a central location where RANS turbulence models are documented, and test cases, grids, and data are provided. The goal of this paper is to provide an abbreviated survey of turbulence modeling verification and validation efforts, summarize some of the outcomes, and give some ideas for future endeavors in this area.

  7. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  8. Requirement Assurance: A Verification Process

    NASA Technical Reports Server (NTRS)

    Alexander, Michael G.

    2011-01-01

    Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "did the product meet the stated specification, performance, or design documentation?". In order to ensure the system was built correctly, the practicing system engineer must verify each product requirement using verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts" which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the System Engineer in NPR 7123.1A NASA Systems Engineering Processes and Requirements with regards to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.

  9. Verification of statistical method CORN for modeling of microfuel in the case of high grain concentration

    NASA Astrophysics Data System (ADS)

    Chukbar, B. K.

    2015-12-01

    Two methods of modeling a double-heterogeneity fuel are studied: the deterministic positioning and the statistical method CORN of the MCU software package. The effect of the distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for microfuel concentrations of up to 170 cm⁻³ in a pebble bed are presented. The admissibility of homogenization of the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.

  11. Voice Verification Upgrade.

    DTIC Science & Technology

    1982-06-01

    The objective was to develop speaker verification techniques for use over degraded communication channels, specifically telephone lines. A test of BISS-type speaker verification technology was performed on a degraded channel, and compensation techniques were then developed. (Rome Air Development Center.)

  12. General Environmental Verification Specification

    NASA Technical Reports Server (NTRS)

    Milne, J. Scott, Jr.; Kaufman, Daniel S.

    2003-01-01

    The NASA Goddard Space Flight Center's General Environmental Verification Specification (GEVS) for STS and ELV Payloads, Subsystems, and Components is currently being revised based on lessons learned from GSFC engineering and flight assurance. The GEVS has been used by Goddard flight projects for the past 17 years as a baseline from which to tailor their environmental test programs. A summary of the requirements and updates is presented along with the rationale behind the changes. The major test areas covered by the GEVS include mechanical, thermal, and EMC, as well as more general requirements for planning and tracking of the verification programs.

  13. Location Privacy

    NASA Astrophysics Data System (ADS)

    Meng, Xiaofeng; Chen, Jidong

    With the rapid development of sensors and wireless mobile devices, it is easy to access mobile users' location information anytime and anywhere. On one hand, LBS is becoming more and more valuable and important. On the other hand, the location privacy issues raised by such applications have also gained more attention. However, due to the specificity of location information, traditional privacy-preserving techniques for data publishing cannot be used. In this chapter, we introduce location privacy, analyze the challenges of location privacy preservation, and give a survey of existing work, including system architecture, location anonymity, and query processing.
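
    One widely used location-anonymity technique that such surveys cover is spatial cloaking for k-anonymity: the user's exact position is replaced by a region containing at least k users, so an LBS query cannot single anyone out. A minimal sketch, with synthetic positions and an assumed k:

    ```python
    # Spatial cloaking: report the bounding box of the user and their k-1
    # nearest neighbors instead of the exact point. Positions are synthetic.
    import numpy as np

    def cloak(positions, user_idx, k=5):
        d = np.linalg.norm(positions - positions[user_idx], axis=1)
        group = np.argsort(d)[:k]                  # user plus k-1 nearest others
        box = positions[group]
        return box.min(axis=0), box.max(axis=0)    # cloaking rectangle

    rng = np.random.default_rng(3)
    users = rng.uniform(0, 1000, size=(200, 2))    # metres
    lo, hi = cloak(users, user_idx=0, k=5)
    print("box sent to the LBS instead of the exact point:", lo.round(1), hi.round(1))
    ```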

  14. Influence of pH, layer charge location and crystal thickness distribution on U(VI) sorption onto heterogeneous dioctahedral smectite.

    PubMed

    Guimarães, Vanessa; Rodríguez-Castellón, Enrique; Algarra, Manuel; Rocha, Fernando; Bobos, Iuliu

    2016-11-05

    The UO2(2+) adsorption on smectite (samples BA1, PS2 and PS3) with a heterogeneous structure was investigated at pH 4 (I = 0.02 M) and pH 6 (I = 0.2 M) in batch experiments, with the aim of evaluating the influence of pH, layer charge location and crystal thickness distribution. The mean crystal thickness of the smectite crystallites used in the sorption experiments ranges from 4.8 nm (sample PS2) to 5.1 nm (sample PS3) and 7.4 nm (sample BA1). Smaller crystallites have higher total surface area and sorption capacity, and octahedral charge location favors higher sorption capacity. The Freundlich, Langmuir and Sips sorption isotherms were used to model the sorption experiments. The surface complexation and cation exchange reactions were modeled using the PHREEQC code to describe the UO2(2+) sorption on smectite. The amount of UO2(2+) adsorbed on the smectite samples decreased significantly at pH 6 and higher ionic strength, where the sorption mechanism was restricted to the edge sites of the smectite. Two binding energy components at 380.8 ± 0.3 and 382.2 ± 0.3 eV, assigned to hydrated UO2(2+) adsorbed by cation exchange and by inner-sphere complexation on the external sites at pH 4, were identified after deconvolution of the U4f7/2 peak by X-ray photoelectron spectroscopy. In addition, two new binding energy components at 380.3 ± 0.3 and 381.8 ± 0.3 eV, assigned to AlOUO2(+) and SiOUO2(+) surface species, were observed at pH 6.
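
    The isotherm models named above reduce to short closed forms that can be fitted directly. A sketch on synthetic sorption points (not the paper's data; initial parameter guesses are assumed):

    ```python
    # Fit Langmuir q = qmax*K*c/(1+K*c) and Freundlich q = Kf*c**n to synthetic
    # sorption data; the Sips model interpolates between the two.
    import numpy as np
    from scipy.optimize import curve_fit

    c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium conc., mg/L
    q = np.array([1.8, 3.1, 4.9, 7.6, 9.2, 10.4])    # sorbed amount, mg/g

    langmuir = lambda c, qmax, K: qmax * K * c / (1 + K * c)
    freundlich = lambda c, Kf, n: Kf * c ** n

    p_l, _ = curve_fit(langmuir, c, q, p0=[12.0, 0.3])
    p_f, _ = curve_fit(freundlich, c, q, p0=[2.0, 0.5])
    print("Langmuir qmax, K :", np.round(p_l, 3))
    print("Freundlich Kf, n :", np.round(p_f, 3))
    ```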

  15. A Fault Location Algorithm for Two-End Series-Compensated Double-Circuit Transmission Lines Using the Distributed Parameter Line Model

    SciTech Connect

    Kang, Ning; Gombos, Gergely; Mousavi, Mirrasoul J.; Feng, Xiaoming

    2017-01-01

    A new fault location algorithm for two-end series-compensated double-circuit transmission lines utilizing unsynchronized two-terminal current phasors and local voltage phasors is presented in this paper. The distributed parameter line model is adopted to take into account the shunt capacitance of the lines. The mutual coupling between the parallel lines in the zero-sequence network is also considered. The boundary conditions under different fault types are used to derive the fault location formulation. The developed algorithm directly uses the local voltage phasors on the line side of the series compensation (SC) and metal oxide varistor (MOV). However, when potential transformers are not installed on the line side of the SC and MOVs at the local terminal, these measurements can be calculated from the local terminal bus voltage and currents by estimating the voltages across the SC and MOVs. MATLAB SimPowerSystems is used to generate cases under diverse fault conditions to evaluate accuracy. The simulation results show that the proposed algorithm is qualified for practical implementation.
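
    To make the distributed-parameter idea concrete, here is a heavily simplified single-phase, synchronized-phasor sketch (the paper's unsynchronized, double-circuit, series-compensated formulation is considerably more involved): the fault distance is the point at which the fault voltage computed from the two terminals agrees. Line constants and phasors below are invented:

    ```python
    # Two-terminal fault location on a distributed-parameter line: propagate
    # voltage/current phasors from each end and find the distance where the
    # two fault-voltage estimates match.
    import numpy as np

    z = 0.05 + 0.5j    # series impedance, ohm/km (assumed)
    y = 3e-6j          # shunt admittance, S/km (assumed)
    L = 200.0          # line length, km
    gamma, Zc = np.sqrt(z * y), np.sqrt(z / y)

    def propagate(V, I, x):
        """Phasors a distance x down the line (I measured into the line)."""
        return (V * np.cosh(gamma * x) - Zc * I * np.sinh(gamma * x),
                I * np.cosh(gamma * x) - (V / Zc) * np.sinh(gamma * x))

    # Synthesize a consistent fault at d_true = 130 km (toy phasors).
    Vs, Is, d_true, I_fault = 1.0 + 0j, 0.004 - 0.002j, 130.0, 0.006 - 0.004j
    Vf, I_sf = propagate(Vs, Is, d_true)
    Vr, I_toR = propagate(Vf, I_sf - I_fault, L - d_true)
    Ir = -I_toR                                    # measured into the line at R

    # Scan d: the fault voltage computed from both ends must agree at the fault.
    d = np.linspace(0.0, L, 4001)
    mismatch = np.abs(propagate(Vs, Is, d)[0] - propagate(Vr, Ir, L - d)[0])
    print("estimated fault distance:", d[np.argmin(mismatch)], "km")
    ```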

  16. Interactions of microbial biofilms with toxic trace metals; 2: Prediction and verification of an integrated computer model of lead (II) distribution in the presence of microbial activity

    SciTech Connect

    Hsieh, K.M.; Murgel, G.A.; Lion, L.W.; Shuler, M.L. )

    1994-06-20

    The interfacial interactions of a toxic trace metal, Pb, with a surface modified by a marine film-forming bacterium, Pseudomonas atlantica, were predicted by a structured biofilm model used in conjunction with a chemical speciation model. The validity of the integrated model was tested for batch and continuous operations. Dynamic responses of the biophase due to transient lead concentration increases were also simulated. The reasonable predictions achieved by the model demonstrate its utility in describing trace metal distributions in complex systems where the adsorption properties of inorganic surfaces are modified by adherent bacteria and bacterial production of extracellular polymers.

  17. Pediatricians’ Practice Location Choice—Evaluating the Effect of Japan’s 2004 Postgraduate Training Program on the Spatial Distribution of Pediatricians

    PubMed Central

    Sakai, Rie; Fink, Günther; Kawachi, Ichiro

    2014-01-01

    Objectives To explore determinants of change in pediatrician supply in Japan, and examine impacts of a 2004 reform of postgraduate medical education on pediatricians’ practice location choice. Methods Data were compiled from secondary data sources. The dependent variable was the change in the number of pediatricians at the municipality (“secondary tier of medical care” [STM]) level. To analyze the determinants of pediatrician location choices, we considered the following predictors: initial ratio of pediatricians per 1000 children under five years of age (pediatrician density) and under-5 mortality as measures of local area need, as well as measures of residential quality. Ordinary least-squares regression models were used to estimate the associations. A coefficient equality test was performed to examine differences in predictors before and after 2004. Basic comparisons of pediatrician coverage in the top and bottom 10% of STMs were conducted to assess inequality in pediatrician supply. Results Increased supply was inversely associated with baseline pediatrician density both in the pre-period and post-period. Estimated impact of pediatrician density declined over time (P = 0.026), while opposite trends were observed for measures of residential quality. More specifically, urban centers and the SES composite index were positively associated with pediatrician supply for the post-period, but no such associations were found for the pre-period. Inequality in pediatrician distribution increased substantially after the reform, with the best-served 10% of communities benefitting from five times the pediatrician coverage compared to the least-served 10%. Conclusions Residential quality increasingly became a function of location preference rather than public health needs after the reform. New placement schemes should be developed to achieve more equity in access to pediatric care. PMID:24681844

  18. Location | FNLCR

    Cancer.gov

    The Frederick National Laboratory for Cancer Research campus is located 50 miles northwest of Washington, D.C., and 50 miles west of Baltimore, Maryland, in Frederick, Maryland. Satellite locations include leased and government facilities extending s

  19. Alu and L1 sequence distributions in Xq24-q28 and their comparative utility in YAC contig assembly and verification

    SciTech Connect

    Porta, G.; Zucchi, I.; Schlessinger, D.; Hillier, L.; Green, P.; Nowotny, V.; D`Urso, M.

    1993-05-01

    The contents of Alu- and L1-containing TaqI restriction fragments were assessed by Southern blot analyses across YAC contigs already assembled by other means and localized within Xq24-q28. Fingerprinting patterns of YACs in contigs were concordant. Using software based on that of M. V. Olson et al. to analyze digitized data on fragment sizes, fingerprinting itself could establish matches among about 40% of a test group of 435 YACs. At 100-kb resolution, both repetitive elements were found throughout the region, with no apparent enrichment of Alu or L1 in DNA of G compared to that found in R bands. However, consistent with a random overall distribution, delimited regions of up to 100 kb contained clusters of repetitive elements. The local concentrations may help to account for the reported differential hybridization of Alu and L1 probes to segments of metaphase chromosomes. 40 refs., 6 figs., 2 tabs.

  20. ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    This presentation will be given at the EPA Science Forum 2005 in Washington, DC. The Environmental Technology Verification Program (ETV) was initiated in 1995 to speed implementation of new and innovative commercial-ready environmental technologies by providing objective, 3rd pa...

  2. Computer Graphics Verification

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Video processing creates technical animation sequences using studio quality equipment to realistically represent fluid flow over space shuttle surfaces, helicopter rotors, and turbine blades. Computer systems co-op Tim Weatherford is shown performing computer graphics verification, as part of a Co-op brochure.

  3. FPGA Verification Accelerator (FVAX)

    NASA Technical Reports Server (NTRS)

    Oh, Jane; Burke, Gary

    2008-01-01

    Is verification acceleration possible? Increasing the visibility of the internal nodes of the FPGA results in much faster debug time, and forcing internal signals directly allows a problem condition to be set up very quickly. Is this all? No; this is part of a comprehensive effort to improve the JPL FPGA design and V&V process.

  4. Multibody modeling and verification

    NASA Technical Reports Server (NTRS)

    Wiens, Gloria J.

    1989-01-01

    A summary of a ten-week project on flexible multibody modeling, verification and control is presented. Emphasis was on the need for experimental verification. A literature survey was conducted for gathering information on the existence of experimental work related to flexible multibody systems. The first portion of the assigned task encompassed the modeling aspects of flexible multibodies that can undergo large angular displacements. Research in the area of modeling aspects was also surveyed, with special attention given to the component mode approach. Resulting from this is a research plan on various modeling aspects to be investigated over the next year. The relationship between the large angular displacements, boundary conditions, mode selection, and system modes is of particular interest. The other portion of the assigned task was the generation of a test plan for experimental verification of analytical and/or computer analysis techniques used for flexible multibody systems. Based on current and expected frequency ranges of flexible multibody systems to be used in space applications, an initial test article was selected and designed. A preliminary TREETOPS computer analysis was run to ensure frequency content in the low frequency range, 0.1 to 50 Hz. The initial specifications of experimental measurement and instrumentation components were also generated. Resulting from this effort is the initial multi-phase plan for a Ground Test Facility of Flexible Multibody Systems for Modeling Verification and Control. The plan focuses on the Multibody Modeling and Verification (MMV) Laboratory. General requirements of the Unobtrusive Sensor and Effector (USE) and the Robot Enhancement (RE) laboratories were considered during the laboratory development.

  5. Verification of Anderson superexchange in MnO via magnetic pair distribution function analysis and ab initio theory

    SciTech Connect

    Benjamin A. Frandsen; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.

    2016-05-11

    Here, we present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ~1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experiment and confirmed by ab initio theory.

  7. Spectroscopic verification of zinc absorption and distribution in the desert plant Prosopis juliflora-velutina (velvet mesquite) treated with ZnO nanoparticles

    PubMed Central

    Hernandez-Viezcas, J.A.; Castillo-Michel, H.; Servin, A.D.; Peralta-Videa, J.R.; Gardea-Torresdey, J.L.

    2012-01-01

    The impact of metal nanoparticles (NPs) on biological systems, especially plants, is still not well understood. The aim of this research was to determine the effects of zinc oxide (ZnO) NPs in velvet mesquite (Prosopis juliflora-velutina). Mesquite seedlings were grown for 15 days in hydroponics with ZnO NPs (10 nm) at concentrations varying from 500 to 4000 mg L−1. Zinc concentrations in roots, stems and leaves were determined by inductively coupled plasma optical emission spectroscopy (ICP-OES). Plant stress was examined by the specific activity of catalase (CAT) and ascorbate peroxidase (APOX); while the biotransformation of ZnO NPs and Zn distribution in tissues was determined by X-ray absorption spectroscopy (XAS) and micro X-ray fluorescence (μXRF), respectively. ICP-OES results showed that Zn concentrations in tissues (2102 ± 87, 1135 ± 56, and 628 ± 130 mg kg−1 d wt in roots, stems, and leaves, respectively) were found at 2000 mg ZnO NPs L−1. Stress tests showed that ZnO NPs increased CAT in roots, stems, and leaves, while APOX increased only in stems and leaves. XANES spectra demonstrated that ZnO NPs were not present in mesquite tissues, while Zn was found as Zn(II), resembling the spectra of Zn(NO3)2. The μXRF analysis confirmed the presence of Zn in the vascular system of roots and leaves in ZnO NP treated plants. PMID:22820414

  8. Improved Verification for Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Powell, Mark A.

    2008-01-01

    Aerospace systems are subject to many stringent performance requirements to be verified with low risk. This report investigates verification planning using conditional approaches vice the standard classical statistical methods, and usage of historical surrogate data for requirement validation and in verification planning. The example used in this report to illustrate the results of these investigations is a proposed mission assurance requirement with the concomitant maximum acceptable verification risk for the NASA Constellation Program Orion Launch Abort System (LAS). This report demonstrates the following improvements: 1) verification planning using conditional approaches vice classical statistical methods results in plans that are more achievable and feasible; 2) historical surrogate data can be used to bound validation of performance requirements; and, 3) incorporation of historical surrogate data in verification planning using conditional approaches produces even less costly and more reasonable verification plans. The procedures presented in this report may produce similar improvements and cost savings in verification for any stringent performance requirement for an aerospace system.
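
    A hedged sketch of what a "conditional" (here, Bayesian) treatment of verification risk can look like, as opposed to a classical point estimate; this is our illustration, not the report's actual method, and all numbers are invented:

    ```python
    # Posterior probability that reliability exceeds a requirement after a test
    # campaign, using a Beta posterior with a uniform prior. Illustrative only.
    from scipy.stats import beta

    requirement = 0.95             # required reliability
    n_trials, n_failures = 30, 0   # hypothetical test campaign
    posterior = beta(1 + n_trials - n_failures, 1 + n_failures)
    print("P(reliability > requirement) =", round(1 - posterior.cdf(requirement), 3))
    ```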

  9. Verification of the databases EXFOR and ENDF

    NASA Astrophysics Data System (ADS)

    Berton, Gottfried; Damart, Guillaume; Cabellos, Oscar; Beauzamy, Bernard; Soppera, Nicolas; Bossant, Manuel

    2017-09-01

    The objective of this work is the verification of large experimental (EXFOR) and evaluated nuclear reaction databases (JEFF, ENDF, JENDL, TENDL…). The work is applied to neutron reactions in EXFOR data, including threshold reactions, isomeric transitions, angular distributions and data in the resonance region for both isotopes and natural elements. Finally, a comparison of the resonance integrals compiled in the EXFOR database with those derived from the evaluated libraries is also performed.
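
    One such consistency check can be sketched directly: numerically integrate a pointwise cross section above the cadmium cutoff to get the resonance integral RI = ∫ σ(E) dE/E, and compare with the compiled value. For a pure 1/v cross section the analytic answer is 0.45 times the 2200 m/s value, which makes a convenient self-test; the cross section below is hypothetical:

    ```python
    # Resonance integral of a pure 1/v cross section from 0.5 eV upward,
    # compared with the analytic result RI = 0.45 * sigma_2200.
    import numpy as np

    sigma_2200 = 10.0                             # barn, hypothetical thermal value
    E = np.logspace(np.log10(0.5), 7, 20000)      # eV, 0.5 eV .. 10 MeV
    sigma = sigma_2200 * np.sqrt(0.0253 / E)      # 1/v shape
    f = sigma / E
    ri = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))   # trapezoidal rule
    print(f"numeric RI = {ri:.3f} b, analytic 0.45*sigma = {0.45 * sigma_2200:.3f} b")
    ```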

  10. Operational gunshot location system

    NASA Astrophysics Data System (ADS)

    Showen, Robert

    1997-02-01

    The nation's first operational trial of a gunshot location system is underway in Redwood City, California. The system uses acoustic sensors widely distributed over an impacted community. The impulses received at each sensor allow triangulation of the gunfire location and prompt police dispatch. A computer display shows gunfire location superimposed on a map showing property boundaries. Police are responding to system events immediately and in a community-policing investigative role.
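
    The triangulation step can be sketched as a least-squares fit over the times of arrival at the distributed sensors; the sensor layout, sound speed, and timings below are synthetic, not the deployed system's:

    ```python
    # Gunshot localization from arrival times at distributed acoustic sensors:
    # solve for (x, y) and the unknown emission time by nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    C = 343.0                                   # speed of sound, m/s
    sensors = np.array([[0, 0], [800, 0], [800, 600], [0, 600], [400, 300]], float)
    true_src = np.array([250.0, 420.0])
    t0 = 12.0                                   # unknown emission time, s
    arrivals = t0 + np.linalg.norm(sensors - true_src, axis=1) / C

    def residuals(p):
        x, y, t = p
        return t + np.linalg.norm(sensors - [x, y], axis=1) / C - arrivals

    fit = least_squares(residuals, x0=[400.0, 300.0, arrivals.min() - 1.0])
    print("estimated source:", fit.x[:2].round(1), "m; true:", true_src)
    ```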

  11. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples; Jerome Eyer

    2001-05-01

    The Earth Sciences and Resources Institute, University of South Carolina is conducting a 14-month proof-of-concept study to determine the location and distribution of subsurface Dense Nonaqueous Phase Liquid (DNAPL) carbon tetrachloride (CCl4) contamination at the 216-Z-9 crib, 200 West area, Department of Energy (DOE) Hanford Site, Washington by use of two-dimensional high resolution seismic reflection surveys and borehole geophysical data. The study makes use of recent advances in seismic reflection amplitude versus offset (AVO) technology to directly detect the presence of subsurface DNAPL. The techniques proposed are a noninvasive means towards site characterization and direct free-phase DNAPL detection. This report covers the results of Task 3 and a change of scope of Tasks 4-6. Task 1 contains site evaluation and seismic modeling studies. The site evaluation consists of identifying and collecting preexisting geological and geophysical information regarding subsurface structure and the presence and quantity of DNAPL. The seismic modeling studies were undertaken to determine the likelihood that an AVO response exists and its probable manifestation. Task 2 is the design and acquisition of 2-D seismic reflection data designed to image areas of probable high concentration of DNAPL. Task 3 is the processing and interpretation of the 2-D data. Tasks 4, 5, and 6 were the design, acquisition, processing, and interpretation of a three-dimensional (3-D) seismic survey at the Z-9 crib area, 200 West Area, Hanford.

  12. Motor activity (exploration) and formation of home bases in mice (C57BL/6) influenced by visual and tactile cues: modification of movement distribution, distance, location, and speed.

    PubMed

    Clark, Benjamin J; Hamilton, Derek A; Whishaw, Ian Q

    2006-04-15

    The motor activity of mice in tests of "exploration" is organized. Mice establish home bases, operationally defined as places where they spend long periods of time, near physical objects and nesting material from which they make excursions. This organization raises the question of the extent to which mouse motoric activity is modulated by innate predispositions versus environmental influences. Here the influence of contextual cues (visual and tactile) on the motor activity of C57BL/6 mice was examined: (1) on an open field that had no walls, a partial wall, or a complete wall, (2) in the presence of distinct visual cues, room cues, or in the absence of visual cues (infrared light), and (3) in the presence of configurations of visual and tactile cues. Mice were generally less active in the presence of salient cues and formed home bases near those cues. In addition, movement speed, path distribution, and the number and length of stops were modulated by contextual cues. With repeated tests, mice favored tactile cues over visual cues as their home base locations. Although responses to cues were robust over test days, conditioning to context was generally weak. That the exploratory behavior of mice is affected by experience and context provides insights into performance variability and may prove useful in investigating the genetic and neural influences on mouse behavior.

  13. Pyroclastic Eruptions in a Mars Climate Model: The Effects of Grain Size, Plume Height, Density, Geographical Location, and Season on Ash Distribution

    NASA Astrophysics Data System (ADS)

    Kerber, L. A.; Head, J. W.; Madeleine, J.; Wilson, L.; Forget, F.

    2010-12-01

    Pyroclastic volcanism has played a major role in the geologic history of the planet Mars. In addition to several highland patera features interpreted to be composed of pyroclastic material, there are a number of vast, fine-grained, friable deposits which may have a volcanic origin. The physical processes involved in the explosive eruption of magma, including the nucleation of bubbles, the fragmentation of magma, the incorporation of atmospheric gases, the formation of a buoyant plume, and the fall-out of individual pyroclasts, have been modeled extensively for martian conditions [Wilson, L., J.W. Head (2007), Explosive volcanic eruptions on Mars: Tephra and accretionary lapilli formation, dispersal and recognition in the geologic record, J. Volcanol. Geotherm. Res. 163, 83-97]. We have further developed and expanded this original model in order to take into account differing temperature, pressure, and wind regimes found at different altitudes, at different geographic locations, and during different martian seasons. Using a well-established Mars global circulation model [LMD-GCM, Forget, F., F. Hourdin, R. Fournier, C. Hourdin, O. Talagrand (1999), Improved general circulation models of the martian atmosphere from the surface to above 80 km, J. Geophys. Res. 104, 24,155-24,176] we are able to link the volcanic eruption model of Wilson and Head (2007) to the spatially and temporally dynamic GCM temperature, pressure, and wind profiles to create three-dimensional maps of expected ash deposition on the surface. Here we present results exploring the effects of grain-size distribution, plume height, density of ash, latitude, season, and atmospheric pressure on the areal extent and shape of the resulting ash distribution. Our results show that grain-size distribution and plume height most strongly affect the distance traveled by the pyroclasts from the vent, while latitude and season can have a large effect on the direction in which the pyroclasts travel and the final shape

  14. RESRAD-BUILD verification.

    SciTech Connect

    Kamboj, S.; Yu, C.; Biwer, B. M.; Klett, T.

    2002-01-31

    The results generated by the RESRAD-BUILD code (version 3.0) were verified with hand or spreadsheet calculations using equations given in the RESRAD-BUILD manual for different pathways. For verification purposes, different radionuclides--H-3, C-14, Na-22, Al-26, Cl-36, Mn-54, Co-60, Au-195, Ra-226, Ra-228, Th-228, and U-238--were chosen to test all pathways and models. Tritium, Ra-226, and Th-228 were chosen because of the special tritium and radon models in the RESRAD-BUILD code. Other radionuclides were selected to represent a spectrum of radiation types and energies. Verification of the RESRAD-BUILD code was conducted with an initial check of all the input parameters for correctness against their original source documents. Verification of the calculations was performed external to the RESRAD-BUILD code with Microsoft® Excel to verify all the major portions of the code. In some cases, RESRAD-BUILD results were compared with those of external codes, such as MCNP (Monte Carlo N-particle) and RESRAD. The verification was conducted on a step-by-step basis and used different test cases as templates. The following types of calculations were investigated: (1) source injection rate, (2) air concentration in the room, (3) air particulate deposition, (4) radon pathway model, (5) tritium model for volume source, (6) external exposure model, (7) different pathway doses, and (8) time dependence of dose. Some minor errors were identified in version 3.0; these errors have been corrected in later versions of the code. Some possible improvements in the code were also identified.

  15. Robust verification analysis

    SciTech Connect

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-15

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
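
    A minimal numerical sketch of the core idea, under assumed forms: fit the single-term error ansatz f(h) ≈ f_exact + C h^p on each triple of successive mesh resolutions, clip the recovered order to expert-judgment bounds, and summarize with median statistics (median and median absolute deviation) rather than means. This is an illustrative reading of the methodology, not the authors' implementation.

      import numpy as np

      def robust_order(h, f, p_bounds=(0.5, 4.0)):
          """h: mesh sizes, coarse to fine, with a constant refinement ratio;
          f: corresponding solution values. Returns (order, extrapolated value),
          each as a (median, median-absolute-deviation) pair."""
          p_est, f_est = [], []
          for i in range(len(h) - 2):
              r = h[i] / h[i + 1]                               # refinement ratio
              ratio = (f[i] - f[i + 1]) / (f[i + 1] - f[i + 2])
              if ratio <= 0:                                    # non-monotone triple
                  continue
              p = np.clip(np.log(ratio) / np.log(r), *p_bounds) # expert-judgment bounds
              p_est.append(p)
              f_est.append(f[i + 2] + (f[i + 2] - f[i + 1]) / (r**p - 1.0))
          med = lambda a: (np.median(a), np.median(np.abs(np.array(a) - np.median(a))))
          return med(p_est), med(f_est)

      # Manufactured solution with a second-order error term: f(h) = 1 + 0.3 h^2.
      h = [0.08, 0.04, 0.02, 0.01, 0.005]
      f = [1.0 + 0.3 * hi**2 for hi in h]
      (p, p_mad), (fx, fx_mad) = robust_order(h, f)
      print(f"order ~ {p:.2f} +/- {p_mad:.2f}; extrapolated value ~ {fx:.6f}")

    The median/MAD summary is what guards against a single anomalous triple dominating the estimate, in the same spirit as the robust statistics described above.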

  16. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.

  17. Microcode Verification Project.

    DTIC Science & Technology

    1980-05-01

    MICROCODE VERIFICATION PROJECT, University of Southern California, Stephen D... in the production, testing, and maintenance of Air Force software. This effort was undertaken in response to that goal. The objective of the effort was... rather than hard wiring, is a recent development in computer technology. Hardware diagnostics do not fulfill testing requirements for these computers

  18. TFE Verification Program

    SciTech Connect

    Not Available

    1990-03-01

    The objective of the semiannual progress report is to summarize the technical results obtained during the latest reporting period. The information presented herein will include evaluated test data, design evaluations, the results of analyses and the significance of results. The program objective is to demonstrate the technology readiness of a TFE suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range, and a full-power life of 7 years. The TFE Verification Program builds directly on the technology and data base developed in the 1960s and 1970s in an AEC/NASA program, and in the SP-100 program conducted in 1983, 1984 and 1985. In the SP-100 program, the attractive features of thermionic power conversion technology were recognized, but concern was expressed over the lack of fast reactor irradiation data. The TFE Verification Program addresses this concern. The general logic and strategy of the program to achieve its objectives is shown on Fig. 1-1. Five prior programs form the basis for the TFE Verification Program: (1) the AEC/NASA program of the 1960s and early 1970s; (2) the SP-100 concept development program; (3) the SP-100 thermionic technology program; (4) the thermionic irradiations program in TRIGA in FY-86; and (5) the Thermionic Technology Program in 1986 and 1987. 18 refs., 64 figs., 43 tabs.

  19. NGSI: IAEA Verification of UF6 Cylinders

    SciTech Connect

    Curtis, Michael M.

    2012-06-05

    The International Atomic Energy Agency (IAEA) is often unaware of the location of declared uranium hexafluoride (UF6) cylinders following verification, because cylinders are not typically tracked onsite or off. This paper assesses various methods the IAEA uses to verify cylinder gross defects, and how the task could be eased through the use of improved identification and monitoring. The assessment is restricted to current verification methods together with one that has been applied on a trial basis: short-notice random inspections coupled with mailbox declarations. This paper is part of the NNSA Office of Nonproliferation and International Security's Next Generation Safeguards Initiative (NGSI) program to investigate the concept of a global monitoring scheme that uniquely identifies and tracks UF6 cylinders.

  20. Visual inspection for CTBT verification

    SciTech Connect

    Hawkins, W.; Wohletz, K.

    1997-03-01

    On-site visual inspection will play an essential role in future Comprehensive Test Ban Treaty (CTBT) verification. Although seismic and remote sensing techniques are the best understood and most developed methods for detection of evasive testing of nuclear weapons, visual inspection can greatly augment the certainty and detail of understanding provided by these more traditional methods. Not only can visual inspection offer "ground truth" in cases of suspected nuclear testing, but it also can provide accurate source location and testing media properties necessary for detailed analysis of seismic records. For testing in violation of the CTBT, an offending party may attempt to conceal the test, which most likely will be achieved by underground burial. While such concealment may not prevent seismic detection, evidence of test deployment, location, and yield can be disguised. In this light, if a suspicious event is detected by seismic or other remote methods, visual inspection of the event area is necessary to document any evidence that might support a claim of nuclear testing and provide data needed to further interpret seismic records and guide further investigations. However, the methods for visual inspection are not widely known nor appreciated, and experience is presently limited. Visual inspection can be achieved by simple, non-intrusive means, primarily geological in nature, and it is the purpose of this report to describe the considerations, procedures, and equipment required to field such an inspection.

  1. Verification of ICESat-2/ATLAS Science Receiver Algorithm Onboard Databases

    NASA Astrophysics Data System (ADS)

    Carabajal, C. C.; Saba, J. L.; Leigh, H. W.; Magruder, L. A.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    NASA's ICESat-2 mission will fly the Advanced Topographic Laser Altimetry System (ATLAS) instrument on a 3-year mission scheduled to launch in 2016. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz and a 6-spot pattern on the Earth's surface. A set of onboard Receiver Algorithms will perform signal processing to reduce the data rate and data volume to acceptable levels. These algorithms distinguish surface echoes from the background noise, limit the daily data volume, and allow the instrument to telemeter only a small vertical region about the signal. For this purpose, three onboard databases are used: a Surface Reference Map (SRM), a Digital Elevation Model (DEM), and Digital Relief Maps (DRMs). The DEM provides minimum and maximum heights that limit the signal search region of the onboard algorithms, including a margin for errors in the source databases and onboard geolocation. Since the surface echoes will be correlated while noise will be randomly distributed, the signal location is found by histogramming the received event times and identifying the histogram bins with statistically significant counts. Once the signal location has been established, the onboard DRMs will be used to determine the vertical width of the telemetry band about the signal. The University of Texas Center for Space Research (UT-CSR) is developing the ICESat-2 onboard databases, which are currently being tested using preliminary versions and equivalent representations of elevation ranges and relief more recently developed at Goddard Space Flight Center (GSFC). Global and regional elevation models have been assessed in terms of their accuracy using ICESat geodetic control, and have been used to develop equivalent representations of the onboard databases for testing against the UT-CSR databases, with special emphasis on the ice sheet regions. A series of verification checks have been implemented, including
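
    Because surface echoes cluster in height while background photons are spread nearly uniformly, the signal-finding step described above can be mimicked with a simple histogram-and-threshold test. The sketch below is an illustrative stand-in for the flight algorithm, not the onboard code; the photon heights, bin width, and Poisson significance cutoff are all assumed.

      import numpy as np
      from scipy.stats import poisson

      rng = np.random.default_rng(0)
      noise = rng.uniform(-500.0, 500.0, size=2000)   # background photons (m)
      signal = rng.normal(120.0, 0.5, size=300)       # surface echoes near 120 m
      heights = np.concatenate([noise, signal])

      bin_edges = np.arange(-500.0, 501.0, 5.0)       # 5 m vertical bins
      counts, _ = np.histogram(heights, bins=bin_edges)
      lam = np.median(counts)                         # robust background rate per bin
      # Flag bins whose counts are improbably high under Poisson noise,
      # with a Bonferroni-style correction for testing every bin.
      cutoff = poisson.ppf(1.0 - 1e-6 / counts.size, lam)
      surface_bins = np.flatnonzero(counts > cutoff)
      print("surface signal near:", bin_edges[surface_bins], "m")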

  2. Verification of Global Assimilation of Ionospheric Measurements Gauss Markov (GAIM-GM) Model Forecast Accuracy

    DTIC Science & Technology

    2011-09-01

    and location measurements, GPS must take into consideration the ionospheric environment and does so by computing the electron content in the path... VERIFICATION OF GLOBAL ASSIMILATION OF IONOSPHERIC MEASUREMENTS GAUSS MARKOV (GAIM-GM) MODEL FORECAST ACCURACY. THESIS... United States. AFIT/GAP/ENP/11-S01

  3. Quantum money with classical verification

    NASA Astrophysics Data System (ADS)

    Gavinsky, Dmitry

    2014-12-01

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries; this property is not directly related to the possibility of classical verification, yet none of the earlier quantum money constructions is known to possess it.

  4. Quantum money with classical verification

    SciTech Connect

    Gavinsky, Dmitry

    2014-12-04

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries; this property is not directly related to the possibility of classical verification, yet none of the earlier quantum money constructions is known to possess it.

  5. Hardware verification of distributed/adaptive control

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.; Schaechter, D. B.

    1983-01-01

    Adaptive control techniques are studied for their future application to the control of large space structures, where uncertain or changing parameters may destabilize standard control system designs. The approach used is to examine an extended Kalman filter estimator, in which the state vector is augmented with the unknown parameters. The associated Riccati equation is linearized about the case of exact knowledge of the parameters. By assuming that parameter variations occur slowly, the filter complexity is reduced still further. Simulations on a two degree-of-freedom oscillator demonstrate the parameter-tracking capability of the filter, and an implementation on the JPL Flexible Beam Facility using an incorrect model shows the adaptive filter/optimal control to be stable where a standard Kalman filter/optimal control design is unstable.
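
    As a hedged illustration of the augmented-state idea (not the authors' JPL implementation), the sketch below appends an unknown stiffness parameter k to the state of a one degree-of-freedom oscillator and lets an extended Kalman filter estimate it from position measurements; the dynamics, noise levels, and Euler discretization are all assumed.

      import numpy as np

      dt, q, r = 0.01, 1e-6, 1e-4        # step, process noise, measurement noise
      def f(x):                          # x = [position, velocity, stiffness k]
          p, v, k = x
          return np.array([p + dt * v, v - dt * k * p, k])  # k modeled as constant

      def F(x):                          # Jacobian of f
          p, v, k = x
          return np.array([[1.0, dt, 0.0], [-dt * k, 1.0, -dt * p], [0.0, 0.0, 1.0]])

      H = np.array([[1.0, 0.0, 0.0]])    # position-only measurements
      x, P = np.array([1.0, 0.0, 2.0]), np.eye(3)   # filter starts with wrong k
      truth = np.array([1.0, 0.0, 4.0])             # true stiffness is 4
      rng = np.random.default_rng(1)
      for _ in range(4000):
          truth = f(truth)
          z = truth[0] + rng.normal(0.0, np.sqrt(r))
          x, P = f(x), F(x) @ P @ F(x).T + q * np.eye(3)    # predict
          S = H @ P @ H.T + r                               # innovation variance
          K = P @ H.T / S                                   # Kalman gain
          x = x + (K * (z - x[0])).ravel()                  # update state
          P = (np.eye(3) - K @ H) @ P                       # update covariance
      print(f"estimated stiffness k ~ {x[2]:.2f} (true value 4)")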

  6. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements.

    PubMed

    Cassim, Naseem; Smith, Honora; Coetzee, Lindi M; Glencross, Deborah K

    2017-01-01

    CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidean distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
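
    The set-coverage idea can be sketched with a greedy heuristic: repeatedly allocate the candidate site that covers the most still-uncovered facility clusters within the travel time T. This is an illustrative stand-in for the RACL formulation (which is solved as an exact optimization), and every name and drive time in the example is hypothetical.

      def greedy_set_cover(drive_time, sites, clusters, T):
          """drive_time[(site, cluster)] -> hours; pick sites until every
          cluster is within T hours of some chosen site."""
          uncovered, chosen = set(clusters), []
          while uncovered:
              best = max(sites, key=lambda s: sum(drive_time[(s, c)] <= T
                                                  for c in uncovered))
              covered = {c for c in uncovered if drive_time[(best, c)] <= T}
              if not covered:
                  raise ValueError("some clusters cannot be covered within T")
              chosen.append(best)
              uncovered -= covered
          return chosen

      # Hypothetical drive times (hours) from two candidate labs to three clusters.
      dt_hours = {("LabA", "c1"): 1.0, ("LabA", "c2"): 3.5, ("LabA", "c3"): 0.5,
                  ("LabB", "c1"): 2.5, ("LabB", "c2"): 1.0, ("LabB", "c3"): 4.0}
      print(greedy_set_cover(dt_hours, ["LabA", "LabB"], ["c1", "c2", "c3"], T=2.0))

    Tightening T, as in scenarios B and C above, shrinks each site's coverage set, so more sites (including POC sites) must be allocated.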

  7. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements

    PubMed Central

    2017-01-01

    Introduction CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. Methods The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidean distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.

  8. Assessment of total and organic vanadium levels and their bioaccumulation in edible sea cucumbers: tissues distribution, inter-species-specific, locational differences and seasonal variations.

    PubMed

    Liu, Yanjun; Zhou, Qingxin; Xu, Jie; Xue, Yong; Liu, Xiaofang; Wang, Jingfeng; Xue, Changhu

    2016-02-01

    The objective of this study is to investigate the levels, inter-species-specific and locational differences, and seasonal variations of vanadium in sea cucumbers, and to further validate several potential factors controlling the distribution of metals in sea cucumbers. Vanadium levels were evaluated in samples of edible sea cucumbers and were shown to exhibit differences across seasons, species and sampling sites. High vanadium concentrations were measured in the sea cucumbers, and all of the vanadium detected was in an organic form. Mean vanadium concentrations were considerably higher in the blood (sea cucumber) than in the other studied tissues. The highest concentration of vanadium (2.56 μg g(-1)), as well as a higher proportion of organic vanadium (85.5%), was observed in the Holothuria scabra samples compared with all other samples. Vanadium levels in Apostichopus japonicus from Bohai Bay and the Yellow Sea show marked seasonal variations. Average values of 1.09 μg g(-1) of total vanadium and 0.79 μg g(-1) of organic vanadium were obtained in various species of sea cucumbers. Significant positive correlations between vanadium in the seawater and V org in the sea cucumber (r = 81.67%, p = 0.00), as well as between vanadium in the sediment and V org in the sea cucumber (r = 77.98%, p = 0.00), were observed. Vanadium concentrations depend on the seasons (salinity, temperature), species, sampling sites and seawater environment (seawater, sediment). Given the adverse toxicological effects of inorganic vanadium and the positive roles of organic vanadium in controlling the development of diabetes in humans, a regular monitoring programme for vanadium content in edible sea cucumbers can be recommended.

  9. Systematic analysis of biological and physical limitations of proton beam range verification with offline PET/CT scans

    NASA Astrophysics Data System (ADS)

    Knopf, A.; Parodi, K.; Bortfeld, T.; Shih, H. A.; Paganetti, H.

    2009-07-01

    The clinical use of offline positron emission tomography/computed tomography (PET/CT) scans for proton range verification is currently under investigation at the Massachusetts General Hospital (MGH). Validation is achieved by comparing measured activity distributions, acquired in patients after receiving one fraction of proton irradiation, with corresponding Monte Carlo (MC) simulated distributions. Deviations between measured and simulated activity distributions can either reflect errors during the treatment chain from planning to delivery or be caused by various inherent challenges of the offline PET/CT verification method. We performed a systematic analysis to assess the impact of the following aspects on the feasibility and accuracy of the offline PET/CT method: (1) biological washout processes, (2) patient motion, (3) Hounsfield unit (HU) based tissue classification for the simulation of the activity distributions and (4) tumor site specific aspects. It was found that the spatial reproducibility of the measured activity distributions is within 1 mm. However, the feasibility of range verification is restricted to a limited number of positions and tumor sites. Washout effects introduce discrepancies between the measured and simulated ranges of about 4 mm at positions where the proton beam stops in soft tissue. Motion causes spatial deviations of up to 3 cm between measured and simulated activity distributions in abdominopelvic tumor cases. In these latter cases, the MC simulated activity distributions were found to be limited to about 35% accuracy in absolute values and about 2 mm in spatial accuracy, depending on the accuracy of the conversion of HUs into the physical and biological parameters of the irradiated tissue. Besides, for further specific tumor locations, the beam arrangement, the limited accuracy of rigid co-registration and organ movements can prevent the success of PET/CT range verification. All the addressed factors explain why the proton beam range can

  10. Production readiness verification testing

    NASA Technical Reports Server (NTRS)

    James, A. M.; Bohon, H. L.

    1980-01-01

    A Production Readiness Verification Testing (PRVT) program has been established to determine if structures fabricated from advanced composites can be committed on a production basis to commercial airline service. The program utilizes subcomponents which reflect the variabilities in structure that can realistically be expected from current production and quality control technology to estimate the production qualities, variation in static strength, and durability of advanced composite structures. The results of the static tests and a durability assessment after one year of continuous load/environment testing of twenty-two duplicates of each of two structural components (a segment of the front spar and cover of a vertical stabilizer box structure) are discussed.

  11. Automating engineering verification in ALMA subsystems

    NASA Astrophysics Data System (ADS)

    Ortiz, José; Castillo, Jorge

    2014-08-01

    The Atacama Large Millimeter/submillimeter Array is an interferometer comprising 66 individual high-precision antennas located at over 5000 meters altitude in the north of Chile. Several complex electronic subsystems need to be meticulously tested at different stages of an antenna commissioning, both independently and when integrated together. The first subsystem integration takes place at the Operations Support Facilities (OSF), at an altitude of 3000 meters. The second occurs at the high-altitude Array Operations Site (AOS), where combined performance with the Central Local Oscillator (CLO) and Correlator is also assessed. In addition, there are several other events requiring complete or partial verification of compliance with instrument specifications, such as parts replacements, calibration, relocation within the AOS, preventive maintenance and troubleshooting due to poor performance in scientific observations. Restricted engineering time allocation and the constant pressure of minimizing downtime in a 24/7 astronomical observatory impose the need to complete (and report) the aforementioned verifications in the least possible time. Array-wide disturbances, such as global power interruptions and the subsequent recovery, add the challenge of executing this checkout on multiple antenna elements at once. This paper presents the outcome of automating engineering verification setup, execution, notification and reporting in ALMA, and how these efforts have resulted in a dramatic reduction of both the time and the operator training required. The Signal Path Connectivity (SPC) checkout is introduced as a notable case of such automation.

  12. Deductive Verification of Cryptographic Software

    NASA Technical Reports Server (NTRS)

    Almeida, Jose Barcelar; Barbosa, Manuel; Pinto, Jorge Sousa; Vieira, Barbara

    2009-01-01

    We report on the application of an off-the-shelf verification platform to the RC4 stream cipher cryptographic software implementation (as available in the OpenSSL library), and introduce a deductive verification technique based on self-composition for proving the absence of error propagation.
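
    Self-composition reduces a two-run property (such as the absence of secret-dependent error propagation) to an ordinary safety property of a program composed with a renamed copy of itself. The toy sketch below illustrates the shape of the reduction only; the stand-in function is hypothetical and is not the RC4 implementation verified by the authors.

      def program(public, secret):
          # Stand-in for the routine under verification; by construction its
          # output does not depend on `secret`.
          return (public * 3 + 1) % 17

      def self_composed_check(public, secret_a, secret_b):
          # The composed program runs two copies that agree on public inputs;
          # the assertion is the safety property a deductive verifier would prove
          # for all inputs, rather than test on a few.
          assert program(public, secret_a) == program(public, secret_b)

      for p in range(17):                 # exhaustive check over a toy input space
          self_composed_check(p, secret_a=5, secret_b=11)
      print("2-safety property holds on all tested inputs")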

  13. Software Verification and Validation Procedure

    SciTech Connect

    Olund, Thomas S.

    2008-09-15

    This Software Verification and Validation procedure provides the action steps for the Tank Waste Information Network System (TWINS) testing process. The primary objective of the testing process is to provide assurance that the software functions as intended, and meets the requirements specified by the client. Verification and validation establish the primary basis for TWINS software product acceptance.

  14. HDL to verification logic translator

    NASA Astrophysics Data System (ADS)

    Gambles, J. W.; Windley, P. J.

    The increasingly higher number of transistors possible in VLSI circuits compounds the difficulty in ensuring correct designs. As the number of possible test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools that are in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  15. HDL to verification logic translator

    NASA Technical Reports Server (NTRS)

    Gambles, J. W.; Windley, P. J.

    1992-01-01

    The increasingly higher number of transistors possible in VLSI circuits compounds the difficulty in ensuring correct designs. As the number of possible test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools that are in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  16. TFE Verification Program

    SciTech Connect

    Not Available

    1993-05-01

    The objective of the semiannual progress report is to summarize the technical results obtained during the latest reporting period. The information presented herein will include evaluated test data, design evaluations, the results of analyses and the significance of results. The program objective is to demonstrate the technology readiness of a TFE (thermionic fuel element) suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range, and a full-power life of 7 years. The TFE Verification Program builds directly on the technology and data base developed in the 1960s and early 1970s in an AEC/NASA program, and in the SP-100 program conducted in 1983, 1984 and 1985. In the SP-100 program, the attractive features of thermionic power conversion technology were recognized but concern was expressed over the lack of fast reactor irradiation data. The TFE Verification Program addresses this concern.

  17. In Vivo Proton Beam Range Verification Using Spine MRI Changes

    SciTech Connect

    Gensheimer, Michael F.; Yock, Torunn I.; Liebsch, Norbert J.; Sharp, Gregory C.; Paganetti, Harald; Madan, Neel; Grant, P. Ellen; Bortfeld, Thomas

    2010-09-01

    Purpose: In proton therapy, uncertainty in the location of the distal dose edge can lead to cautious treatment plans that reduce the dosimetric advantage of protons. After radiation exposure, vertebral bone marrow undergoes fatty replacement that is visible on magnetic resonance imaging (MRI). This presents an exciting opportunity to observe radiation dose distribution in vivo. We used quantitative spine MRI changes to precisely detect the distal dose edge in proton radiation patients. Methods and Materials: We registered follow-up T1-weighted MRI images to planning computed tomography scans from 10 patients who received proton spine irradiation. A radiation dose-MRI signal intensity curve was created using the lateral beam penumbra in the sacrum. This curve was then used to measure range errors in the lumbar spine. Results: In the lateral penumbra, there was an increase in signal intensity with higher dose throughout the full range of 0-37.5 Gy (RBE). In the distal fall-off region, the beam sometimes appeared to penetrate farther than planned. The mean overshoot in 10 patients was 1.9 mm (95% confidence interval, 0.8-3.1 mm), on the order of the uncertainties inherent to our range verification method. Conclusions: We have demonstrated in vivo proton range verification using posttreatment spine MRI changes. Our analysis suggests the presence of a systematic overshoot of a few millimeters in some proton spine treatments, but the range error does not exceed the uncertainty incorporated into the treatment planning margin. It may be possible to extend our technique to MRI sequences that show early bone marrow changes, enabling adaptive treatment modification.

  18. Approaches to wind-resource verification

    SciTech Connect

    Barchet, W.R.

    1981-07-01

    Verification of the regional wind energy resource assessments produced by the Pacific Northwest Laboratory addresses the question: Is the magnitude of the resource given in the assessments truly representative of the area of interest? Approaches using qualitative indicators of wind speed (tree deformation, eolian features), old and new data of opportunity not at sites specifically chosen for their exposure to the wind, and data by design from locations specifically selected to be good wind sites are described. Data requirements and evaluation procedures for verifying the resource are discussed.

  19. Cleanup Verification Package for the 118-F-6 Burial Ground

    SciTech Connect

    H. M. Sulloway

    2008-10-02

    This cleanup verification package documents completion of remedial action for the 118-F-6 Burial Ground located in the 100-FR-2 Operable Unit of the 100-F Area on the Hanford Site. The trenches received waste from the 100-F Experimental Animal Farm, including animal manure, animal carcasses, laboratory waste, plastic, cardboard, metal, and concrete debris as well as a railroad tank car.

  20. Cancelable face verification using optical encryption and authentication.

    PubMed

    Taheri, Motahareh; Mozaffari, Saeed; Keshavarzi, Parviz

    2015-10-01

    In a cancelable biometric system, each instance of enrollment is distorted by a transform function, and the output should not be retransformable to the original data. This paper presents a new cancelable face verification system in the encrypted domain. Encrypted facial images are generated by a double random phase encoding (DRPE) algorithm using two keys (RPM1 and RPM2). To make the system noninvertible, a photon counting (PC) method is utilized, which requires a photon distribution mask (PDM) for information reduction. Verification of sparse images that are not recognizable by direct visual inspection is performed by an unconstrained minimum average correlation energy filter. In the proposed method, the encryption keys (RPM1, RPM2, and PDM) are used on the sender side, and the receiver needs only encrypted images and correlation filters. In this manner, the system preserves privacy even if the correlation filters are obtained by an adversary. Performance of the PC-DRPE verification system is evaluated under illumination variation, pose changes, and facial expression. Experimental results show that utilizing encrypted images not only strengthens security but also enhances verification performance. This improvement can be attributed to the fact that, in the proposed system, the face verification problem is converted to a key verification task.
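
    The DRPE step itself can be sketched in a few lines: the image is multiplied by one random phase mask, Fourier transformed, multiplied by a second mask, and transformed back; because the masks have unit modulus, the holder of both keys can invert each step exactly. The photon-counting step that makes the scheme noninvertible (and cancelable) is omitted here, and the array sizes are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))                        # stand-in for a face image
      rpm1 = np.exp(2j * np.pi * rng.random(img.shape)) # random phase mask 1 (key)
      rpm2 = np.exp(2j * np.pi * rng.random(img.shape)) # random phase mask 2 (key)

      # Encryption: mask, Fourier transform, mask again, inverse transform.
      encrypted = np.fft.ifft2(np.fft.fft2(img * rpm1) * rpm2)

      # Decryption with the correct keys inverts each step (unit-modulus masks).
      decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(rpm2)) * np.conj(rpm1)
      assert np.allclose(decrypted.real, img)
      print("round trip is exact without the photon-counting step")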

  1. Trajectory Based Behavior Analysis for User Verification

    NASA Astrophysics Data System (ADS)

    Pao, Hsing-Kuo; Lin, Hong-Yi; Chen, Kuan-Ta; Fadlil, Junaidillah

    Many of our activities on computers require a verification step for authorized access. The goal of verification is to tell the true account owner apart from intruders. We propose a general approach for user verification based on user trajectory inputs. The approach is labor-free for users and is likely to prevent copying or simulation by other, non-authorized users or even automatic programs like bots. Our study focuses on finding the hidden patterns embedded in the trajectories produced by account users. We employ a Markov chain model with Gaussian distributions in its transitions to describe the behavior in the trajectory. To distinguish between two trajectories, we propose a novel dissimilarity measure combined with a manifold-learned tuning for capturing the pairwise relationship. Based on the pairwise relationship, we can plug in any effective classification or clustering method for the detection of unauthorized access. The method can also be applied to the task of recognition, predicting the trajectory type without a pre-defined identity. Given a trajectory input, the results show that the proposed method can accurately verify the user identity, or suggest who owns the trajectory if the input identity is not provided.
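
    A hedged sketch of the modeling step: fit a Gaussian to the step-to-step displacements of an enrolled trajectory (a one-state simplification of the Markov chain with Gaussian transitions described above) and score a claimed trajectory by its average per-step log-likelihood. All data, parameters, and the accept threshold are illustrative.

      import numpy as np

      def fit(traj):
          """traj: (n, 2) array of pointer positions; model the step-to-step
          displacements as a single 2-D Gaussian."""
          d = np.diff(traj, axis=0)
          return d.mean(axis=0), np.cov(d.T)

      def log_likelihood(traj, mu, cov):
          d = np.diff(traj, axis=0) - mu
          inv = np.linalg.inv(cov)
          _, logdet = np.linalg.slogdet(cov)
          quad = np.sum(d @ inv * d, axis=1).sum()
          return -0.5 * (quad + len(d) * (logdet + 2.0 * np.log(2.0 * np.pi)))

      # Enroll on the owner's trajectory, then threshold the claimed trajectory's
      # average per-step log-likelihood.
      rng = np.random.default_rng(0)
      owner = np.cumsum(rng.normal([1.0, 0.2], 0.3, size=(500, 2)), axis=0)
      claim = np.cumsum(rng.normal([1.0, 0.2], 0.3, size=(200, 2)), axis=0)
      mu, cov = fit(owner)
      score = log_likelihood(claim, mu, cov) / (len(claim) - 1)
      print("accept" if score > -3.0 else "reject")   # threshold is illustrative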

  2. Consolidated Site (CS) 024 Verification Survey at Former McClellan AFB, Sacramento, California

    DTIC Science & Technology

    2015-03-31

    USAFSAM/OEC), Radiation Health Consulting Branch performed an independent radiological verification survey of the CS 024 hazardous waste site, located on... independent radiological verification survey of the CS 024 hazardous waste site, located on former McClellan AFB, California, on 29-31 July 2014. The... burial of demolition debris and scrap material. Workers frequently used the eastern half of the pit to burn Air Force generated wastes; workers

  3. Space station data management system - A common GSE test interface for systems testing and verification

    NASA Technical Reports Server (NTRS)

    Martinez, Pedro A.; Dunn, Kevin W.

    1987-01-01

    This paper examines the fundamental problems and goals associated with test, verification, and flight-certification of man-rated distributed data systems. First, a summary of the characteristics of modern computer systems that affect the testing process is provided. Then, verification requirements are expressed in terms of an overall test philosophy for distributed computer systems. This test philosophy stems from previous experience that was gained with centralized systems (Apollo and the Space Shuttle), and deals directly with the new problems that verification of distributed systems may present. Finally, a description of potential hardware and software tools to help solve these problems is provided.

  4. [Dosimetric verification of the intensity modulated radiation therapy].

    PubMed

    Zhang, Yuhai; Gao, Yang

    2010-05-01

    To investigate a method for dosimetric verification of intensity-modulated radiation therapy (IMRT). The IMRT treatment plans were designed with the Eclipse TPS and delivered on a Varian Clinac iX linear accelerator with 6 MV X-rays. The absolute point doses were measured using a PTW 0.6 cc ion chamber with a UNIDOS E dosimeter, and the plane dose distributions were measured using a PTW 2D-Array ion chamber in a phantom. The error between the measured and calculated doses at the points of interest was less than 3%. In the gamma analysis of the plane dose distributions (3 mm/3% criterion), the pass rate exceeded 90%. This method of dosimetric verification of IMRT is reliable and efficient in clinical implementation.
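
    For reference, the gamma index combines a distance-to-agreement tolerance (here 3 mm) with a dose-difference tolerance (here 3% of the maximum dose); a measured point passes when some nearby reference point brings the combined metric below 1. The one-dimensional sketch below, with toy profiles, shows the computation; clinical QA software evaluates it on 2-D arrays with interpolation.

      import numpy as np

      def gamma_pass_rate(x, measured, calculated, dta=3.0, dd=0.03):
          """x in mm; doses on a common grid. Returns percent of measured
          points with a (globally normalized) gamma index <= 1."""
          ref = dd * calculated.max()
          gammas = []
          for xi, mi in zip(x, measured):
              g2 = ((x - xi) / dta)**2 + ((calculated - mi) / ref)**2
              gammas.append(np.sqrt(g2.min()))       # best match over all points
          return 100.0 * np.mean(np.array(gammas) <= 1.0)

      x = np.arange(0.0, 100.0, 1.0)                 # 1 mm grid
      calc = np.exp(-((x - 50.0) / 12.0)**2)         # toy calculated profile
      meas = np.exp(-((x - 51.0) / 12.0)**2)         # measurement shifted by 1 mm
      print(f"gamma pass rate: {gamma_pass_rate(x, meas, calc):.1f}%")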

  5. INNOVATIVE TECHNOLOGY VERIFICATION REPORT XRF ...

    EPA Pesticide Factsheets

    The Elvatech, Ltd. ElvaX (ElvaX) x-ray fluorescence (XRF) analyzer distributed in the United States by Xcalibur XRF Services (Xcalibur), was demonstrated under the U.S. Environmental Protection Agency (EPA) Superfund Innovative Technology Evaluation (SITE) Program. The field portion of the demonstration was conducted in January 2005 at the Kennedy Athletic, Recreational and Social Park (KARS) at Kennedy Space Center on Merritt Island, Florida. The demonstration was designed to collect reliable performance and cost data for the ElvaX analyzer and seven other commercially available XRF instruments for measuring trace elements in soil and sediment. The performance and cost data were evaluated to document the relative performance of each XRF instrument. This innovative technology verification report describes the objectives and the results of that evaluation and serves to verify the performance and cost of the ElvaX analyzer. Separate reports have been prepared for the other XRF instruments that were evaluated as part of the demonstration. The objectives of the evaluation included determining each XRF instrument’s accuracy, precision, sample throughput, and tendency for matrix effects. To fulfill these objectives, the field demonstration incorporated the analysis of 326 prepared samples of soil and sediment that contained 13 target elements. The prepared samples included blends of environmental samples from nine different sample collection sites as well as s

  6. Online fingerprint verification.

    PubMed

    Upendra, K; Singh, S; Kumar, V; Verma, H K

    2007-01-01

    As organizations search for more secure authentication methods for user access, e-commerce, and other security applications, biometrics is gaining increasing attention. With an increasing emphasis on the emerging automatic personal identification applications, fingerprint-based identification is becoming more popular. The most widely used fingerprint representation is the minutiae-based representation. The main drawback of this representation is that it does not utilize a significant component of the rich discriminatory information available in fingerprints. Local ridge structures cannot be completely characterized by minutiae. Also, it is difficult to quickly match two fingerprint images containing different numbers of unregistered minutiae points. In this study, a filter-bank-based representation, which eliminates these weaknesses, is implemented and the overall performance of the developed system is tested. The results show that this system can be used effectively for secure online verification applications.

  7. Requirements for Nuclear Test Verification

    NASA Astrophysics Data System (ADS)

    Dreicer, M.

    2001-05-01

    Verification comprises the collection and assessment of reliable, relevant information for ascertaining the degree to which our foreign partners are adhering to their international security commitments. In the past, treaty verification was largely a bilateral activity, and the information used to make compliance judgements was under government control. Verification data were collected under jointly controlled bilateral conditions or at large distances. The international verification regime being developed by the Comprehensive Test Ban Treaty Preparatory Commission will provide a vast amount of data to a large number of Member States and scientific researchers. The increasingly rapid communication of data from many global sources, including the International Monitoring System, has shifted the traditional views of how verification should be implemented. The newly formed Bureau of Verification and Compliance in the U.S. Department of State is working to develop an overall concept of what sources of information and day-to-day activities are needed to carry out its verification and compliance functions. This presentation will set out preliminary ideas of how this will be done, including what types of research and development are needed.

  8. Site Specific Verification Guidelines.

    SciTech Connect

    Harding, Steve; Gordon, Frederick M.; Kennedy, Mike

    1992-05-01

    The Bonneville Power Administration (BPA) and the Northwest region have moved from energy surplus to a time when demand for energy is likely to exceed available supplies. The Northwest Power Planning Council is calling for a "major push to acquire new resources." To meet anticipated loads in the next decade, BPA and the region must more than double the rate at which we acquire conservation resources. BPA hopes to achieve some of this doubling through programs independently designed and implemented by utilities and other parties without intensive BPA involvement. BPA will accept proposals for programs using performance-based payments, in which BPA bases its reimbursement to the sponsor on measured energy savings rather than program costs. To receive payment for conservation projects developed under performance-based programs, utilities and other project developers must propose verification plans to measure the amount of energy savings. BPA has traditionally used analysis of billing histories, before and after measure installation, adjusted by a comparison group of non-participating customers, to measure conservation savings. This approach does not work well for all conservation projects. For large or unusual facilities, the comparison-group approach is not reliable due to the absence of enough comparable non-participants to allow appropriate statistical analysis. For these facilities, which include large commercial and institutional buildings, industrial projects, and complex combinations of building types served by a single utility meter, savings must be verified on a site-specific basis. These guidelines were written to help proposers understand what Bonneville considers the important issues in site-specific verification of conservation performance. They also provide a toolbox of methods with guidance on their application and use. 15 refs.

  9. Distribution

    Treesearch

    John R. Jones

    1985-01-01

    Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....

  10. Distributions.

    ERIC Educational Resources Information Center

    Bowers, Wayne A.

    This monograph was written for the Conference of the New Instructional Materials in Physics, held at the University of Washington in summer, 1965. It is intended for students who have had an introductory college physics course. It seeks to provide an introduction to the idea of distributions in general, and to some aspects of the subject in…

  11. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within 370...

  12. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within 370...

  13. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within 370...

  14. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within 370...

  15. 40 CFR 1065.390 - PM balance verifications and weighing process verification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false PM balance verifications and weighing... § 1065.390 PM balance verifications and weighing process verification. (a) Scope and frequency. This section describes three verifications. (1) Independent verification of PM balance performance within 370...

  16. Cross Domain Rule Set Verification Tools and Process Improvements

    DTIC Science & Technology

    2010-07-01

    Martin Liedy (Cobham Analytic Solutions), Kelly Djahandari (Northrop Grumman Corporation), July 2010, Final Report, Distribution A... Cross Domain Rule Set Verification Tools and Process Improvements, Charles McElveen, Martin Liedy, Kelly Djahandari, Lance Call, Cobham... residual inference risks. ABOUT THE AUTHORS: Charles McElveen, CISSP, ISSEP, is a Senior Security Engineer at Cobham Analytic Solutions with over

  17. Generic interpreters and microprocessor verification

    NASA Technical Reports Server (NTRS)

    Windley, Phillip J.

    1990-01-01

    The following topics are covered in viewgraph form: (1) generic interpreters; (2) Viper microprocessors; (3) microprocessor verification; (4) determining correctness; (5) hierarchical decomposition; (6) interpreter theory; (7) AVM-1; (8) phase-level specification; and (9) future work.

  18. Verification of operational solar flare forecast: Case of Regional Warning Center Japan

    NASA Astrophysics Data System (ADS)

    Kubo, Yûki; Den, Mitsue; Ishii, Mamoru

    2017-08-01

    In this article, we discuss a verification study of an operational solar flare forecast in the Regional Warning Center (RWC) Japan. The RWC Japan has been issuing four-categorical deterministic solar flare forecasts for a long time. In this forecast verification study, we used solar flare forecast data accumulated over 16 years (from 2000 to 2015). We compiled the forecast data together with solar flare data obtained with the Geostationary Operational Environmental Satellites (GOES). Using the compiled data sets, we estimated some conventional scalar verification measures with 95% confidence intervals. We also estimated a multi-categorical scalar verification measure. These scalar verification measures were compared with those obtained by the persistence method and recurrence method. As solar activity varied during the 16 years, we also applied verification analyses to four subsets of forecast-observation pair data with different solar activity levels. We cannot conclude definitely that there are significant performance differences between the forecasts of RWC Japan and the persistence method, although a slightly significant difference is found for some event definitions. We propose to use a scalar verification measure to assess the judgment skill of the operational solar flare forecast. Finally, we propose a verification strategy for deterministic operational solar flare forecasting. For dichotomous forecast, a set of proposed verification measures is a frequency bias for bias, proportion correct and critical success index for accuracy, probability of detection for discrimination, false alarm ratio for reliability, Peirce skill score for forecast skill, and symmetric extremal dependence index for association. For multi-categorical forecast, we propose a set of verification measures as marginal distributions of forecast and observation for bias, proportion correct for accuracy, correlation coefficient and joint probability distribution for association, the
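
    For the dichotomous case, all of the measures named above follow directly from the 2x2 contingency table of forecasts versus observed events. The snippet below computes them from hit, false-alarm, miss, and correct-null counts; the example counts are illustrative only, not the RWC Japan statistics.

      def dichotomous_scores(a, b, c, d):
          """a: hits, b: false alarms, c: misses, d: correct rejections."""
          n = a + b + c + d
          return {
              "frequency_bias":     (a + b) / (a + c),
              "proportion_correct": (a + d) / n,
              "critical_success":   a / (a + b + c),
              "pod":                a / (a + c),       # probability of detection
              "far":                b / (a + b),       # false alarm ratio
              "peirce_skill":       a / (a + c) - b / (b + d),
          }

      print(dichotomous_scores(a=42, b=20, c=13, d=925))   # illustrative counts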

  19. Developing sub-domain verification methods based on GIS tools

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Foley, T. A.; Raby, J. W.

    2014-12-01

    The meteorological community makes extensive use of the Model Evaluation Tools (MET) developed by National Center for Atmospheric Research for numerical weather prediction model verification through grid-to-point, grid-to-grid and object-based domain level analyses. MET Grid-Stat has been used to perform grid-to-grid neighborhood verification to account for the uncertainty inherent in high resolution forecasting, and MET Method for Object-based Diagnostic Evaluation (MODE) has been used to develop techniques for object-based spatial verification of high resolution forecast grids for continuous meteorological variables. High resolution modeling requires more focused spatial and temporal verification over parts of the domain. With a Geographical Information System (GIS), researchers can now consider terrain type/slope and land use effects and other spatial and temporal variables as explanatory metrics in model assessments. GIS techniques, when coupled with high resolution point and gridded observations sets, allow location-based approaches that permit discovery of spatial and temporal scales where models do not sufficiently resolve the desired phenomena. In this paper we discuss our initial GIS approach to verify WRF-ARW with a one-kilometer horizontal resolution inner domain centered over Southern California. Southern California contains a mixture of urban, sub-urban, agricultural and mountainous terrain types along with a rich array of observational data with which to illustrate our ability to conduct sub-domain verification.
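
    Neighborhood verification of the kind performed by MET Grid-Stat can be illustrated with the fractions skill score (FSS), which compares event-coverage fractions in n-by-n windows instead of demanding exact grid-point matches. The sketch below is a generic FSS computation, not MET code, and the fields are synthetic.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fss(forecast, observed, threshold, n):
          """Fractions skill score over n-by-n neighborhoods."""
          f = uniform_filter((forecast >= threshold).astype(float), size=n)
          o = uniform_filter((observed >= threshold).astype(float), size=n)
          mse = np.mean((f - o) ** 2)
          ref = np.mean(f ** 2) + np.mean(o ** 2)
          return 1.0 - mse / ref

      rng = np.random.default_rng(3)
      obs = rng.random((100, 100))
      fcst = np.roll(obs, 2, axis=1)        # forecast displaced by two grid points
      print(f"FSS, n=1: {fss(fcst, obs, 0.8, 1):.2f};  n=9: {fss(fcst, obs, 0.8, 9):.2f}")

    A small spatial displacement destroys grid-point skill but leaves neighborhood skill largely intact, which is exactly the uncertainty the neighborhood approach is meant to accommodate.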

  20. Verification and Validation for Flight-Critical Systems (VVFCS)

    NASA Technical Reports Server (NTRS)

    Graves, Sharon S.; Jacobsen, Robert A.

    2010-01-01

    On March 31, 2009 a Request for Information (RFI) was issued by NASA's Aviation Safety Program to gather input on the subject of Verification and Validation (V&V) of Flight-Critical Systems. The responses were provided to NASA on or before April 24, 2009. The RFI asked for comments in three topic areas: Modeling and Validation of New Concepts for Vehicles and Operations; Verification of Complex Integrated and Distributed Systems; and Software Safety Assurance. There were a total of 34 responses to the RFI, representing a cross-section of academia (26%), small and large industry (47%) and government agencies (27%).

  1. Organics Verification Study for Sinclair and Dyes Inlets, Washington

    SciTech Connect

    Kohn, Nancy P.; Brandenberger, Jill M.; Niewolny, Laurie A.; Johnston, Robert K.

    2006-09-28

    Sinclair and Dyes Inlets near Bremerton, Washington, are on the State of Washington 1998 303(d) list of impaired waters because of fecal coliform contamination in marine water, metals in sediment and fish tissue, and organics in sediment and fish tissue. Because significant cleanup and source control activities have been conducted in the inlets since the data supporting the 1998 303(d) listings were collected, two verification studies were performed to address the 303(d) segments that were listed for metal and organic contaminants in marine sediment. The Metals Verification Study (MVS) was conducted in 2003; the final report, Metals Verification Study for Sinclair and Dyes Inlets, Washington, was published in March 2004 (Kohn et al. 2004). This report describes the Organics Verification Study that was conducted in 2005. The study approach was similar to the MVS in that many surface sediment samples were screened for the major classes of organic contaminants, and then the screening results and other available data were used to select a subset of samples for quantitative chemical analysis. Because the MVS was designed to obtain representative data on concentrations of contaminants in surface sediment throughout Sinclair Inlet, Dyes Inlet, Port Orchard Passage, and Rich Passage, aliquots of the 160 MVS sediment samples were used in the analysis for the Organics Verification Study. However, unlike metals screening methods, organics screening methods are not specific to individual organic compounds, and are not available for some target organics. Therefore, only the quantitative analytical results were used in the organics verification evaluation. The results of the Organics Verification Study showed that sediment quality outside of Sinclair Inlet is unlikely to be impaired because of organic contaminants. Similar to the results for metals, in Sinclair Inlet, the distribution of residual organic contaminants is generally limited to nearshore areas already within the

  2. Thoughts on Verification of Nuclear Disarmament

    SciTech Connect

    Dunlop, W H

    2007-09-26

    perfect and it was expected that occasionally there might be a verification measurement that was slightly above 150 kt. But the accuracy was much improved over the earlier seismic measurements. In fact some of this improvement was because as part of this verification protocol the US and Soviet Union provided the yields of several past tests to improve seismic calibrations. This actually helped provide a much needed calibration for the seismic measurements. It was also accepted that since nuclear tests were to a large part R&D related, it was also expected that occasionally there might be a test that was slightly above 150 kt, as you could not always predict the yield with high accuracy in advance of the test. While one could hypothesize that the Soviets could do a test at some other location than their test sites, if it were even a small fraction of 150 kt it would clearly be observed and would be a violation of the treaty. So the issue of clandestine tests of significance was easily covered for this particular treaty.

  3. Cold fusion verification

    NASA Astrophysics Data System (ADS)

    North, M. H.; Mastny, G. F.; Wesley, E. J.

    1991-03-01

    The objective of this work was to verify and reproduce experimental observations of Cold Nuclear Fusion (CNF), as originally reported in 1989. The method was to start with the original report and add such additional information as became available to build a set of operational electrolytic CNF cells. Verification was to be achieved by first observing cells for neutron production; for those cells that demonstrated a nuclear effect, careful calorimetric measurements were planned. The authors concluded, after laboratory experience, reading published work, talking with others in the field, and attending conferences, that CNF probably is a chimera and will go the way of N-rays and polywater. The neutron detector used for these tests was a completely packaged unit built into a metal suitcase that afforded electrostatic shielding for the detectors and self-contained electronics. It was battery-powered, although it was on charge for most of the long tests. The sensor element consists of He detectors arranged in three independent layers in a solid moderating block. The count from each of the three layers, as well as the sum of all the detectors, was brought out and recorded separately. The neutron measurements were made with both the neutron detector and the sample tested in a cave made of thick moderating material that surrounded the two units on the sides and bottom.

  4. Woodward Effect Experimental Verifications

    NASA Astrophysics Data System (ADS)

    March, Paul

    2004-02-01

    The work of J. F. Woodward (1990; 1996a; 1996b; 1998; 2002a; 2002b; 2004) on the existence of ``mass fluctuations'' and their use in exotic propulsion schemes was examined for possible application in improving space flight propulsion and power generation. Woodward examined Einstein's General Relativity Theory (GRT) and assumed that if the strong Machian interpretation of GRT, as well as gravitational/inertial Wheeler-Feynman radiation reaction forces, holds, then when an elementary particle is accelerated through a potential gradient, its rest mass should fluctuate around its mean value during its acceleration. Woodward also used GRT to clarify the precise experimental conditions necessary for observing and exploiting these mass fluctuations, or the ``Woodward effect'' (W-E). Later, in collaboration with his former graduate student T. Mahood, he pushed the experimental verification boundaries of these proposals. If these purported mass fluctuations occur as Woodward claims, and his assumption that gravity and inertia are both byproducts of the same GRT-based phenomenon per Mach's Principle is correct, then many innovative applications such as propellantless propulsion and gravitational exotic matter generators may be feasible. This paper examines the reality of mass fluctuations and the feasibility of using the W-E to design propellantless propulsion devices in the near- to mid-term future. The latest experimental results, utilizing MHD-like force rectification systems, will also be presented.

  5. Using publicly available forest inventory data in climate-based models of tree species distribution: Examining effects of true versus altered location coordinates

    Treesearch

    Jacob Gibson; Gretchen Moisen; Tracey Frescino; Thomas C. Edwards

    2013-01-01

    Species distribution models (SDMs) were built with US Forest Inventory and Analysis (FIA) publicly available plot coordinates, which are altered for plot security purposes, and compared with SDMs built with true plot coordinates. Six species endemic to the western US, including four junipers (Juniperus deppeana var. deppeana, J. monosperma, J. occidentalis, J....

  6. 28 CFR 74.6 - Location of eligible persons.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Location of eligible persons. 74.6 Section 74.6 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL LIBERTIES ACT REDRESS PROVISION Verification of Eligibility § 74.6 Location of eligible persons. The Office shall compare...

  7. Evaluation of 3D pre-treatment verification for volumetric modulated arc therapy plan in head region

    NASA Astrophysics Data System (ADS)

    Ruangchan, S.; Oonsiri, S.; Suriyapee, S.

    2016-03-01

    The development of pre-treatment QA tools contributes to three-dimensional (3D) dose verification, in which calculation software is combined with a measured planar dose distribution. This research aims to evaluate the Sun Nuclear 3DVH software against thermoluminescent dosimeter (TLD) measurements. Two VMAT patient plans (2.5 arcs) of 6 MV photons with different PTV locations were transferred to the Rando phantom images. The PTV of the first plan was located in a homogeneous area; that of the second was not. For the treatment planning process, the Rando phantom images were employed in optimization and calculation, with the PTV, brain stem, lens, and TLD positions contoured. Verification plans were created, transferred to the ArcCHECK for measurement, and the 3D dose was calculated using the 3DVH software. The percent dose differences in both the PTV and organs at risk (OAR) between TLD and the 3DVH software ranged from -2.09 to 3.87% for the first plan and from -1.39 to 6.88% for the second. The mean percent dose differences for the PTV were 1.62% and 3.93% for the first and second plans, respectively. In conclusion, the 3DVH software results show good agreement with TLD when the tumor is located in a homogeneous area.
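
    The comparison metric used in this record is a plain percent dose difference between measured and calculated point doses. A minimal sketch of that arithmetic with hypothetical dose values; the sign convention (calculated minus measured, relative to the measurement) is an assumption:

```python
import numpy as np

# Hypothetical point doses in Gy at the contoured TLD positions
# (PTV, brain stem, lens); these are not values from the study.
tld = np.array([2.04, 1.98, 0.35, 0.12])    # TLD measurements
calc = np.array([2.01, 2.05, 0.33, 0.12])   # 3DVH-calculated doses

pct_diff = 100.0 * (calc - tld) / tld       # percent dose difference
print("per-point differences (%):", np.round(pct_diff, 2))
print("mean difference (%):", round(pct_diff.mean(), 2))
```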

  8. Expose : procedure and results of the joint experiment verification tests

    NASA Astrophysics Data System (ADS)

    Panitz, C.; Rettberg, P.; Horneck, G.; Rabbow, E.; Baglioni, P.

    The International Space Station will carry the EXPOSE facility, accommodated at the universal workplace URM-D located outside the Russian Service Module. The launch will be effected in 2005, and the facility is planned to stay in space for 1.5 years. The tray-like structure will accommodate 2 chemical and 6 biological PI-experiments or experiment systems of the ROSE (Response of Organisms to Space Environment) consortium. EXPOSE will support long-term in situ studies of microbes in artificial meteorites, as well as of microbial communities from special ecological niches, such as endolithic and evaporitic ecosystems. The experiment pockets, either vented or sealed, will be covered by an optical filter system to control the intensity and spectral range of solar UV irradiation. Control of sun exposure will be achieved by the use of individual shutters. To test the compatibility of the different biological systems and their adaptation to the opportunities and constraints of space conditions, a thorough ground support program has been developed. The procedure and first results of these joint Experiment Verification Tests (EVT) will be presented. The results will be essential for the success of the EXPOSE mission; the tests have been performed in parallel with the development and construction of the final hardware design of the facility. The results of the mission will contribute to the understanding of organic chemistry processes in space, biological adaptation strategies to extreme conditions, e.g., on early Earth and Mars, and the distribution of life beyond its planet of origin.

  9. Location and distribution of micro-inclusions in the EDML and NEEM ice cores using optical microscopy and in situ Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Eichler, Jan; Kleitz, Ina; Bayer-Giraldi, Maddalena; Jansen, Daniela; Kipfstuhl, Sepp; Shigeyama, Wataru; Weikusat, Christian; Weikusat, Ilka

    2017-05-01

    Impurities control a variety of physical properties of polar ice. Their impact can be observed at all scales - from the microstructure (e.g., grain size and orientation) to the ice sheet flow behavior (e.g., borehole tilting and closure). Most impurities in ice form micrometer-sized inclusions. It has been suggested that these µ-inclusions control the grain size of polycrystalline ice by pinning of grain boundaries (Zener pinning), which should be reflected in their distribution with respect to the grain boundary network. We used an optical microscope to generate high-resolution large-scale maps (3 µm pixel⁻¹, 8 × 2 cm²) of the distribution of micro-inclusions in four polar ice samples: two from Antarctica (EDML, MIS 5.5) and two from Greenland (NEEM, Holocene). The in situ positions of more than 5000 µ-inclusions have been determined. A Raman microscope was used to confirm the extrinsic nature of a sample proportion of the mapped inclusions. A superposition of the 2-D grain boundary network and µ-inclusion distributions shows no significant correlations between grain boundaries and µ-inclusions. In particular, no signs of grain boundaries harvesting µ-inclusions could be found and no evidence of µ-inclusions inhibiting grain boundary migration by slow-mode pinning could be detected. Consequences for our understanding of the impurity effect on ice microstructure and rheology are discussed.

  10. Soyasapogenol A and B distribution in soybean (Glycine max L. Merr.) in relation to seed physiology, genetic variability, and growing location.

    PubMed

    Rupasinghe, H P Vasantha; Jackson, Chung-Ja C; Poysa, Vaino; Di Berardo, Christina; Bewley, J Derek; Jenkinson, Jonathan

    2003-09-24

    An efficient analytical method utilizing high-performance liquid chromatography (HPLC)/evaporative light scattering detector (ELSD) was developed to isolate and quantify the two major soyasaponin aglycones or precursors in soybeans, triterpene soyasapogenol A and B. Soaking of seeds in water up to 15 h did not change the content of soyasapogenols. Seed germination had no influence on soyasapogenol A content but increased the accumulation of soyasapogenol B. Soyasapogenols were mainly concentrated in the axis of the seeds as compared with the cotyledons and seed coat. In the seedling, the root (radicle) contained the highest concentration of soyasapogenol A, while the plumule had the greatest amounts of soyasapogenol B. In 10 advanced food-grade soybean cultivars grown in four locations in Ontario, total soyasapogenol content in soybeans was 2 ± 0.3 mg/g. Soyasapogenol B content (1.5 ± 0.27 mg/g) was 2.5-4.5-fold higher than soyasapogenol A content (0.49 ± 0.1 mg/g). A significant variation in soyasapogenol content was observed among cultivars and growing locations. There was no significant correlation between the content of soyasapogenols and the total isoflavone aglycones.

  11. Distribution of polychlorinated biphenyls and organochlorine pesticides in human breast milk from various locations in Tunisia: Levels of contamination, influencing factors, and infant risk assessment

    SciTech Connect

    Ennaceur, S.; Gandoura, N.; Driss, M.R.

    2008-09-15

    The concentrations of dichlorodiphenyltrichloroethane and its metabolites (DDTs), hexachlorobenzene (HCB), hexachlorocyclohexane isomers (HCHs), dieldrin, and 20 polychlorinated biphenyls (PCBs) were determined in 237 human breast milk samples collected from 12 locations in Tunisia. Gas chromatography with electron capture detection (GC-ECD) was used to identify and quantify residue levels, on a lipid basis, of organochlorine compounds (OCs). The predominant OCs in human breast milk were PCBs, p,p'-DDE, p,p'-DDT, HCHs, and HCB. Concentrations of DDTs in human breast milk from rural areas were significantly higher than those from urban locations (p<0.05). With regard to PCBs, we observed a predominance of mid-chlorinated congeners due to the presence of PCBs with high Kow, such as PCB 153, 138, and 180. Positive correlations were found between concentrations of OCs in human breast milk and the age of mothers and number of parities, suggesting the influence of such factors on OC burdens in lactating mothers. The comparison of daily intakes of PCBs, DDTs, HCHs, and HCB by infants through human breast milk with guidelines proposed by WHO and Health Canada shows that some individuals accumulated OCs in breast milk at levels close to or higher than these guidelines.
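
    The infant risk comparison above rests on an estimated daily intake (EDI) computed from lipid-normalized residue levels. A sketch of that standard calculation; the milk intake, lipid fraction, and body weight below are illustrative defaults, not values from the study:

```python
def estimated_daily_intake(conc_ug_per_g_lipid, lipid_fraction,
                           milk_g_per_day, body_weight_kg):
    """EDI in micrograms per kg body weight per day, from a residue
    concentration expressed on a lipid basis (ug/g lipid)."""
    lipid_g_per_day = milk_g_per_day * lipid_fraction
    return conc_ug_per_g_lipid * lipid_g_per_day / body_weight_kg

# Illustrative assumptions: 700 g milk/day at 3.5% lipid for a 5 kg infant.
edi = estimated_daily_intake(conc_ug_per_g_lipid=0.5, lipid_fraction=0.035,
                             milk_g_per_day=700.0, body_weight_kg=5.0)
print(f"EDI = {edi:.2f} ug/kg bw/day")
```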

  12. Experiments for locating damaged truss members in a truss structure

    NASA Technical Reports Server (NTRS)

    Mcgowan, Paul E.; Smith, Suzanne W.; Javeed, Mehzad

    1991-01-01

    Locating damaged truss members in large space structures will involve a combination of sensing and diagnostic techniques. Methods developed for damage location require experimental verification prior to on-orbit applications. To this end, a series of experiments for locating damaged members using a generic, ten-bay truss structure was conducted. A 'damaged' member is a member which has been removed entirely. Previously developed identification methods are used in conjunction with the experimental data to locate damage. Preliminary results to date are included, and indicate that mode selection and sensor location are important issues for location performance. A number of experimental data sets representing various damage configurations were compiled using the ten-bay truss. The experimental data and the corresponding finite element analysis models are available to researchers for verification of various methods of structure identification and damage location.

  13. Hard and Soft Safety Verifications

    NASA Technical Reports Server (NTRS)

    Wetherholt, Jon; Anderson, Brenda

    2012-01-01

    The purpose of this paper is to examine the differences between and the effects of hard and soft safety verifications. Initially, the terminology should be defined and clarified. A hard safety verification is a datum which demonstrates how a safety control is enacted. An example of this is relief valve testing. A soft safety verification is something which is usually described as nice to have but is not necessary to prove safe operation. An example of a soft verification is the loss of the Solid Rocket Booster (SRB) casings from Shuttle flight STS-4. When the main parachutes failed, the casings impacted the water and sank. In the nose cap of the SRBs, video cameras recorded the release of the parachutes to determine safe operation and to provide information for potential anomaly resolution. Generally, examination of the casings and nozzles contributed to understanding of the newly developed boosters and their operation. Safety verification of SRB operation was demonstrated by examination of the casings and nozzle for erosion or wear. Loss of the SRBs and associated data did not delay the launch of the next Shuttle flight.

  14. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (i.e., SNOW17) and rainfall-runoff model (i.e., SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system from input data, model structure, model parameters, and initial states. The goal of the current study is to undertake verification of potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e., RFC, DREAM (Differential Evolution Adaptive Metropolis)) as well as a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB), located on the western side of the Sierra Nevada Mountains in northern California. Hindcasts are verified using both deterministic (i.e., Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (i.e., reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes comparison of the performance of different optimized parameters and the DA framework, as well as assessment of the impact associated with the initial conditions used for streamflow forecasts for the NFARB.
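
    Two of the deterministic hindcast statistics named above have compact standard definitions; a minimal sketch in plain NumPy (not the NWSRFS verification code):

```python
import numpy as np

def rmse(sim, obs):
    """Root mean square error between simulated and observed flows."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect hindcast; 0 means the
    hindcast is no better than the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```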

  15. Distribution and abundance of zooplankton at selected locations on the Savannah River and from tributaries of the Savannah River Plant: December 1984--August 1985

    SciTech Connect

    Chimney, M.J.; Cody, W.R.

    1986-11-01

    Spatial and temporal differences in the abundance and composition of the zooplankton community occurred at Savannah River and SRP creek/swamp sampling locations. Stations are grouped into four categories based on differences in community structure: Savannah River; thermally influenced stations on Four Mile Creek and Pen Branch; closed-canopy stations in the Steel Creek system; and open-canopy Steel Creek stations plus non-thermally influenced stations on Pen Branch and Beaver Dam Creek. Differences among stations were little related to water temperature, dissolved oxygen concentration, conductivity, or pH at the time of collection. None of these parameters appeared to be limiting. Rather, past thermal history and habitat structure seemed to be the important controlling factors. 66 refs.

  16. A Modified Cramer-von Mises and Anderson-Darling Test for the Weibull Distribution with Unknown Location and Scale Parameters.

    DTIC Science & Technology

    1981-12-01

    (Figure list: plotting positions versus A² or W² statistics; gamma shape = 2; beta with p = 1, q = 1; shape vs W² critical values at level .20 for n = 20, 25, 30.) A formula to calculate W² is given by Eq. (4). Letting x(1), x(2), ..., x(n) be the n order statistics and letting U(i) = F(x(i)), the cumulative distribution
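
    Once the probability-integral transforms U(i) = F(x(i)) are in hand, the W² (Cramer-von Mises) and A² (Anderson-Darling) statistics follow from standard EDF formulas. A sketch using SciPy's three-parameter Weibull; because the location and scale are estimated from the sample, the modified critical values tabulated in reports such as this one, not the standard tables, would apply:

```python
import numpy as np
from scipy import stats

def edf_statistics(x, cdf):
    """Cramer-von Mises W^2 and Anderson-Darling A^2 statistics from the
    probability-integral transforms U(i) = F(x(i)) of the order statistics."""
    u = np.sort(cdf(np.sort(np.asarray(x, float))))
    n = len(u)
    i = np.arange(1, n + 1)
    w2 = 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2.0 * n)) ** 2)
    a2 = -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1.0 - u[::-1])))
    return w2, a2

# Fit a three-parameter Weibull (shape, location, scale) and test the fit.
x = stats.weibull_min.rvs(c=2.0, loc=10.0, scale=5.0, size=30, random_state=0)
c, loc, scale = stats.weibull_min.fit(x)
w2, a2 = edf_statistics(x, lambda t: stats.weibull_min.cdf(t, c, loc, scale))
```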

  17. Method and system for determining depth distribution of radiation-emitting material located in a source medium and radiation detector system for use therein

    DOEpatents

    Benke, Roland R.; Kearfott, Kimberlee J.; McGregor, Douglas S.

    2003-03-04

    A method, system and a radiation detector system for use therein are provided for determining the depth distribution of radiation-emitting material distributed in a source medium, such as a contaminated field, without the need to take samples, such as extensive soil samples, to determine the depth distribution. The system includes a portable detector assembly with an x-ray or gamma-ray detector having a detector axis for detecting the emitted radiation. The radiation may be naturally-emitted by the material, such as gamma-ray-emitting radionuclides, or emitted when the material is struck by other radiation. The assembly also includes a hollow collimator in which the detector is positioned. The collimator causes the emitted radiation to bend toward the detector as rays parallel to the detector axis of the detector. The collimator may be a hollow cylinder positioned so that its central axis is perpendicular to the upper surface of the large area source when positioned thereon. The collimator allows the detector to angularly sample the emitted radiation over many ranges of polar angles. This is done by forming the collimator as a single adjustable collimator or a set of collimator pieces having various possible configurations when connected together. In any one configuration, the collimator allows the detector to detect only the radiation emitted from a selected range of polar angles measured from the detector axis. Adjustment of the collimator or the detector therein enables the detector to detect radiation emitted from a different range of polar angles. The system further includes a signal processor for processing the signals from the detector wherein signals obtained from different ranges of polar angles are processed together to obtain a reconstruction of the radiation-emitting material as a function of depth, assuming, but not limited to, a spatially-uniform depth distribution of the material within each layer. The detector system includes detectors having
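
    The reconstruction step described above amounts to inverting a linear response model: the counts collected over each polar-angle range are modeled as a response matrix acting on per-layer activities (the spatially-uniform-layer assumption stated in the abstract). A generic sketch of that inversion; the matrix entries are placeholders that would in practice come from calibration or transport calculations:

```python
import numpy as np
from scipy.optimize import nnls

# Rows: collimator configurations (polar-angle ranges); columns: depth layers.
# Each entry is an assumed detector response (counts per unit activity) to a
# spatially-uniform source in that layer -- placeholder numbers only.
R = np.array([[0.9, 0.4, 0.1],
              [0.5, 0.6, 0.3],
              [0.2, 0.5, 0.6],
              [0.1, 0.3, 0.7]])

counts = np.array([120.0, 95.0, 70.0, 55.0])  # net counts per angular range

# Nonnegative least squares keeps the reconstructed depth profile physical.
activity_per_layer, residual = nnls(R, counts)
print(activity_per_layer)
```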

  18. Distribution and mobility of lead (Pb), copper (Cu), zinc (Zn), and antimony (Sb) from ammunition residues on shooting ranges for small arms located on mires.

    PubMed

    Mariussen, Espen; Johnsen, Ida Vaa; Strømseng, Arnljot Einride

    2017-04-01

    An environmental survey was performed on shooting ranges for small arms located on minerotrophic mires. The highest mean concentrations of Pb (13 g/kg), Cu (5.2 g/kg), Zn (1.1 g/kg), and Sb (0.83 g/kg) in the top soil were from a range located on a poor minerotrophic and acidic mire. This range also had the highest concentrations of Pb, Cu, Zn, and Sb in discharge water (0.18 mg/L Pb, 0.42 mg/L Cu, 0.63 mg/L Zn, and 65 μg/L Sb) and subsurface soil water (2.5 mg/L Pb, 0.9 mg/L Cu, 1.6 mg/L Zn, and 0.15 mg/L Sb). No clear differences in the discharge of ammunition residues between the mires were observed based on the characteristics of the mires. In surface water with high pH (pH ~7), there was a trend toward high concentrations of Sb and lower relative concentrations of Cu and Pb. The relatively low concentrations of ammunition residues in both the soil and soil water 20 cm below the top soil indicate limited vertical migration in the soil. Channels in the mires, made by plant roots or soil layers of less decomposed material, may increase the rate of transport of contaminated surface water into deeper soil layers and ground water. A large portion of both Cu and Sb was associated with the oxidizable components in the peat, which may imply that these elements form inner-sphere complexes with organic matter. The largest portion of Pb and Zn was associated with the exchangeable and pH-sensitive components in the peat, which may imply that these elements form outer-sphere complexes with the peat.

  19. Towards an in-situ measurement of wave velocity in buried plastic water distribution pipes for the purposes of leak location

    NASA Astrophysics Data System (ADS)

    Almeida, Fabrício C. L.; Brennan, Michael J.; Joseph, Phillip F.; Dray, Simon; Whitfield, Stuart; Paschoalini, Amarildo T.

    2015-12-01

    Water companies are under constant pressure to ensure that water leakage is kept to a minimum. Leak noise correlators are often used to help find and locate leaks. These devices correlate acoustic or vibration signals from sensors which are placed on either side of the location of a suspected leak. The peak in the cross-correlation function of the measured signals gives the time difference between the arrival times of the leak noise at the sensors. To convert the time delay into a distance, the speed at which the leak noise propagates along the pipe (wave-speed) needs to be known. Often, this is estimated from historical wave-speed data measured on other pipes obtained at various times and under various conditions, or it is estimated from tables which are calculated using a simple formula. Usually, the wave-speed is not measured directly at the time of the correlation measurement and is therefore potentially a source of significant error in the localisation of the leak. In this paper, a new method of measuring the wave-speed in-situ in the presence of a leak, one that is robust and simple, is explored. Experiments were conducted on a bespoke large-scale buried pipe test-rig, in which a leak was also induced in the pipe between the measurement positions to simulate a condition that is likely to occur in practice. It is shown that even in conditions where the signal-to-noise ratio is very poor, the wave-speed estimate calculated using the new method is less than 5% different from the best estimate of 387 m s⁻¹.
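
    The correlator geometry above reduces to one line of algebra: with sensors A and B a known distance D apart and a leak at distance d1 from A, the arrival-time difference is tau = (D - 2*d1)/c, so d1 = (D - c*tau)/2. A minimal sketch of the delay estimate and conversion (a plain cross-correlation peak; the paper's in-situ wave-speed measurement itself is not reproduced here):

```python
import numpy as np

def leak_distance_from_a(sig_a, sig_b, fs, spacing_m, wave_speed_ms):
    """Estimate the leak position from sensor A, given leak-noise records at
    sensors A and B (sample rate fs), their spacing D, and the wave speed c.
    With the leak d1 from A, tau = t_B - t_A = (D - 2*d1)/c."""
    a = np.asarray(sig_a, float) - np.mean(sig_a)
    b = np.asarray(sig_b, float) - np.mean(sig_b)
    xc = np.correlate(a, b, mode="full")
    k = np.argmax(np.abs(xc)) - (len(b) - 1)  # lag in samples (numpy convention)
    tau = -k / fs                             # delay of B relative to A
    return 0.5 * (spacing_m - wave_speed_ms * tau)
```

    With the best estimate of 387 m s⁻¹ quoted above, a 5% wave-speed error maps directly into a comparable localisation error, which is why measuring the wave speed in situ matters.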

  20. Polarization-multiplexed plasmonic phase generation with distributed nanoslits.

    PubMed

    Lee, Seung-Yeol; Kim, Kyuho; Lee, Gun-Yeal; Lee, Byoungho

    2015-06-15

    Methods for multiplexing surface plasmon polaritons (SPPs) have been attracting much attention due to their potential for plasmonic integrated systems, plasmonic holography, and optical tweezing. Here, using closely-spaced distributed nanoslits, we propose a method for generating polarization-multiplexed SPP phase profiles which can be applied to implementing general SPP phase distributions. Two independent types of SPP phase generation mechanisms - polarization-independent and polarization-reversible ones - are combined to generate fully arbitrary phase profiles for each optical handedness. As a simple verification of the proposed scheme, we experimentally demonstrate that the location of the plasmonic focus can be arbitrarily designed and switched by a change of optical handedness.

  1. CTBT integrated verification system evaluation model supplement

    SciTech Connect

    EDENBURN, MICHAEL W.; BUNTING, MARCUS; PAYNE JR., ARTHUR C.; TROST, LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  2. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate decaying with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
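
    The power-law smoothing described above can be prototyped in a few lines: each simulated epicenter contributes to every grid cell with a kernel decaying with epicentral distance, and the resulting rate map is scored with an ROC sweep against observed event locations. A sketch with assumed kernel parameters (r0 and q are placeholders, not the paper's values):

```python
import numpy as np

def smoothed_rate_map(event_xy, grid_xy, r0=5.0, q=1.5):
    """Spread each simulated epicenter over the whole test region with a
    power-law kernel (r + r0)**(-q) decaying with epicentral distance."""
    rate = np.zeros(len(grid_xy))
    for ex, ey in event_xy:
        r = np.hypot(grid_xy[:, 0] - ex, grid_xy[:, 1] - ey)
        rate += (r + r0) ** (-q)
    return rate / rate.sum()

def roc_curve(rate, occupied):
    """Hit rate vs false-alarm rate as the rate threshold is swept;
    `occupied` is a boolean array marking cells with observed epicenters."""
    order = np.argsort(rate)[::-1]
    hits = np.cumsum(occupied[order]) / occupied.sum()
    false_alarms = np.cumsum(~occupied[order]) / (~occupied).sum()
    return false_alarms, hits
```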

  4. Post-OPC verification using a full-chip pattern-based simulation verification method

    NASA Astrophysics Data System (ADS)

    Hung, Chi-Yuan; Wang, Ching-Heng; Ma, Cliff; Zhang, Gary

    2005-11-01

    In this paper, we evaluated and investigated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e., 0.13um, 0.11um, and 90nm, are used in the investigation. Although our OPC technology has proven robust in general for most cases, given the variety of tape-outs with complicated design styles and technologies, it is difficult to develop a "complete or bullet-proof" OPC algorithm that would cover every possible layout pattern. In the evaluation, among dozens of databases, errors were found in some OPC databases by model-based post-OPC checking; such errors could carry significant manufacturing costs - reticle, wafer process, and, more importantly, production delay. From such full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinch or bridge), poor CD distribution, and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development as well as post-OPC verification after production release of the OPC. Lastly, we discuss the differentiation of the new pattern-based and conventional edge-based verification tools and summarize the advantages of our new tool and methodology: (1) accuracy: superior inspection algorithms, down to 1 nm accuracy with the new "pattern-based" approach; (2) high-speed performance: pattern-centric algorithms to give the best full-chip inspection efficiency; (3) powerful analysis capability: flexible error distribution, grouping, interactive viewing, and hierarchical pattern extraction to narrow

  5. Technical challenges for dismantlement verification

    SciTech Connect

    Olinger, C.T.; Stanbro, W.D.; Johnston, R.G.; Nakhleh, C.W.; Dreicer, J.S.

    1997-11-01

    In preparation for future nuclear arms reduction treaties, including any potential successor treaties to START I and II, the authors have been examining possible methods for bilateral warhead dismantlement verification. Warhead dismantlement verification raises significant challenges in the political, legal, and technical arenas. This discussion will focus on the technical issues raised by warhead arms control. Technical complications arise from several sources. These will be discussed under the headings of warhead authentication, chain-of-custody, dismantlement verification, non-nuclear component tracking, component monitoring, and irreversibility. The authors discuss possible technical options to address these challenges as applied to a generic dismantlement and disposition process, identifying limitations and vulnerabilities along the way. They expect that these considerations will play a large role in any future arms reduction effort and, therefore, should be addressed in a timely fashion.

  6. Structural verification for GAS experiments

    NASA Technical Reports Server (NTRS)

    Peden, Mark Daniel

    1992-01-01

    The purpose of this paper is to assist the Get Away Special (GAS) experimenter in conducting a thorough structural verification of its experiment structural configuration, thus expediting the structural review/approval process and the safety process in general. Material selection for structural subsystems will be covered with an emphasis on fasteners (GSFC fastener integrity requirements) and primary support structures (Stress Corrosion Cracking requirements and National Space Transportation System (NSTS) requirements). Different approaches to structural verifications (tests and analyses) will be outlined especially those stemming from lessons learned on load and fundamental frequency verification. In addition, fracture control will be covered for those payloads that utilize a door assembly or modify the containment provided by the standard GAS Experiment Mounting Plate (EMP). Structural hazard assessment and the preparation of structural hazard reports will be reviewed to form a summation of structural safety issues for inclusion in the safety data package.

  7. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples

    2001-12-01

    This annual technical progress report covers part of Task 4 (site evaluation), Task 5 (2D seismic design, acquisition, and processing), and Task 6 (2D seismic reflection, interpretation, and AVO analysis) on DOE contract number DE-AR26-98FT40369. The project had planned one additional deployment to a site other than the Savannah River Site (SRS) or the DOE Hanford Site. After the SUBCON midyear review in Albuquerque, NM, it was decided that two additional deployments would be performed. The first deployment is to test the feasibility of using non-invasive seismic reflection and AVO analysis as a monitoring tool to assist in determining the effectiveness of Dynamic Underground Stripping (DUS) in removal of DNAPL. The second deployment is to the Department of Defense (DOD) Charleston Naval Weapons Station Solid Waste Management Unit 12 (SWMU-12), Charleston, SC, to further test the technique to detect high concentrations of DNAPL. The Charleston Naval Weapons Station SWMU-12 site was selected in consultation with National Energy Technology Laboratory (NETL) and DOD Naval Facilities Engineering Command Southern Division (NAVFAC) personnel. Based upon the review of existing data and due to the shallow target depth, the project team collected three Vertical Seismic Profiles (VSP) and an experimental P-wave seismic reflection line. After preliminary analysis of the VSP data and the experimental reflection line data, it was decided to proceed with Task 5 and Task 6. Three high-resolution P-wave reflection profiles were collected with two objectives: (1) design the reflection survey to image a target depth of 20 feet below land surface to assist in determining the geologic controls on the DNAPL plume geometry, and (2) apply AVO analysis to the seismic data to locate the zone of high concentration of DNAPL. Based upon the results of the data processing and interpretation of the seismic data, the project team was able to map the channel that is controlling the DNAPL plume

  8. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations, and they must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently-used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations by small increments in time and/or space. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much as a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready for operational use, and outline the steps needed to implement any operationally-ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenologically-based mesoscale model verification techniques and found 10 different techniques in various stages of development. Six of the techniques were developed to verify precipitation forecasts, one

  9. Transfer function verification and block diagram simplification of a very high-order distributed pole closed-loop servo by means of non-linear time-response simulation

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1975-01-01

    Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.

  10. Verification of road databases using multiple road models

    NASA Astrophysics Data System (ADS)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas, or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions: the first for the state of the database object (correct or incorrect), and the second for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer theory, both distributions are mapped to a new state space comprising the classes correct, incorrect, and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with greater completeness. Additional experiments reveal that, based on the proposed method, a highly reliable semi-automatic approach for road database verification can be designed.
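
    The fusion step described above can be made concrete: each module's two distributions define belief masses on {correct}, {incorrect}, and the ignorance set (unknown), with the model-applicability probability discounting the evidence, and Dempster's rule combines the modules. A minimal sketch under that reading of the abstract:

```python
def module_masses(p_correct, p_applicable):
    """One module's output as masses on 'C' (correct), 'I' (incorrect)
    and 'U' (unknown = the whole frame {C, I})."""
    return {"C": p_applicable * p_correct,
            "I": p_applicable * (1.0 - p_correct),
            "U": 1.0 - p_applicable}

def dempster_combine(m1, m2):
    """Dempster's rule for masses on {C}, {I} and the frame {C, I}."""
    conflict = m1["C"] * m2["I"] + m1["I"] * m2["C"]
    k = 1.0 - conflict  # normalization constant
    return {
        "C": (m1["C"] * m2["C"] + m1["C"] * m2["U"] + m1["U"] * m2["C"]) / k,
        "I": (m1["I"] * m2["I"] + m1["I"] * m2["U"] + m1["U"] * m2["I"]) / k,
        "U": (m1["U"] * m2["U"]) / k,
    }

# Two modules assess the same road object; pick the highest-mass state.
combined = dempster_combine(module_masses(0.9, 0.8), module_masses(0.6, 0.3))
state = max(combined, key=combined.get)
```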

  11. Formal verification of mathematical software

    NASA Technical Reports Server (NTRS)

    Sutherland, D.

    1984-01-01

    Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.

  12. Environmental Technology Verification Program Materials ...

    EPA Pesticide Factsheets

    The protocol provides generic procedures for implementing a verification test for the performance of in situ chemical oxidation (ISCO), focused specifically to expand the application of ISCO at manufactured gas plants with polyaromatic hydrocarbon (PAH) contamination (MGP/PAH) and at active gas station sites.

  13. CHEMICAL INDUCTION MIXER VERIFICATION - ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The Wet-Weather Flow Technologies Pilot of the Environmental Technology Verification (ETV) Program, which is supported by the U.S. Environmental Protection Agency and facilitated by NSF International, has recently evaluated the performance of chemical induction mixers used for di...

  15. Non-damaging, portable radiography: Applications in arms control verification

    SciTech Connect

    Morris, R.A.; Butterfield, K.B.; Apt, K.E.

    1992-08-01

    The state-of-the-technology necessary to perform portable radiography in support of arms control verification is evaluated. Specific requirements, such as accurate measurements of the location of features in a treaty-limited object and the detection of deeply imbedded features, are defined in three scenarios. Sources, detectors, portability, mensuration, and safety are discussed in relation to the scenarios. Examples are given of typical radiographic systems that would be capable of addressing the inspection problems associated with the three scenarios.

  16. STS-130 astronaut Nick Patrick during dry run for SSATA Crew Training and EMU Verification for STS-130.

    NASA Image and Video Library

    2009-10-29

    STS-130 astronaut Nick Patrick during dry run for SSATA Crew Training and EMU Verification for STS-130. Photo Date: October 29, 2009. Location: Building 7 - SSATA Chamber. Photographer: Robert Markowitz.

  17. SSATA Crew Training and EMU Verification for STS-128 crew member Danny Olivas during suited dry run.

    NASA Image and Video Library

    2009-04-22

    SSATA Crew Training and EMU Verification for STS-128 crew member Danny Olivas during suited dry run. Test Directors: Cristina Anchondo and Laura Campbell. Photo Date: April 22, 2009. Location: Building 7 - SSATA Chamber. Photographer: Robert Markowitz

  18. 16 CFR 315.5 - Prescriber verification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... § 315.5 Prescriber verification. (a) Prescription requirement. A seller may sell contact lenses only in accordance with a contact lens prescription for the patient that is: (1) Presented to the seller by the... for verification. When seeking verification of a contact lens prescription, a seller shall provide the...

  19. Working Memory Mechanism in Proportional Quantifier Verification

    ERIC Educational Resources Information Center

    Zajenkowski, Marcin; Szymanik, Jakub; Garraffa, Maria

    2014-01-01

    The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g. "More than half of the dots are blue"). The first study shows that the verification of proportional sentences is more demanding than the verification of sentences such as: "There are seven blue and eight yellow…

  20. 78 FR 58492 - Generator Verification Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-24

    ... Energy Regulatory Commission 18 CFR Part 40 Generator Verification Reliability Standards AGENCY: Federal... Organization: MOD-025-2 (Verification and Data Reporting of Generator Real and Reactive Power Capability and Synchronous Condenser Reactive Power Capability), MOD-026-1 (Verification of Models and Data for Generator...

  2. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    SciTech Connect

    Michael G. Waddell; William J. Domoracki; Tom J. Temples

    2001-05-01

    This semi-annual technical progress report covers Task 4 (site evaluation), Task 5 (seismic reflection design and acquisition), and Task 6 (seismic reflection processing and interpretation) on DOE contract number DE-AR26-98FT40369. The project had planned one additional deployment to a site other than the Savannah River Site (SRS) or DOE Hanford. During this reporting period the project had an ASME peer review. The findings and recommendations of the review panel, as well as the project team's response to comments, are in Appendix A. After the SUBCON midyear review in Albuquerque, NM, and the peer review, it was decided that two additional deployments would be performed. The first deployment is to test the feasibility of using non-invasive seismic reflection and AVO analysis as a monitoring tool to assist in determining the effectiveness of Dynamic Underground Stripping (DUS) in removal of DNAPL. Under the rescope of the project, Task 4 would be performed at the Charleston Navy Weapons Station, Charleston, SC, and not at the Dynamic Underground Stripping (DUS) project at SRS. The project team had already completed Task 4 at the M-area seepage basin, only a few hundred yards away from the DUS site. Because the geology is the same, repeating Task 4 was not necessary. However, a Vertical Seismic Profile (VSP) was conducted in one well to calibrate the geology to the seismic data. The first deployment to the DUS site (Tasks 5 and 6) has been completed. Once the steam has been turned off, these tasks will be performed again to compare the results to the pre-steam data. The results from the first deployment to the DUS site indicated a seismic amplitude anomaly at the location and depths of the known high concentrations of DNAPL. The deployment to another site with different geologic conditions was supposed to occur during this reporting period. The first site selected was DOE Paducah, Kentucky. After almost eight months of negotiation, site access was denied, requiring the selection of another site

  3. Verification of precipitation in weather systems: determination of systematic errors

    NASA Astrophysics Data System (ADS)

    Ebert, E. E.; McBride, J. L.

    2000-12-01

    An object-oriented verification procedure is presented for gridded quantitative precipitation forecasts (QPFs). It is carried out within the framework of "contiguous rain areas" (CRAs), whereby a weather system is defined as a region bounded by a user-specified isopleth of precipitation in the union of the forecast and observed rain fields. The horizontal displacement of the forecast is determined by translating the forecast rain field until the total squared difference between the observed and forecast fields is minimized. This allows a decomposition of total error into components due to: (a) location; (b) rain volume and (c) pattern. Results are first presented for a Monte Carlo simulation of 40,000 synthetic CRAs in order to determine the accuracy of the verification procedure when the rain systems are only partially observed due to the presence of domain boundaries. Verification is then carried out for operational 24-h forecasts from the Australian Bureau of Meteorology LAPS numerical weather prediction model over a four-year period. Forty-five percent of all rain events were well forecast by the model, with small location and intensity errors. Location error was generally the dominant source of QPF error, with the directions of most frequent displacement varying by region. Forty-five percent of extreme rainfall events (>100 mm d⁻¹) were well forecast, but in this case the model's underestimation of rain intensity was the most frequent source of error.
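
    The core of the CRA procedure is easy to prototype: translate the forecast grid to minimize the total squared difference, then split the total MSE into displacement (location), volume, and pattern components. A sketch after the decomposition described above; wrap-around shifting via np.roll is a simplification, since the paper treats domain boundaries explicitly (hence its Monte Carlo study):

```python
import numpy as np

def cra_decomposition(fcst, obs, max_shift=10):
    """Best-match translation of the forecast rain field and the resulting
    MSE decomposition into displacement, volume and pattern components."""
    total_mse = np.mean((fcst - obs) ** 2)
    best_mse, best = total_mse, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(fcst, dy, axis=0), dx, axis=1)
            mse = np.mean((shifted - obs) ** 2)
            if mse < best_mse:
                best_mse, best = mse, (dy, dx)
    shifted = np.roll(np.roll(fcst, best[0], axis=0), best[1], axis=1)
    mse_displacement = total_mse - best_mse
    mse_volume = (shifted.mean() - obs.mean()) ** 2
    mse_pattern = best_mse - mse_volume
    return best, mse_displacement, mse_volume, mse_pattern
```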

  4. Toward Regional Fossil Fuel CO2 Emissions Verification Using WRF-CHEM

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Kosović, B.; Cameron-Smith, P.; Bergmann, D.; Grant, K.; Guilderson, T.

    2008-12-01

    As efforts to reduce emissions of greenhouse gases take shape, it is becoming obvious that an essential component of a viable solution will involve emission verification. While detailed inventories of greenhouse gas sources will represent an important component of the solution, additional verification methodologies will be necessary to reduce uncertainties in emission estimates, especially for distributed sources and CO2 offsets. We developed tools for solving the inverse dispersion problem for distributed emissions of greenhouse gases. For that purpose we combine a probabilistic inverse methodology based on Bayesian inversion and stochastic sampling with the weather forecasting and air quality model WRF-CHEM. We demonstrate estimation of CO2 emissions associated with fossil fuel burning in California over two one-week periods in 2006. We use WRF-CHEM in tracer simulation mode to solve the forward dispersion problem for emissions over eleven air basins. We first use a direct inversion approach to determine optimal locations for a limited number of CO2 14C isotope sensors. We then use Bayesian inference with stochastic sampling to determine probability distributions for emissions from California air basins. Moreover, we vary the number of sensors and the frequency of measurements to study their effect on the accuracy and uncertainty level of the emission estimation. Finally, to take into account uncertainties associated with forward modeling, we combine Bayesian inference and stochastic sampling with ensemble modeling. The ensemble is created by running WRF-CHEM with different initial and boundary conditions as well as different boundary layer and surface model options. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344 (LLNL-ABS-406901-DRAFT). The project 07-ERD-064 was funded by the Laboratory Directed Research and Development Program at LLNL.
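
    When the tracer transport is treated as linear, the Bayesian inversion sketched above has a closed form for Gaussian priors and observation errors: per-basin footprints from forward WRF-CHEM tracer runs stack into a matrix H, and the sensor data update a prior emission vector. A generic linear-Gaussian sketch, not the study's stochastic-sampling machinery:

```python
import numpy as np

def gaussian_posterior(H, y, x_prior, B, R):
    """Posterior mean and covariance of emissions x for y = H @ x + noise,
    with prior x ~ N(x_prior, B) and observation error ~ N(0, R)."""
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    cov = np.linalg.inv(H.T @ Rinv @ H + Binv)   # posterior covariance
    mean = cov @ (H.T @ Rinv @ y + Binv @ x_prior)
    return mean, cov

# H would have one column per air basin (eleven here) and one row per sensor
# observation; its entries are basin footprints from the tracer runs.
```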

  5. Kleene Algebra and Bytecode Verification

    DTIC Science & Technology

    2016-04-27

  6. Automated verification system user's guide

    NASA Technical Reports Server (NTRS)

    Hoffman, R. H.

    1972-01-01

    Descriptions of the operational requirements for all of the programs of the Automated Verification System (AVS) are provided. The AVS programs are: (1) FORTRAN code analysis and instrumentation program (QAMOD); (2) Test Effectiveness Evaluation Program (QAPROC); (3) Transfer Control Variable Tracking Program (QATRAK); (4) Program Anatomy Table Generator (TABGEN); and (5) Network Path Analysis Program (RAMBLE).

  7. Verification Challenges at Low Numbers

    SciTech Connect

    Benz, Jacob M.; Booker, Paul M.; McDonald, Benjamin S.

    2013-07-16

    This paper will explore the difficulties of deep reductions by examining the technical verification challenges. At each step on the road to low numbers, the verification required to ensure compliance of all parties will increase significantly. Looking past New START, the next step will likely include warhead limits in the neighborhood of 1000 (Pifer 2010). Further reductions will include stepping stones at 100's of warheads, and then 10's of warheads, before final elimination of the last few remaining warheads and weapons could be considered. This paper will focus on these three threshold reduction levels: 1000, 100's, and 10's. For each, the issues and challenges will be discussed, potential solutions will be identified, and the verification technologies and chain-of-custody measures that address these solutions will be surveyed. It is important to note that many of the issues that need to be addressed have no current solution. In these cases, the paper will explore new or novel technologies that could be applied. These technologies will draw from the research and development that is ongoing throughout the national lab complex, and will look at technologies utilized in other areas of industry for their application to arms control verification.

  8. LANL measurements verification acceptance criteria

    SciTech Connect

    Chavez, D. M.

    2001-01-01

    The possibility of SNM diversion/theft is a major concern to organizations charged with control of Special Nuclear Material (SNM). Verification measurements are used to aid in the detection of SNM losses. The acceptance/rejection criteria for verification measurements depend on the facility-specific processes, the knowledge of the measured item, and the measurement technique applied. This paper discusses some of the LANL measurement control steps and the criteria applied for the acceptance of a verification measurement. The process involves interaction among the facility operations personnel, the subject matter experts for a specific instrument/technique, the process knowledge on the matrix of the measured item, and the measurement-specific precision and accuracy values. By providing an introduction to a site-specific application of measurement verification acceptance criteria, safeguards personnel, material custodians, and SNM measurement professionals are assisted in understanding the acceptance/rejection process for measurements and that process's contribution to the detection of SNM diversion.

  9. Forecast Validation and Verification for Earthquakes, Weather and Finance

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.; Donnellan, A.; Tiampo, K.

    2009-04-01

    Techniques for earthquake forecasting are in development using both seismicity data mining methods and numerical simulations. Testing such forecasts is necessary not only to determine forecast quality, but also to carry out forecast improvement. A large number of techniques to validate and verify forecasts have been developed for weather and financial applications. Many of these have been elaborated in public locations, including http://www.bom.gov.au/bmrc/wefor/staff/eee/verif/verif_web_page.html. Typically, the goal is to test for forecast resolution, reliability and sharpness. A good forecast is characterized by consistency, quality and value. Most, if not all, of these forecast verification procedures can be readily applied to earthquake forecasts as well. In this talk, we discuss a number of these methods, and show how they might be useful both for fault-based forecasting, a group that includes the WGCEP and simulator-based renewal models, and for grid-based forecasting, which includes the Relative Intensity, Pattern Informatics, and smoothed seismicity methods. We find that applying these standard methods of forecast verification is straightforward, and we conclude that judgments about the quality of a given forecast method can often depend on the test applied.
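
    As a concrete illustration of the standard verification quantities mentioned above, here is a minimal sketch (Python) of the Brier score and its binned reliability and resolution terms for probabilistic forecasts; the data are synthetic and the bin count is an assumption, not something specified in the abstract.

        import numpy as np

        def brier_score(p, o):
            # Mean squared error of forecast probabilities p vs binary outcomes o
            return np.mean((p - o) ** 2)

        def reliability_resolution(p, o, bins=10):
            # Murphy-decomposition terms from binning the forecast probabilities:
            # reliability (lower is better) and resolution (higher is better).
            edges = np.linspace(0, 1, bins + 1)
            idx = np.clip(np.digitize(p, edges) - 1, 0, bins - 1)
            obar, rel, res = o.mean(), 0.0, 0.0
            for k in range(bins):
                m = idx == k
                if m.any():
                    w = m.mean()
                    rel += w * (p[m].mean() - o[m].mean()) ** 2
                    res += w * (o[m].mean() - obar) ** 2
            return rel, res

        rng = np.random.default_rng(1)
        p = rng.uniform(size=1000)                      # synthetic forecasts
        o = (rng.uniform(size=1000) < p).astype(float)  # consistent outcomes
        print(brier_score(p, o), reliability_resolution(p, o))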

  10. Self-verifiable paper documents and automatic content verification

    NASA Astrophysics Data System (ADS)

    Tian, Yibin; Zhan, Xiaonong; Wu, Chaohong; Ming, Wei

    2014-02-01

    This report describes a method for the creation and automatic content verification of low-cost self-verifiable paper documents. The image of an original document is decomposed to symbol templates and their corresponding locations. The resulting data is further compressed and encrypted, and encoded in custom designed high-capacity color barcodes. The original image and barcodes are printed on the same paper to form a self-verifiable authentic document. During content verification, the paper document is scanned to obtain the barcodes and target image. The original image is reconstructed from data extracted from the barcodes, which is then registered with and compared to the target image. The verification is carried out hierarchically from the entire image down to word and symbol levels. For symbol level comparison, multiple types of features and shape matching are utilized in a cascade. The proposed verification method is inexpensive, robust and fast. Evaluation on 216 character tables and 102 real documents achieved greater than 99% alteration detection rate and less than 1% false positives at the word/symbol level.
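
    A minimal sketch (Python, standard library only) of the payload side of such a scheme, under stated assumptions: symbol templates and their locations are serialized and compressed, an HMAC stands in for the paper's encryption step, and the barcode encoding/scanning and image-registration stages are omitted. The function names and the tolerance parameter are hypothetical, not the authors' API.

        import json, zlib, hmac, hashlib

        KEY = b"demo-key"   # hypothetical shared key; the paper encrypts,
                            # here we only authenticate the payload

        def encode_payload(symbols):
            # symbols: list of (template_id, x, y) extracted from the original.
            # Returns compressed bytes plus a MAC; the paper would then split
            # this across high-capacity color barcodes printed on the sheet.
            blob = zlib.compress(json.dumps(symbols).encode(), 9)
            tag = hmac.new(KEY, blob, hashlib.sha256).digest()
            return blob, tag

        def decode_and_verify(blob, tag):
            if not hmac.compare_digest(
                    tag, hmac.new(KEY, blob, hashlib.sha256).digest()):
                raise ValueError("barcode payload tampered or corrupted")
            return [tuple(s) for s in json.loads(zlib.decompress(blob))]

        def compare(reference, scanned, tol=2):
            # Word/symbol-level check: every reference symbol must appear in
            # the scan at roughly the same location (registration assumed).
            seen = {(t, round(x / tol), round(y / tol)) for t, x, y in scanned}
            return [(t, x, y) for t, x, y in reference
                    if (t, round(x / tol), round(y / tol)) not in seen]

        blob, tag = encode_payload([("A", 10, 12), ("B", 30, 12)])
        # The altered symbol "B" -> "X" is reported as a detected alteration:
        print(compare(decode_and_verify(blob, tag),
                      [("A", 10, 12), ("X", 30, 12)]))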

  11. Chip connectivity verification program

    NASA Technical Reports Server (NTRS)

    Riley, Josh (Inventor); Patterson, George (Inventor)

    1999-01-01

    A method for testing electrical connectivity between conductive structures on a chip that is preferably layered with conductive and nonconductive layers. The method includes determining the layer on which each structure is located and defining the perimeter of each structure. Conductive layer connections between each of the layers are determined, and, for each structure, the points of intersection between the perimeter of that structure and the perimeter of each other structure on the chip are also determined. Finally, electrical connections between the structures are determined using the points of intersection and the conductive layer connections.
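
    A minimal sketch (Python) of the final connectivity step, assuming the perimeter intersections and the conductive inter-layer (via) connections have already been computed: a union-find structure merges structures into electrical nets.

        class DSU:
            # Union-find over structure ids; components become electrical nets
            def __init__(self, n):
                self.parent = list(range(n))
            def find(self, a):
                while self.parent[a] != a:
                    self.parent[a] = self.parent[self.parent[a]]  # path halving
                    a = self.parent[a]
                return a
            def union(self, a, b):
                self.parent[self.find(a)] = self.find(b)

        def connected_nets(n_structures, same_layer_overlaps, via_links):
            # same_layer_overlaps: pairs whose perimeters intersect on a layer;
            # via_links: pairs joined through conductive inter-layer connections
            dsu = DSU(n_structures)
            for a, b in list(same_layer_overlaps) + list(via_links):
                dsu.union(a, b)
            nets = {}
            for s in range(n_structures):
                nets.setdefault(dsu.find(s), []).append(s)
            return list(nets.values())

        # Structures 0-1 overlap on one layer; structure 1 reaches 2 via a via.
        print(connected_nets(4, [(0, 1)], [(1, 2)]))   # -> [[0, 1, 2], [3]]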

  12. A DICOM-RT-based toolbox for the evaluation and verification of radiotherapy plans

    NASA Astrophysics Data System (ADS)

    Spezi, E.; Lewis, D. G.; Smith, C. W.

    2002-12-01

    The verification of radiotherapy plans is an essential step in the treatment planning process. This is especially important for highly conformal and IMRT plans which produce non-intuitive fluence maps and complex 3D dose distributions. In this work we present a DICOM (Digital Imaging and Communication in Medicine) based toolbox, developed for the evaluation and the verification of radiotherapy treatment plans. The toolbox offers the possibility of importing treatment plans generated with different calculation algorithms and/or different optimization engines and evaluating dose distributions on an independent platform. Furthermore the radiotherapy set-up can be exported to the BEAM Monte Carlo code system for dose verification. This can be done by simulating the irradiation of the patient CT dataset or the irradiation of a software-generated water phantom. We show the application of some of the functions implemented in this toolbox for the evaluation and verification of an IMRT treatment of the head and neck region.

  13. A DICOM-RT-based toolbox for the evaluation and verification of radiotherapy plans.

    PubMed

    Spezi, E; Lewis, D G; Smith, C W

    2002-12-07

    The verification of radiotherapy plans is an essential step in the treatment planning process. This is especially important for highly conformal and IMRT plans which produce non-intuitive fluence maps and complex 3D dose distributions. In this work we present a DICOM (Digital Imaging and Communication in Medicine) based toolbox, developed for the evaluation and the verification of radiotherapy treatment plans. The toolbox offers the possibility of importing treatment plans generated with different calculation algorithms and/or different optimization engines and evaluating dose distributions on an independent platform. Furthermore the radiotherapy set-up can be exported to the BEAM Monte Carlo code system for dose verification. This can be done by simulating the irradiation of the patient CT dataset or the irradiation of a software-generated water phantom. We show the application of some of the functions implemented in this toolbox for the evaluation and verification of an IMRT treatment of the head and neck region.

  14. Heterogeneity of nervous system mitochondria: location, location, location!

    PubMed

    Dubinsky, Janet M

    2009-08-01

    Mitochondrial impairments have been associated with many neurological disorders, from inborn errors of metabolism or genetic disorders to age and environmentally linked diseases of aging (DiMauro S., Schon E.A. 2008. Mitochondrial disorders in the nervous system. Annu. Rev. Neurosci. 31, 91-123.). In these disorders, specific nervous system components or brain regions appear to be initially more susceptible to the triggering event or pathological process. Such regional variation in susceptibility to multiple types of stressors raises the possibility that inherent differences in mitochondrial function may mediate some aspect of pathogenesis. Regional differences in the distribution or number of mitochondria, mitochondrial enzyme activities, enzyme expression levels, mitochondrial genes or availability of necessary metabolites become attractive explanations for selective vulnerability of a nervous system structure. While regionally selective mitochondrial vulnerability has been documented, regional variations in other cellular and tissue characteristics may also contribute to metabolic impairment. Such environmental variables include high tonic firing rates, neurotransmitter phenotype, location of mitochondria within a neuron, or the varied tissue perfusion pressure of different cerebral arterial branches. These contextual variables exert regionally distinct regulatory influences on mitochondria to tune their energy production to local demands. Thus to understand variations in mitochondrial functioning and consequent selective vulnerability to injury, the organelle must be placed within the context of its cellular, functional, developmental and neuroanatomical environment.

  15. Detector-Independent Verification of Quantum Light.

    PubMed

    Sperling, J; Clements, W R; Eckstein, A; Moore, M; Renema, J J; Kolthammer, W S; Nam, S W; Lita, A; Gerrits, T; Vogel, W; Agarwal, G S; Walmsley, I A

    2017-04-21

    We introduce a method for the verification of nonclassical light which is independent of the complex interaction between the generated light and the material of the detectors. This is accomplished by means of a multiplexing arrangement. Its theoretical description shows that the coincidence statistics of this measurement layout is a mixture of multinomial distributions for any classical light field and any type of detector. This allows us to formulate bounds on the statistical properties of classical states. We apply our directly accessible method to heralded multiphoton states which are detected with a single multiplexing step only and two detectors, which in our work are superconducting transition-edge sensors. The nonclassicality of the generated light is verified and characterized through the violation of the classical bounds, without the need to characterize the detectors used.

  16. 3D VMAT Verification Based on Monte Carlo Log File Simulation with Experimental Feedback from Film Dosimetry.

    PubMed

    Barbeiro, A R; Ureba, A; Baeza, J A; Linares, R; Perucha, M; Jiménez-Ortega, E; Velázquez, S; Mateos, J C; Leal, A

    2016-01-01

    A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems for complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head-and-neck clinical cases, previously planned with the Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the high agreement achieved between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatch between calculated control points and the detection grid in the verification process were discussed. Besides the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detection density level and its location in the VMAT-specific phantom for obtaining a more reliable DVH in the patient CT was also discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre

  17. 3D VMAT Verification Based on Monte Carlo Log File Simulation with Experimental Feedback from Film Dosimetry

    PubMed Central

    Barbeiro, A. R.; Ureba, A.; Baeza, J. A.; Linares, R.; Perucha, M.; Jiménez-Ortega, E.; Velázquez, S.; Mateos, J. C.

    2016-01-01

    A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems for complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head-and-neck clinical cases, previously planned with the Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the high agreement achieved between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatch between calculated control points and the detection grid in the verification process were discussed. Besides the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detection density level and its location in the VMAT-specific phantom for obtaining a more reliable DVH in the patient CT was also discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre

  18. Verification of Loop Diagnostics

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Lionello, R.; Mok, Y.; Linker, J.; Mikic, Z.

    2014-01-01

    Many different techniques have been used to characterize the plasma in the solar corona: density-sensitive spectral line ratios are used to infer the density, the evolution of coronal structures in different passbands is used to infer the temperature evolution, and the simultaneous intensities measured in multiple passbands are used to determine the emission measure. All these analysis techniques assume that the intensity of the structures can be isolated through background subtraction. In this paper, we use simulated observations from a 3D hydrodynamic simulation of a coronal active region to verify these diagnostics. The density and temperature from the simulation are used to generate images in several passbands and spectral lines. We identify loop structures in the simulated images and calculate the loop background. We then determine the density, temperature and emission measure distribution as a function of time from the observations and compare with the true temperature and density of the loop. We find that the overall characteristics of the temperature, density, and emission measure are recovered by the analysis methods, but the details of the true temperature and density are not. For instance, the emission measure curves calculated from the simulated observations are much broader than the true emission measure distribution, though the average temperature evolution is similar. These differences are due, in part, to inadequate background subtraction, but also indicate a limitation of the analysis methods.

  19. Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd L.

    1995-01-01

    This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this belief. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently, which has allowed the verification to maintain high fidelity among the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives into the product development.

  20. Realistic weather simulations and forecast verification with COSMO-EULAG

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Piotrowski, Zbigniew; Rosa, Bogdan; Ziemiański, Michał

    2015-04-01

    Research conducted at the Polish Institute of Meteorology and Water Management, National Research Institute, in collaboration with the Consortium for Small Scale Modeling (COSMO), resulted in the development of a new prototype model, COSMO-EULAG. The dynamical core of the new model is based on an anelastic set of equations and numerics adopted from the EULAG model. The core is coupled, with first-order accuracy, to the COSMO physical parameterizations involving turbulence, friction, radiation, moist processes and surface fluxes. The tool is capable of computing weather forecasts in mountainous areas for horizontal resolutions ranging from 2.2 km to 0.1 km and with slopes reaching 82 degrees of inclination. Employing EULAG makes it possible to profit from its desirable conservative properties and numerical robustness, confirmed in a number of benchmark tests and widely documented in the scientific literature. In this study we show a realistic case study of Alpine summer convection simulated by COSMO-EULAG. It compares the convection-permitting realization of the flow using a 2.2 km horizontal grid size, typical for contemporary very-high-resolution regional NWP forecasts, with an LES-type realization using a grid size of 100 m. The study presents a comparison of flow, cloud and precipitation structure together with the reference results of a standard compressible COSMO Runge-Kutta model forecast at 2.2 km horizontal resolution. The case study results are supplemented by COSMO-EULAG forecast verification results for the Alpine domain at 2.2 km horizontal resolution. Wind, temperature, cloud, humidity and precipitation scores are presented. The verification period covers one summer month (June 2013) and one autumn month (November 2013). Verification is based on data collected by a network of approximately 200 stations (surface data verification) and 6 stations (upper-air verification) located in the Alps and their vicinity.

  1. Verification of the karst flow model under laboratory controlled conditions

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Andric, Ivo; Malenica, Luka; Srzic, Veljko

    2016-04-01

    Karst aquifers are very important groundwater resources around the world, as well as in the coastal part of Croatia. They have an extremely complex structure, defined by a slow, laminar porous medium and small fissures together with usually fast, turbulent conduits/karst channels. Apart from simple lumped hydrological models that ignore the high heterogeneity of karst, full hydraulic (distributed) models have been developed using conventional finite element and finite volume methods, considering the complete karst heterogeneity structure, which improves our understanding of complex processes in karst. Groundwater flow modeling in complex karst aquifers faces many difficulties, such as a lack of knowledge of the heterogeneity (especially conduits), the resolution of different spatial/temporal scales, the connectivity between matrix and conduits, the setting of appropriate boundary conditions, and many others. A particular problem of karst flow modeling is the verification of distributed models under real aquifer conditions, due to the lack of the above-mentioned information. Therefore, we show here the possibility of verifying karst flow models under laboratory controlled conditions. A special 3-D karst flow model (5.6*2.6*2 m) consists of a concrete construction, a rainfall platform, 74 piezometers, 2 reservoirs and other supply equipment. The model is filled with fine sand (3-D porous matrix) and drainage plastic pipes (1-D conduits). This model provides knowledge of the full heterogeneity structure, including the position of the different sand layers as well as the conduits' location and geometry. Moreover, we know the geometry of the conduit perforations, which enables analysis of the interaction between matrix and conduits. In addition, the pressure and precipitation distribution and the discharge flow rates from both phases can be measured very accurately. These possibilities are not available at real sites, which makes this model much more useful for karst flow modeling. Many experiments were performed under different controlled conditions such as different

  2. On the new metrics for IMRT QA verification.

    PubMed

    Garcia-Romero, Alejandro; Hernandez-Vitoria, Araceli; Millan-Cebrian, Esther; Alba-Escorihuela, Veronica; Serrano-Zabaleta, Sonia; Ortega-Pardina, Pablo

    2016-11-01

    The aim of this work is to search for new metrics that could give more reliable acceptance/rejection criteria in the IMRT verification process and to offer solutions to the discrepancies found among different conventional metrics. Therefore, besides conventional metrics, new ones are proposed and evaluated with new tools to find correlations among them. These new metrics are based on the processing of the dose-volume histogram information, evaluating the absorbed dose differences, the dose constraint fulfillment, or modified biomathematical treatment outcome models such as tumor control probability (TCP) and normal tissue complication probability (NTCP). An additional purpose is to establish whether the new metrics yield the same acceptance/rejection plan distribution as the conventional ones. Fifty-eight treatment plans concerning several patient locations are analyzed. All of them were verified prior to treatment using conventional metrics, and retrospectively after treatment with the new metrics. These new metrics include the definition of three continuous functions, based on dose-volume histograms resulting from measurements evaluated with a reconstructed dose system and also with a redundant Monte Carlo calculation. The 3D gamma function for every volume of interest is also calculated. The information is also processed to obtain ΔTCP or ΔNTCP for the considered volumes of interest. These biomathematical treatment outcome models have been modified to increase their sensitivity to dose changes. A robustness index is defined, from a radiobiological point of view, to classify plans by robustness against dose changes. Dose difference metrics can be condensed into a single parameter: the dose difference global function, with an optimal cutoff that can be determined from a receiver operating characteristic (ROC) analysis of the metric. It is not always possible to correlate differences in biomathematical treatment outcome models with dose difference metrics
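
    As an illustration of the ROC-based cutoff selection mentioned above, here is a minimal sketch (Python) that sweeps thresholds of a scalar dose-difference metric and picks the cutoff maximizing Youden's J; the metric values and labels are synthetic, not data from the study.

        import numpy as np

        def roc_optimal_cutoff(metric, accepted):
            # Sweep thresholds of a dose-difference global function and pick
            # the cutoff maximizing Youden's J = sensitivity + specificity - 1.
            metric = np.asarray(metric)
            y = np.asarray(accepted).astype(bool)
            best = (-1.0, None)
            for t in np.unique(metric):
                pred = metric <= t          # small dose difference -> accept
                sens = (pred & y).sum() / max(y.sum(), 1)
                spec = (~pred & ~y).sum() / max((~y).sum(), 1)
                j = sens + spec - 1
                if j > best[0]:
                    best = (j, t)
            return best                     # (Youden J, optimal cutoff)

        rng = np.random.default_rng(2)
        good = rng.normal(1.0, 0.3, 40)     # metric values, acceptable plans
        bad = rng.normal(2.0, 0.5, 18)      # metric values, plans to reject
        metric = np.concatenate([good, bad])
        labels = np.concatenate([np.ones(40), np.zeros(18)])
        print(roc_optimal_cutoff(metric, labels))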

  3. Gender verification testing in sport.

    PubMed

    Ferris, E A

    1992-07-01

    Gender verification testing in sport, first introduced in 1966 by the International Amateur Athletic Federation (IAAF) in response to fears that males with a physical advantage in terms of muscle mass and strength were cheating by masquerading as females in women's competition, has led to unfair disqualifications of women athletes and untold psychological harm. The discredited sex chromatin test, which identifies only the sex chromosome component of gender and is therefore misleading, was abandoned in 1991 by the IAAF in favour of medical checks for all athletes, women and men, which preclude the need for gender testing. But, women athletes will still be tested at the Olympic Games at Albertville and Barcelona using polymerase chain reaction (PCR) to amplify DNA sequences on the Y chromosome which identifies genetic sex only. Gender verification testing may in time be abolished when the sporting community are fully cognizant of its scientific and ethical implications.

  4. Wavelet Features Based Fingerprint Verification

    NASA Astrophysics Data System (ADS)

    Bagadi, Shweta U.; Thalange, Asha V.; Jain, Giridhar P.

    2010-11-01

    In this work, we present an automatic fingerprint identification system based on Level 3 features. Systems based only on minutiae features do not perform well for poor quality images. In practice, we often encounter extremely dry or wet fingerprint images with cuts, warts, etc. Because of such fingerprints, minutiae-based systems show poor performance for real-time authentication applications. To alleviate the problem of poor quality fingerprints, and to improve the overall performance of the system, this paper proposes fingerprint verification based on wavelet statistical features and co-occurrence matrix features. The features include mean, standard deviation, energy, entropy, contrast, local homogeneity, cluster shade, cluster prominence, and information measure of correlation. In this method, matching can be done between the input image and the stored template without exhaustive search using the extracted features. The wavelet transform based approach is better than the existing minutiae-based method; it takes less response time and hence is suitable for on-line verification, with high accuracy.
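
    A minimal sketch (Python) of feature extraction in this spirit, assuming the PyWavelets and scikit-image libraries are available: wavelet subband statistics (mean, standard deviation, energy, entropy) plus a few co-occurrence properties. Cluster shade/prominence and the matching stage are omitted, and the function name is hypothetical.

        import numpy as np
        import pywt                                   # PyWavelets
        from skimage.feature import graycomatrix, graycoprops

        def fingerprint_features(img):
            # img: 2-D uint8 grayscale fingerprint image
            feats = {}
            cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "db2")
            for name, band in [("A", cA), ("H", cH), ("V", cV), ("D", cD)]:
                p = np.abs(band) / (np.abs(band).sum() + 1e-12)
                feats[f"mean_{name}"] = band.mean()
                feats[f"std_{name}"] = band.std()
                feats[f"energy_{name}"] = (band ** 2).sum()
                feats[f"entropy_{name}"] = -(p * np.log2(p + 1e-12)).sum()
            # Gray-level co-occurrence matrix features at distance 1, angle 0
            glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            for prop in ("contrast", "homogeneity", "energy", "correlation"):
                feats[f"glcm_{prop}"] = graycoprops(glcm, prop)[0, 0]
            return feats

        rng = np.random.default_rng(3)
        demo = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        print(len(fingerprint_features(demo)))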

  5. NEXT Thruster Component Verification Testing

    NASA Technical Reports Server (NTRS)

    Pinero, Luis R.; Sovey, James S.

    2007-01-01

    Component testing is a critical part of thruster life validation activities under NASA's Evolutionary Xenon Thruster (NEXT) project. The high voltage propellant isolators were selected for design verification testing. Even though they are based on a heritage design, design changes were made because the isolators will be operated under different environmental conditions, including temperature, voltage, and pressure. The life test of two NEXT isolators was therefore initiated and has accumulated more than 10,000 hr of operation. Measurements to date indicate only a negligibly small increase in leakage current. The cathode heaters were also selected for verification testing. The technology to fabricate these heaters, developed for the International Space Station plasma contactor hollow cathode assembly, was transferred to Aerojet for the fabrication of the NEXT prototype model ion thrusters. Testing the contractor-fabricated heaters is necessary to validate fabrication processes for high-reliability heaters. This paper documents the status of the propellant isolator and cathode heater tests.

  6. Land surface Verification Toolkit (LVT)

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.

    2017-01-01

    LVT is a framework developed to provide an automated, consolidated environment for systematic land surface model evaluation. It includes support for a range of in-situ, remote-sensing, and other model and reanalysis products, and supports the analysis of outputs from various LIS subsystems, including LIS-DA, LIS-OPT, and LIS-UE. Note: The Land Information System Verification Toolkit (LVT) is a NASA software tool designed to enable the evaluation, analysis and comparison of outputs generated by the Land Information System (LIS). The LVT software is released under the terms and conditions of the NASA Open Source Agreement (NOSA) Version 1.1 or later.

  7. Ontology Matching with Semantic Verification

    PubMed Central

    Jean-Mary, Yves R.; Shironoshita, E. Patrick; Kabuka, Mansur R.

    2009-01-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies. PMID:20186256

  8. Verification and transparency in future arms control

    SciTech Connect

    Pilat, J.F.

    1996-09-01

    Verification's importance has changed dramatically over time, although it has always been in the forefront of arms control. The goals and measures of verification and the criteria for success have changed with the times as well, reflecting such factors as the centrality of the prospective agreement to East-West relations during the Cold War, the state of relations between the United States and the Soviet Union, and the technologies available for monitoring. Verification's role may be declining in the post-Cold War period. The prospects for such a development will depend, first and foremost, on the high costs of traditional arms control, especially those associated with requirements for verification. Moreover, the growing interest in informal, or non-negotiated, arms control does not allow for verification provisions by the very nature of these arrangements. Multilateral agreements are also becoming more prominent and argue against highly effective verification measures, in part because of fears of promoting proliferation by opening sensitive facilities to inspectors from potential proliferant states. As a result, it is likely that transparency and confidence-building measures will achieve greater prominence, both as supplements to and substitutes for traditional verification. Such measures are not panaceas and do not offer all that we came to expect from verification during the Cold War. But they may be the best possible means to deal with current problems of arms reductions and restraints at acceptable levels of expenditure.

  9. Subsurface barrier verification technologies, informal report

    SciTech Connect

    Heiser, J.H.

    1994-06-01

    One of the more promising remediation options available to the DOE waste management community is subsurface barriers. Some of the uses of subsurface barriers include surrounding and/or containing buried waste, as secondary confinement of underground storage tanks, to direct or contain subsurface contaminant plumes and to restrict remediation methods, such as vacuum extraction, to a limited area. To be most effective the barriers should be continuous and depending on use, have few or no breaches. A breach may be formed through numerous pathways including: discontinuous grout application, from joints between panels and from cracking due to grout curing or wet-dry cycling. The ability to verify barrier integrity is valuable to the DOE, EPA, and commercial sector and will be required to gain full public acceptance of subsurface barriers as either primary or secondary confinement at waste sites. It is recognized that no suitable method exists for the verification of an emplaced barrier's integrity. The large size and deep placement of subsurface barriers makes detection of leaks challenging. This becomes magnified if the permissible leakage from the site is low. Detection of small cracks (fractions of an inch) at depths of 100 feet or more has not been possible using existing surface geophysical techniques. Compounding the problem of locating flaws in a barrier is the fact that no placement technology can guarantee the completeness or integrity of the emplaced barrier. This report summarizes several commonly used or promising technologies that have been or may be applied to in-situ barrier continuity verification.

  10. Crowd-Sourced Program Verification

    DTIC Science & Technology

    2012-12-01

    ...investigation, the contractor constructed a prototype of a crowd-sourced verification system that takes as input a given program and produces as output a

  11. Formal verification of AI software

    NASA Technical Reports Server (NTRS)

    Rushby, John; Whitehurst, R. Alan

    1989-01-01

    The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.

  12. A methodology for the rigorous verification of Particle-in-Cell simulations

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Beadle, Carrie F.; Ricci, Paolo

    2017-05-01

    A methodology to perform a rigorous verification of Particle-in-Cell (PIC) simulations is presented, both for assessing the correct implementation of the model equations (code verification) and for evaluating the numerical uncertainty affecting the simulation results (solution verification). The proposed code verification methodology is a generalization of the procedure developed for plasma simulation codes based on finite difference schemes that was described by Riva et al. [Phys. Plasmas 21, 062301 (2014)] and consists of an order-of-accuracy test using the method of manufactured solutions. The generalization of the methodology for PIC codes consists of accounting for numerical schemes intrinsically affected by statistical noise and providing a suitable measure of the distance between continuous, analytical distribution functions and finite samples of computational particles. The solution verification consists of quantifying both the statistical and discretization uncertainties. The statistical uncertainty is estimated by repeating the simulation with different pseudorandom number generator seeds. For the discretization uncertainty, the Richardson extrapolation is used to provide an approximation of the analytical solution and the grid convergence index is used as an estimate of the relative discretization uncertainty. The code verification methodology is successfully applied to a PIC code that numerically solves the one-dimensional, electrostatic, collisionless Vlasov-Poisson system. The solution verification methodology is applied to quantify the numerical uncertainty affecting the two-stream instability growth rate, which is numerically evaluated thanks to a PIC simulation.
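
    A minimal sketch (Python) of the solution-verification recipe described above: given a scalar result (such as an instability growth rate) from three systematically refined runs, compute the observed order of accuracy, the Richardson-extrapolated value, and the grid convergence index. The sample numbers and the safety factor of 1.25 (a common choice for three-grid studies) are illustrative.

        import numpy as np

        def observed_order_and_gci(f_coarse, f_medium, f_fine, r=2.0, fs=1.25):
            # Observed order of accuracy from three refinement levels
            p = (np.log(abs((f_coarse - f_medium) / (f_medium - f_fine)))
                 / np.log(r))
            # Richardson extrapolation approximates the analytical solution
            f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)
            # Grid convergence index: relative discretization uncertainty
            e_rel = abs((f_fine - f_medium) / f_fine)
            gci = fs * e_rel / (r**p - 1)
            return p, f_exact, gci

        # Growth rates from three runs at dt, dt/2, dt/4 (illustrative values)
        p, f_exact, gci = observed_order_and_gci(0.190, 0.1975, 0.19938)
        print(f"order ~ {p:.2f}, extrapolated = {f_exact:.5f}, GCI = {gci:.2%}")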

  13. LOCATING MONITORING STATIONS IN WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Water undergoes changes in quality between the time it leaves the treatment plant and the time it reaches the customer's tap, making it important to select monitoring stations that will adequately monitor these changes. But because there is no uniform schedule or framework for ...

  14. LOCATING MONITORING STATIONS IN WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Water undergoes changes in quality between the time it leaves the treatment plant and the time it reaches the customer's tap, making it important to select monitoring stations that will adequately monitor these changes. But because there is no uniform schedule or framework for ...

  15. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve the scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce the analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program
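
    A toy illustration (Python, using the z3 solver as the off-the-shelf decision procedure) of checking two summaries for equivalence: if asserting that the summaries differ is unsatisfiable, the behaviors are equivalent. The formulas are invented stand-ins for summaries produced by symbolic execution, not the paper's benchmarks.

        from z3 import Int, Solver, unsat   # pip install z3-solver

        x = Int("x")
        # Summaries of the same behavior in two program versions:
        v1 = x * 2          # version 1 computes x * 2
        v2 = x + x          # version 2 computes x + x

        s = Solver()
        s.add(v1 != v2)     # any model here would witness a difference
        print("equivalent" if s.check() == unsat else "behaviors differ")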

  16. Nuclear Data Verification and Standardization

    SciTech Connect

    Karam, Lisa R.; Arif, Muhammad; Thompson, Alan K.

    2011-10-01

    The objective of this interagency program is to provide accurate neutron interaction verification and standardization data for the U.S. Department of Energy Division of Nuclear Physics programs which include astrophysics, radioactive beam studies, and heavy-ion reactions. The measurements made in this program are also useful to other programs that indirectly use the unique properties of the neutron for diagnostic and analytical purposes. These include homeland security, personnel health and safety, nuclear waste disposal, treaty verification, national defense, and nuclear based energy production. The work includes the verification of reference standard cross sections and related neutron data employing the unique facilities and capabilities at NIST and other laboratories as required; leadership and participation in international intercomparisons and collaborations; and the preservation of standard reference deposits. An essential element of the program is critical evaluation of neutron interaction data standards including international coordinations. Data testing of critical data for important applications is included. The program is jointly supported by the Department of Energy and the National Institute of Standards and Technology.

  17. Verification Challenges at Low Numbers

    SciTech Connect

    Benz, Jacob M.; Booker, Paul M.; McDonald, Benjamin S.

    2013-06-01

    Many papers have dealt with the political difficulties and ramifications of deep nuclear arms reductions and the issues of “Going to Zero”. Political issues include extended deterrence, conventional weapons, ballistic missile defense, and regional and geo-political security issues. At each step on the road to low numbers, the verification required to ensure compliance of all parties will increase significantly. Looking beyond New START, the next step will likely include warhead limits in the neighborhood of 1000. Further reductions will include stepping stones at 1000 warheads, hundreds of warheads, and then tens of warheads before final elimination of the last few remaining warheads and weapons could be considered. This paper will focus on these three threshold reduction levels: 1000, hundreds, and tens. For each, the issues and challenges will be discussed, potential solutions will be identified, and the verification technologies and chain-of-custody measures that address these solutions will be surveyed. It is important to note that many of the issues that need to be addressed have no current solution. In these cases, the paper will explore new or novel technologies that could be applied. These technologies will draw from the research and development that is ongoing throughout the national laboratory complex, and will look at technologies utilized in other areas of industry for their application to arms control verification.

  18. Verification Test Suite for Physics Simulation Codes

    SciTech Connect

    Brock, J S; Kamm, J R; Rider, W J; Brandon, S; Woodward, C; Knupp, P; Trucano, T G

    2006-12-21

    The DOE/NNSA Advanced Simulation & Computing (ASC) Program directs the development, demonstration and deployment of physics simulation codes. The defensible utilization of these codes for high-consequence decisions requires rigorous verification and validation of the simulation software. The physics and engineering codes used at Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratory (SNL) are arguably among the most complex utilized in computational science. Verification represents an important aspect of the development, assessment and application of simulation software for physics and engineering. The purpose of this note is to formally document the existing tri-laboratory suite of verification problems used by LANL, LLNL, and SNL, i.e., the Tri-Lab Verification Test Suite. Verification is often referred to as ensuring that "the [discrete] equations are solved [numerically] correctly". More precisely, verification develops evidence of mathematical consistency between continuum partial differential equations (PDEs) and their discrete analogues, and provides an approach by which to estimate discretization errors. There are two variants of verification: (1) code verification, which compares simulation results to known analytical solutions, and (2) calculation verification, which estimates convergence rates and discretization errors without knowledge of a known solution. Together, these verification analyses support defensible verification and validation (V&V) of physics and engineering codes that are used to simulate complex problems that do not possess analytical solutions. Discretization errors (e.g., spatial and temporal errors) are embedded in the numerical solutions of the PDEs that model the relevant governing equations. Quantifying discretization errors, which comprise only a portion of the total numerical simulation error, is possible through code and calculation verification. Code verification
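
    A minimal sketch (Python) of code verification in the sense defined above: compare a numerical solver against a known analytical solution on successively refined step sizes and report the observed convergence orders. Forward Euler on y' = y is used purely as a stand-in example, not as a problem from the Tri-Lab suite.

        import numpy as np

        def convergence_order(solver, exact, hs):
            # Errors against the analytical solution on refined step sizes,
            # then pairwise observed orders p = log(e_i/e_{i+1}) / log(h_i/h_{i+1})
            errs = [abs(solver(h) - exact) for h in hs]
            return [np.log(errs[i] / errs[i + 1]) / np.log(hs[i] / hs[i + 1])
                    for i in range(len(errs) - 1)]

        def euler(h):
            # Forward-Euler integration of y' = y, y(0) = 1, up to t = 1
            y, t = 1.0, 0.0
            while t < 1.0 - 1e-12:
                y += h * y
                t += h
            return y

        # Exact solution at t = 1 is e; observed orders come out near 1,
        # consistent with a first-order method.
        print(convergence_order(euler, np.e, [0.1, 0.05, 0.025]))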

  19. The verification system of the Comprehensive Nuclear-Test-Ban Treaty

    NASA Astrophysics Data System (ADS)

    Suarez, Gerardo

    2002-11-01

    The Comprehensive Nuclear-Test-Ban Treaty was opened for signature in September 1996. To date, the treaty has been signed by 165 countries and ratified by 93; among the latter, 31 out of the 44 whose ratification is needed for the treaty to enter into force. The treaty calls for the installation and operation of a verification system to ensure compliance. The verification system is composed of the International Monitoring System (IMS), the International Data Centre (IDC), and the On Site Inspection Division (OSI). The IMS is a global network of 321 stations hosted by 90 countries. The primary network is composed of 50 seismic stations, 31 of which are seismic arrays and 19 three-component, broad-band stations, 11 hydroacoustic stations, 60 infrasound arrays, and 80 radionuclide monitoring stations measuring radioactive particulates and noble gases in the atmosphere. The radionuclide network is supported by 16 laboratories. The auxiliary network of 120 seismic stations is interrogated on request by the IDC to improve the accuracy of the locations. The data from the 321 stations and from the laboratories is transmitted to the IDC in Vienna via a dedicated Global Communication Infrastructure (GCI) based on VSAT antennas. The IDC collects and processes the data collected from the four technologies and produces bulletins of events. The raw data and bulletins are distributed to state signatories. Upon entry into force, an on-site inspection may be carried out if it is suspected that a nuclear explosion has taken place. Since mid-1997, when the Provisional Technical Secretariat responsible for the implementation of the verification system began its work in Vienna, over 86% of the sites have been surveyed and the final location of the stations selected. By the end of 2002 this number will reach about 90%, essentially completing this phase. To date, 131 stations have been built or upgraded, and 80 are now sending data to the IDC; 112 others are under construction or under

  20. Gender verification in competitive sports.

    PubMed

    Simpson, J L; Ljungqvist, A; de la Chapelle, A; Ferguson-Smith, M A; Genel, M; Carlson, A S; Ehrhardt, A A; Ferris, E

    1993-11-01

    The possibility that men might masquerade as women and be unfair competitors in women's sports is accepted as outrageous by athletes and the public alike. Since the 1930s, media reports have fuelled claims that individuals who once competed as female athletes subsequently appeared to be men. In most of these cases there was probably ambiguity of the external genitalia, possibly as a result of male pseudohermaphroditism. Nonetheless, beginning at the Rome Olympic Games in 1960, the International Amateur Athletics Federation (IAAF) began establishing rules of eligibility for women athletes. Initially, physical examination was used as a method for gender verification, but this plan was widely resented. Thus, sex chromatin testing (buccal smear) was introduced at the Mexico City Olympic Games in 1968. The principle was that genetic females (46,XX) show a single X-chromatic mass, whereas males (46,XY) do not. Unfortunately, sex chromatin analysis fell out of common diagnostic use by geneticists shortly after the International Olympic Committee (IOC) began its implementation for gender verification. The lack of laboratories routinely performing the test aggravated the problem of errors in interpretation by inexperienced workers, yielding false-positive and false-negative results. However, an even greater problem is that there exist phenotypic females with male sex chromatin patterns (e.g. androgen insensitivity, XY gonadal dysgenesis). These individuals have no athletic advantage as a result of their congenital abnormality and reasonably should not be excluded from competition. That is, only the chromosomal (genetic) sex is analysed by sex chromatin testing, not the anatomical or psychosocial status. For all the above reasons sex chromatin testing unfairly excludes many athletes. Although the IOC offered follow-up physical examinations that could have restored eligibility for those 'failing' sex chromatin tests, most affected athletes seemed to prefer to 'retire'. All

  1. PERFORMANCE VERIFICATION OF STORMWATER TREATMENT DEVICES UNDER EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The Environmental Technology Verification (ETV) Program was created to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The program's goal is to further environmental protection by a...

  2. PERFORMANCE VERIFICATION OF ANIMAL WATER TREATMENT TECHNOLOGIES THROUGH EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The U.S. Environmental Protection Agency created the Environmental Technology Verification Program (ETV) to further environmental protection by accelerating the commercialization of new and innovative technology through independent performance verification and dissemination of in...

  3. PERFORMANCE VERIFICATION OF STORMWATER TREATMENT DEVICES UNDER EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The Environmental Technology Verification (ETV) Program was created to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The program's goal is to further environmental protection by a...

  4. ENVIRONMENTAL TECHNOLOGY VERIFICATION COATINGS AND COATING EQUIPMENT PROGRAM (ETV CCEP): LIQUID COATINGS--GENERIC VERIFICATION PROTOCOL

    EPA Science Inventory

    This report is a generic verification protocol, or GVP, which provides standards for testing liquid coatings for their environmental impacts under the Environmental Technology Verification program. It provides generic guidelines for product-specific testing and quality assurance p...

  5. ENVIRONMENTAL TECHNOLOGY VERIFICATION: GENERIC VERIFICATION PROTOCOL FOR BIOLOGICAL AND AEROSOL TESTING OF GENERAL VENTILATION AIR CLEANERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency established the Environmental Technology Verification Program to accelerate the development and commercialization of improved environmental technology through third party verification and reporting of product performance. Research Triangl...

  6. PERFORMANCE VERIFICATION OF ANIMAL WATER TREATMENT TECHNOLOGIES THROUGH EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The U.S. Environmental Protection Agency created the Environmental Technology Verification Program (ETV) to further environmental protection by accelerating the commercialization of new and innovative technology through independent performance verification and dissemination of in...

  7. ENVIRONMENTAL TECHNOLOGY VERIFICATION: Reduction of Nitrogen in Domestic Wastewater from Individual Residential Homes. BioConcepts, Inc. ReCip® RTS-500 System

    EPA Science Inventory

    Verification testing of the ReCip® RTS-500 System was conducted over a 12-month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located on Otis Air National Guard Base in Bourne, Massachusetts. A nine-week startup period preceded the verification test t...

  8. Using tools for verification, documentation and testing

    NASA Technical Reports Server (NTRS)

    Osterweil, L. J.

    1978-01-01

    Methodologies are discussed for four of the major approaches to program upgrading -- namely, dynamic testing, symbolic execution, formal verification, and static analysis. The different patterns of strengths, weaknesses and applications of these approaches are shown. It is demonstrated that these patterns are in many ways complementary, offering the hope that they can be coordinated and unified into a single comprehensive program testing and verification system capable of performing a diverse and useful variety of error detection, verification and documentation functions.

  9. Magnetic cleanliness verification approach on tethered satellite

    NASA Technical Reports Server (NTRS)

    Messidoro, Piero; Braghin, Massimo; Grande, Maurizio

    1990-01-01

    Magnetic cleanliness testing was performed on the Tethered Satellite as the last step of an articulated verification campaign aimed at demonstrating the capability of the satellite to support its TEMAG (TEthered MAgnetometer) experiment. Tests at unit level and analytical predictions/correlations using a dedicated mathematical model (GANEW program) are also part of the verification activities. Details of the tests are presented, and the results of the verification are described together with recommendations for later programs.

  10. Input apparatus for dynamic signature verification systems

    DOEpatents

    EerNisse, Errol P.; Land, Cecil E.; Snelling, Jay B.

    1978-01-01

    The disclosure relates to signature verification input apparatus comprising a writing instrument and platen containing piezoelectric transducers which generate signals in response to writing pressures.

  11. The SeaHorn Verification Framework

    NASA Technical Reports Server (NTRS)

    Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.

    2015-01-01

    In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.

  12. Automated verification of flight software. User's manual

    NASA Technical Reports Server (NTRS)

    Saib, S. H.

    1982-01-01

    AVFS (Automated Verification of Flight Software), a collection of tools for analyzing source programs written in FORTRAN and AED, is documented. The quality and the reliability of flight software are improved by: (1) indented listings of source programs, (2) static analysis to detect inconsistencies in the use of variables and parameters, (3) automated documentation, (4) instrumentation of source code, (5) retesting guidance, (6) analysis of assertions, (7) symbolic execution, (8) generation of verification conditions, and (9) simplification of verification conditions. Use of AVFS in the verification of flight software is described.

  13. Acceptance sampling methods for sample results verification

    SciTech Connect

    Jesse, C.A.

    1993-06-01

    This report proposes a statistical sampling method for use during the sample results verification portion of the validation of data packages. In particular, this method was derived specifically for the validation of data packages for metals target analyte analysis performed under United States Environmental Protection Agency Contract Laboratory Program protocols, where sample results verification can be quite time consuming. The purpose of such a statistical method is to provide options in addition to the "all or nothing" options that currently exist for sample results verification. The proposed method allows the amount of data validated during the sample results verification process to be based on a balance between risks and the cost of inspection.
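
    A minimal sketch (Python, assuming scipy) of the risk/cost trade-off behind such a plan: the operating characteristic of a single-sampling scheme gives the probability of accepting a data package as a function of the true discrepancy rate. The sample size n and acceptance number c are illustrative choices, not the report's values.

        from scipy.stats import binom

        def oc_curve(n, c, defect_rates):
            # P(accept) when n results are inspected and at most c
            # discrepancies are allowed (binomial single-sampling plan)
            return {p: binom.cdf(c, n, p) for p in defect_rates}

        # Inspect 20 reported results; accept the package if <= 1 disagrees
        for p, pa in oc_curve(20, 1, [0.01, 0.05, 0.10, 0.20]).items():
            print(f"true discrepancy rate {p:4.0%}  ->  P(accept) = {pa:.3f}")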

  14. Hydrologic data-verification management program plan

    USGS Publications Warehouse

    Alexander, C.W.

    1982-01-01

Data verification refers to the performance of quality control on hydrologic data that have been retrieved from the field and are being prepared for dissemination to water-data users. Water-data users now have access to computerized data files containing unpublished, unverified hydrologic data. Therefore, it is necessary to develop techniques and systems whereby the computer can perform some data-verification functions before the data are stored in user-accessible files. Computerized data-verification routines can be developed for this purpose. A single, unified concept describing a master data-verification program, using multiple special-purpose subroutines and a screen file containing verification criteria, can probably be adapted to any type and size of computer-processing system. Some traditional manual-verification procedures can be adapted for computerized verification, but new procedures can also be developed that would take advantage of the powerful statistical tools and data-handling procedures available to the computer. Prototype data-verification systems should be developed for all three data-processing environments as soon as possible. The WATSTORE system probably affords the greatest opportunity for long-range research and testing of new verification subroutines. (USGS)
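    To make the screen-file idea concrete, the following sketch shows a verification pass driven by a table of per-parameter limits, with out-of-range records flagged for manual review. The field names and ranges are invented for illustration and are not drawn from the USGS plan.

```python
# Hypothetical screen file: legal range for each parameter.
SCREEN_FILE = {
    "stage_ft": (0.0, 35.0),
    "discharge_cfs": (0.0, 250000.0),
    "water_temp_c": (-0.5, 40.0),
}

def screen(record):
    """Return the list of fields that fail the screen-file criteria;
    an empty list means the record passed automated verification."""
    flags = []
    for field, (lo, hi) in SCREEN_FILE.items():
        value = record.get(field)
        if value is None or not (lo <= value <= hi):
            flags.append(field)
    return flags

print(screen({"stage_ft": 12.3, "discharge_cfs": 9000.0, "water_temp_c": 55.0}))
# ['water_temp_c'] -> routed to manual review
```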

  15. The JPSS Ground Project Algorithm Verification, Test and Evaluation System

    NASA Astrophysics Data System (ADS)

    Vicente, G. A.; Jain, P.; Chander, G.; Nguyen, V. T.; Dixon, V.

    2016-12-01

The Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) is an operational system that provides services to the Suomi National Polar-orbiting Partnership (S-NPP) Mission. It is also a unique environment for Calibration/Validation (Cal/Val) and Data Quality Assessment (DQA) of the Joint Polar Satellite System (JPSS) mission data products. GRAVITE provides fast and direct access to the data and products created by the Interface Data Processing Segment (IDPS), the NASA/NOAA operational system that converts Raw Data Records (RDRs) generated by sensors on the S-NPP into calibrated, geolocated Sensor Data Records (SDRs) and generates Mission Unique Products (MUPs). It also facilitates algorithm investigation, integration, checkout and tuning; instrument and product calibration and data quality support; and monitoring and data/product distribution. GRAVITE is the portal for the latest S-NPP and JPSS baselined Processing Coefficient Tables (PCTs) and Look-Up Tables (LUTs) and hosts a number of offline DQA tools that take advantage of the proximity to the near-real-time data flows. It also contains a set of automated and ad hoc Cal/Val tools used for algorithm analysis and updates, including an instance of the IDPS, called the GRAVITE Algorithm Development Area (G-ADA), that runs the latest installation of the IDPS algorithms on identical software and hardware platforms. Two other important GRAVITE components are the Investigator-led Processing System (IPS) and the Investigator Computing Facility (ICF). The IPS is a dedicated environment where authorized users run automated scripts called Product Generation Executables (PGEs) to support Cal/Val and data quality assurance offline. This data-rich and data-driven service holds its own distribution system and allows operators to retrieve science data products. The ICF is a workspace where users can share computing applications and resources and have full access to libraries and

  16. Location of atypical femoral fracture can be determined by tensile stress distribution influenced by femoral bowing and neck-shaft angle: a CT-based nonlinear finite element analysis model for the assessment of femoral shaft loading stress.

    PubMed

    Oh, Yoto; Fujita, Koji; Wakabayashi, Yoshiaki; Kurosa, Yoshiro; Okawa, Atsushi

    2017-09-27

Loading stress due to individual variations in femoral morphology is thought to be strongly associated with the pathogenesis of atypical femoral fracture (AFF). In Japan, studies on the pathogenesis of mid-shaft AFF are well documented, and a key factor in the injury is thought to be femoral shaft bowing deformity. Thus, we developed a CT-based finite element analysis (CT/FEA) model to assess the distribution of loading stress in the femoral shaft. A multicenter prospective study was performed at 12 hospitals in Japan from August 2015 to February 2017. We assembled three study groups, the mid-shaft AFF group (n=12), the subtrochanteric AFF group (n=10), and the control group (n=11), and analyzed femoral morphology and loading stress in the femoral shaft by nonlinear CT/FEA. Femoral bowing in the mid-shaft AFF group was significantly greater (lateral bowing, p<0.0001; anterior bowing, p<0.01). The femoral neck-shaft angle in the subtrochanteric AFF group was significantly smaller (p<0.001). On CT/FEA, both the mid-shaft and subtrochanteric AFF groups showed maximum tensile stress located adjacent to the fracture site. Quantitatively, there was a correlation between femoral bowing and the ratio of tensile stress calculated between the mid-shaft and subtrochanteric region (lateral bowing, r=0.6373, p<0.0001; anterior bowing, r=-0.5825, p<0.001). CT/FEA demonstrated that tensile stress arising from loading can cause AFF, and that the location of AFF injury could be determined by the individual stress distribution influenced by femoral bowing and neck-shaft angle. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. 40 CFR 1065.550 - Gas analyzer range verification and drift verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Cycles § 1065.550 Gas analyzer range verification and drift verification. (a) Range verification. If an... with a CLD and the removed water is corrected based on measured CO2, CO, THC, and NOX concentrations...-specific emissions over the entire duty cycle for drift. For each constituent to be verified, both sets...

  18. Verification of regional climates of GISS GCM. Part 2: Summer

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Rind, David

    1989-01-01

Verification is performed for the synoptic fields (sea-level pressure, precipitation rate, 200 mb zonal wind, and surface resultant wind) generated by two versions of the Goddard Institute for Space Studies (GISS) climate model. The models differ in the horizontal resolution of the computation grids and in the specification of the sea-surface temperatures. Maps of the regional distributions of seasonal means of the model fields are shown alongside maps of the observed distributions. Comparisons of the model results with observations are discussed and also summarized in tables by geographic region.

  19. Verification of Public Weather Forecasts Available via the Media.

    NASA Astrophysics Data System (ADS)

    Brooks, Harold E.; Witt, Arthur; Eilts, Michael D.

    1997-10-01

    The question of who is the "best" forecaster in a particular media market is one that the public frequently asks. The authors have collected approximately one year's forecasts from the National Weather Service and major media presentations for Oklahoma City. Diagnostic verification procedures indicate that the question of best does not have a clear answer. All of the forecast sources have strengths and weaknesses, and it is possible that a user could take information from a variety of sources to come up with a forecast that has more value than any one individual source provides. The analysis provides numerous examples of the utility of a distributions-oriented approach to verification while also providing insight into the problems the public faces in evaluating the array of forecasts presented to them.

  20. Variability in the Galactic globular cluster M15. Science verification phase of T80Cam/JAST80@OAJ

    NASA Astrophysics Data System (ADS)

    Vázquez Ramió, H.; Varela, J.; Cristóbal-Hornillos, D.; Muniesa, D.; Civera, T.; Hernández-Fuertes, J.; Ederoclite, A.; Blanco Siffert, B.; Chies Santos, A.; San Roman, I.; Lamadrid, J. L.; Iglesias Marzoa, R.; Díaz-Martín, M. C.; Kanaan, A.; Carvano, J.; Cortesi, A.; Ribeiro, T.; Reis, R.; Coelho, P.; Castillo, J.; López, A.; López San Juan, C.; Cenarro, A. J.; Marín-Franch, A.; Yanes, A.; Moles, M.

    2017-03-01

In the framework of the Science Verification Phase of T80Cam on the 83 cm Javalambre Auxiliary Survey Telescope (JAST80), located at the Observatorio Astrofísico de Javalambre (OAJ), Teruel, Spain, a program was proposed to study the variability of RR Lyrae stars, as well as other variable sources, belonging to the Galactic globular cluster M15. The observations were carried out at different epochs (almost a dozen nights over a ~4-month period) using the complete set of 12 filters, centered on the optical spectral range, that are devoted to the execution of the ongoing Javalambre Photometric Local Universe Survey (J-PLUS). One of the main goals is the characterization of the variability of the spectral energy distribution of RR Lyrae stars along their pulsation cycle. This will be used to define methods to detect these types of variables in J-PLUS. Preliminary results are presented here.

  1. Why do verification and validation?

    SciTech Connect

    Hu, Kenneth T.; Paez, Thomas L.

    2016-02-19

In this discussion paper, we explore different ways to assess the value of verification and validation (V&V) of engineering models. We first present a literature review on the value of V&V and then use value chains and decision trees to show how value can be assessed from a decision maker's perspective. In this context, the value is what the decision maker is willing to pay for V&V analysis, with the understanding that the V&V results are uncertain. The 2014 Sandia V&V Challenge Workshop is used to illustrate these ideas.
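    A decision-tree assessment of this kind can be reduced to an expected-value calculation. The sketch below compares the expected loss with and without a V&V step; every probability and cost in it is an invented, illustrative assumption, not a figure from the paper.

```python
# Illustrative decision-tree arithmetic: pay for V&V only if the
# expected loss avoided exceeds its cost. All numbers are assumptions.
p_model_wrong = 0.2           # prior probability the model is wrong
loss_if_deployed_wrong = 1e6  # cost of acting on a wrong model
vv_cost = 5e4                 # cost of the V&V analysis
p_vv_detects = 0.9            # chance V&V catches a wrong model

expected_loss_without = p_model_wrong * loss_if_deployed_wrong
expected_loss_with = vv_cost + p_model_wrong * (1 - p_vv_detects) * loss_if_deployed_wrong
print(f"Value of V&V: {expected_loss_without - expected_loss_with:,.0f}")  # 130,000
```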

  2. Optimal Imaging for Treaty Verification

    SciTech Connect

    Brubaker, Erik; Hilton, Nathan R.; Johnson, William; Marleau, Peter; Kupinski, Matthew; MacGahan, Christopher Jonathan

    2014-09-01

Future arms control treaty verification regimes may use radiation imaging measurements to confirm and track nuclear warheads or other treaty accountable items (TAIs). This project leverages advanced inference methods developed for medical and adaptive imaging to improve task performance in arms control applications. Additionally, we seek a method to acquire and analyze imaging data of declared TAIs without creating an image of those objects or otherwise storing or revealing any classified information. Such a method would avoid the use of classified information barriers (IBs).

  3. Structural dynamics verification facility study

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.; Hirchbein, M. S.; Mcaleese, J. M.; Fleming, D. P.

    1981-01-01

    The need for a structural dynamics verification facility to support structures programs was studied. Most of the industry operated facilities are used for highly focused research, component development, and problem solving, and are not used for the generic understanding of the coupled dynamic response of major engine subsystems. Capabilities for the proposed facility include: the ability to both excite and measure coupled structural dynamic response of elastic blades on elastic shafting, the mechanical simulation of various dynamical loadings representative of those seen in operating engines, and the measurement of engine dynamic deflections and interface forces caused by alternative engine mounting configurations and compliances.

  4. Location-dependent RF geotags for positioning and security

    NASA Astrophysics Data System (ADS)

    Qiu, Di; Lynch, Robert; Yang, Chun

    2011-06-01

    Geo-security service, which refers to the authorization of persons or facilities based on their distinctive location information, is an application of the fields of position, navigation and time (PNT). Location features from radio navigation signals are mapped into a precise verification tag or geotag to block or allow certain action or access. A device that integrates a location sensor and geotag generation algorithm is tamper-resistant, that is, one cannot spoof the device to bypass the location validation. This paper develops a theoretical framework of the geotag-based positioning and security systems, and evaluates the system performance analytically and experimentally by using Loran signals as a case study.
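    One simple way to realize such a geotag, sketched below under strong simplifying assumptions, is to quantize the measured RF location features so that repeated measurements at the same place collide, then hash them together with a device secret. The quantization scheme, feature values, and key handling here are illustrative; the paper's construction is more involved and tolerant of cell-boundary effects.

```python
import hashlib

def make_geotag(features, secret, quantum=0.5):
    """Map noisy RF location features to a reproducible tag: quantize each
    feature so nearby measurements land in the same cell, then hash with a
    device secret. Conceptual sketch only; measurements near a cell boundary
    would need an error-tolerant encoding in practice."""
    cells = tuple(round(f / quantum) for f in features)
    return hashlib.sha256(repr(cells).encode() + secret).hexdigest()

tag_enrolled = make_geotag([101.7, 43.2, -76.9], b"device-key")
tag_now = make_geotag([101.6, 43.1, -77.0], b"device-key")
print(tag_now == tag_enrolled)  # True: both measurements fall in the same cells
```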

  5. GENERIC VERIFICATION PROTOCOL FOR THE VERIFICATION OF PESTICIDE SPRAY DRIFT REDUCTION TECHNOLOGIES FOR ROW AND FIELD CROPS

    EPA Science Inventory

    This ETV program generic verification protocol was prepared and reviewed for the Verification of Pesticide Drift Reduction Technologies project. The protocol provides a detailed methodology for conducting and reporting results from a verification test of pesticide drift reductio...

  6. METHOD OF LOCATING GROUNDS

    DOEpatents

    Macleish, K.G.

    1958-02-11

This patent presents a method for locating a ground in a d-c circuit having a number of parallel branches connected across a d-c source or generator. The complete method comprises the steps of locating the ground with reference to the midpoint of the parallel branches by connecting a potentiometer across the terminals of the circuit and connecting the slider of the potentiometer to ground through a current-indicating instrument, adjusting the slider to right or left of the midpoint so as to cause the instrument to indicate zero, connecting the terminal of the network which is farthest from the ground as thus indicated by the potentiometer to ground through a condenser, impressing a ripple voltage on the circuit, and then measuring the ripple voltage at the midpoint of each parallel branch to find the branch with the lowest value of ripple voltage, and then measuring the distribution of the ripple voltage along this branch to determine the point at which the ripple voltage drops off to zero or substantially zero due to the existence of a ground. The invention has particular application where a circuit ground is present which will disappear if the normal circuit voltage is removed.

  7. Location-assured, multifactor authentication on smartphones via LTE communication

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham

    2013-05-01

With the added security provided by LTE, geographical location has become an important factor for authentication to enhance the security of remote client authentication during mCommerce applications using Smartphones. A tight combination of geographical location with classic authentication factors like PINs/biometrics, in a real-time, remote verification scheme over the LTE layer connection, assures the authenticator about the client itself (via PIN/biometric) as well as the client's current location, thus defining the important aspects of "who", "when", and "where" of the authentication attempt without eavesdropping or man-in-the-middle attacks. To securely integrate location as an authentication factor into the remote authentication scheme, the client's location must be verified independently, i.e. the authenticator should not rely solely on the location determined on and reported by the client's Smartphone. The latest wireless data communication technology for mobile phones (4G LTE, Long-Term Evolution), recently being rolled out in various networks, can be employed to meet this requirement of independent location verification. LTE's Control Plane LBS provisions, when integrated with user-based authentication and independent sources of localisation factors, ensure secure, efficient, continuous location tracking of the Smartphone. This tracking can be performed during normal operation of the LTE-based communication between client and network operator, resulting in the authenticator being able to verify the client's claimed location more securely and accurately. Trials and experiments show that such an algorithm implementation is viable for today's Smartphone-based banking via LTE communication.
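    The "who/when/where" check described above can be pictured as a conjunction of three tests, as in the hedged sketch below: a PIN digest comparison, an independent-versus-reported position check, and a freshness window. The thresholds, helper names, and data layout are assumptions made for illustration, not the authors' protocol.

```python
import hashlib
import hmac
import math
import time

def verify(pin_digest, pin_attempt, reported_pos, network_pos, sent_at,
           max_drift_m=500.0, max_age_s=30.0):
    """Accept only if PIN matches (who), the network-derived position agrees
    with the client-reported one (where), and the request is fresh (when).
    Positions are (x, y) in metres in a local frame; all limits illustrative."""
    who = hmac.compare_digest(
        pin_digest, hashlib.sha256(pin_attempt.encode()).hexdigest())
    drift = math.hypot(reported_pos[0] - network_pos[0],
                       reported_pos[1] - network_pos[1])
    where = drift <= max_drift_m
    when = (time.time() - sent_at) <= max_age_s
    return who and where and when
```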

  8. Enrichment Assay Methods Development for the Integrated Cylinder Verification System

    SciTech Connect

    Smith, Leon E.; Misner, Alex C.; Hatchell, Brian K.; Curtis, Michael M.

    2009-10-22

    International Atomic Energy Agency (IAEA) inspectors currently perform periodic inspections at uranium enrichment plants to verify UF6 cylinder enrichment declarations. Measurements are typically performed with handheld high-resolution sensors on a sampling of cylinders taken to be representative of the facility's entire product-cylinder inventory. Pacific Northwest National Laboratory (PNNL) is developing a concept to automate the verification of enrichment plant cylinders to enable 100 percent product-cylinder verification and potentially, mass-balance calculations on the facility as a whole (by also measuring feed and tails cylinders). The Integrated Cylinder Verification System (ICVS) could be located at key measurement points to positively identify each cylinder, measure its mass and enrichment, store the collected data in a secure database, and maintain continuity of knowledge on measured cylinders until IAEA inspector arrival. The three main objectives of this FY09 project are summarized here and described in more detail in the report: (1) Develop a preliminary design for a prototype NDA system, (2) Refine PNNL's MCNP models of the NDA system, and (3) Procure and test key pulse-processing components. Progress against these tasks to date, and next steps, are discussed.

  9. Hailstorms over Switzerland: Verification of Crowd-sourced Data

    NASA Astrophysics Data System (ADS)

    Noti, Pascal-Andreas; Martynov, Andrey; Hering, Alessandro; Martius, Olivia

    2016-04-01

Reports from smartphone users witnessing hailstorms can be used as a source of independent, ground-based observation data on ground-reaching hailstorms with high temporal and spatial resolution. The presented work focuses on the verification of crowd-sourced data collected over Switzerland with the help of a smartphone application recently developed by MeteoSwiss. The precise location and time of hail precipitation and the hailstone size are included in the crowd-sourced data and are assessed on the basis of the weather radar data of MeteoSwiss. Two radar-based hail detection algorithms in use at MeteoSwiss, POH (Probability of Hail) and MESHS (Maximum Expected Severe Hail Size), are compared against the crowd-sourced data. The available data and the investigation period span June to August 2015. Filter criteria have been applied in order to remove false reports from the crowd-sourced data. Neighborhood methods have been introduced to reduce the uncertainties that result from spatial and temporal biases. The crowd-sourced and radar data are converted into binary sequences according to previously set thresholds, allowing a categorical verification. Verification scores (e.g. hit rate) are then calculated from a 2x2 contingency table. The hail reporting activity and the patterns corresponding to "hail" and "no hail" reports sent from smartphones have been analyzed. The relationship between the reported hailstone sizes and both radar-based hail detection algorithms has been investigated.
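    The categorical verification step described above reduces to counting the cells of the 2x2 contingency table. The following sketch computes the hit rate (probability of detection) together with two other standard scores from binary (observed, forecast) pairs; the input pairs and score selection are illustrative.

```python
def contingency_scores(pairs):
    """Categorical verification from binary (observed, forecast) pairs:
    build the 2x2 table and return hit rate (POD), false alarm ratio (FAR),
    and critical success index (CSI). Thresholding is assumed done upstream."""
    hits = sum(1 for o, f in pairs if o and f)
    misses = sum(1 for o, f in pairs if o and not f)
    false_alarms = sum(1 for o, f in pairs if not o and f)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return {"POD": pod, "FAR": far, "CSI": csi}

print(contingency_scores([(1, 1), (1, 0), (0, 1), (0, 0), (1, 1)]))
# {'POD': 0.666..., 'FAR': 0.333..., 'CSI': 0.5}
```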

  10. The monitoring and verification of nuclear weapons

    SciTech Connect

    Garwin, Richard L.

    2014-05-09

This paper partially reviews and updates the potential for monitoring and verification of nuclear weapons, including verification of their destruction. Cooperative monitoring with templates of the gamma-ray spectrum is an important tool, dependent on the use of information barriers.

  11. IMPROVING AIR QUALITY THROUGH ENVIRONMENTAL TECHNOLOGY VERIFICATIONS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) began the Environmental Technology Verification (ETV) Program in 1995 as a means of working with the private sector to establish a market-based verification process available to all environmental technologies. Under EPA's Office of R...

  12. IMPROVING AIR QUALITY THROUGH ENVIRONMENTAL TECHNOLOGY VERIFICATIONS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) began the Environmental Technology Verification (ETV) Program in 1995 as a means of working with the private sector to establish a market-based verification process available to all environmental technologies. Under EPA's Office of R...

  13. Gender Verification of Female Olympic Athletes.

    ERIC Educational Resources Information Center

    Dickinson, Barry D.; Genel, Myron; Robinowitz, Carolyn B.; Turner, Patricia L.; Woods, Gary L.

    2002-01-01

    Gender verification of female athletes has long been criticized by geneticists, endocrinologists, and others in the medical community. Recently, the International Olympic Committee's Athletic Commission called for discontinuation of mandatory laboratory-based gender verification of female athletes. This article discusses normal sexual…

  14. 47 CFR 2.902 - Verification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Verification. 2.902 Section 2.902 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.902 Verification....

  15. 47 CFR 2.902 - Verification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Verification. 2.902 Section 2.902 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.902 Verification....

  16. 47 CFR 2.902 - Verification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Verification. 2.902 Section 2.902 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.902 Verification....

  17. 47 CFR 2.902 - Verification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Verification. 2.902 Section 2.902 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.902 Verification....

  18. 47 CFR 2.902 - Verification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Verification. 2.902 Section 2.902 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.902 Verification....

  19. 14 CFR 460.17 - Verification program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... a flight. Verification must include flight testing. ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Verification program. 460.17 Section 460.17... TRANSPORTATION LICENSING HUMAN SPACE FLIGHT REQUIREMENTS Launch and Reentry with Crew § 460.17...

  20. ENVIRONMENTAL TECHNOLOGY VERIFICATION FOR INDOOR AIR PRODUCTS

    EPA Science Inventory

    The paper discusses environmental technology verification (ETV) for indoor air products. RTI is developing the framework for a verification testing program for indoor air products, as part of EPA's ETV program. RTI is establishing test protocols for products that fit into three...

  1. Fingerprint verification prediction model in hand dermatitis.

    PubMed

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

Hand dermatitis-associated fingerprint changes are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis was performed. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria or of one minor criterion predicts a high or low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
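    The decision rule of the derived model can be written directly as code. The sketch below follows the criteria stated in the abstract; the function name and input encoding are assumptions made for illustration.

```python
def verification_risk(dystrophy_area_pct, long_horizontal, long_vertical):
    """Decision rule from the derived model: the major criterion
    (dystrophy area >= 25%) predicts near-certain failure; both minor
    criteria predict high risk, one predicts low risk, and none predicts
    a pass. Labels follow the abstract above."""
    if dystrophy_area_pct >= 25:
        return "almost always fails verification"
    minors = sum([bool(long_horizontal), bool(long_vertical)])
    if minors == 2:
        return "high risk of verification failure"
    if minors == 1:
        return "low risk of verification failure"
    return "almost always passes verification"

print(verification_risk(10, long_horizontal=True, long_vertical=False))
# low risk of verification failure
```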

  2. ENVIRONMENTAL TECHNOLOGY VERIFICATION FOR INDOOR AIR PRODUCTS

    EPA Science Inventory

    The paper discusses environmental technology verification (ETV) for indoor air products. RTI is developing the framework for a verification testing program for indoor air products, as part of EPA's ETV program. RTI is establishing test protocols for products that fit into three...

  3. 40 CFR 1066.220 - Linearity verification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Linearity verification. 1066.220 Section 1066.220 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.220 Linearity verification. (a) Scope...

  4. 18 CFR 158.5 - Verification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Verification. 158.5 Section 158.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT....5 Verification. The facts stated in the memorandum must be sworn to by persons having...

  5. 18 CFR 158.5 - Verification.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Verification. 158.5 Section 158.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT....5 Verification. The facts stated in the memorandum must be sworn to by persons having...

  6. 18 CFR 286.107 - Verification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Verification. 286.107 Section 286.107 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT... Contested Audit Findings and Proposed Remedies § 286.107 Verification. The facts stated in the...

  7. 18 CFR 41.5 - Verification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Verification. 41.5 Section 41.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF....5 Verification. The facts stated in the memorandum must be sworn to by persons having...

  8. 18 CFR 349.5 - Verification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Verification. 349.5 Section 349.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT... PROPOSED REMEDIES § 349.5 Verification. The facts stated in the memorandum must be sworn to by...

  9. 18 CFR 286.107 - Verification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Verification. 286.107 Section 286.107 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT... Contested Audit Findings and Proposed Remedies § 286.107 Verification. The facts stated in the...

  10. 18 CFR 41.5 - Verification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Verification. 41.5 Section 41.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF....5 Verification. The facts stated in the memorandum must be sworn to by persons having...

  11. 18 CFR 41.5 - Verification.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Verification. 41.5 Section 41.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF....5 Verification. The facts stated in the memorandum must be sworn to by persons having...

  12. 18 CFR 349.5 - Verification.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Verification. 349.5 Section 349.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT... PROPOSED REMEDIES § 349.5 Verification. The facts stated in the memorandum must be sworn to by...

  13. 18 CFR 158.5 - Verification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Verification. 158.5 Section 158.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT....5 Verification. The facts stated in the memorandum must be sworn to by persons having...

  14. 18 CFR 349.5 - Verification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Verification. 349.5 Section 349.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT... PROPOSED REMEDIES § 349.5 Verification. The facts stated in the memorandum must be sworn to by...

  15. 14 CFR 460.17 - Verification program.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... TRANSPORTATION LICENSING HUMAN SPACE FLIGHT REQUIREMENTS Launch and Reentry with Crew § 460.17 Verification... software in an operational flight environment before allowing any space flight participant on board during... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Verification program. 460.17 Section...

  16. 14 CFR 460.17 - Verification program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... TRANSPORTATION LICENSING HUMAN SPACE FLIGHT REQUIREMENTS Launch and Reentry with Crew § 460.17 Verification... software in an operational flight environment before allowing any space flight participant on board during... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Verification program. 460.17 Section...

  17. 14 CFR 460.17 - Verification program.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... TRANSPORTATION LICENSING HUMAN SPACE FLIGHT REQUIREMENTS Launch and Reentry with Crew § 460.17 Verification... software in an operational flight environment before allowing any space flight participant on board during... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Verification program. 460.17 Section...

  18. Identity Verification, Control, and Aggression in Marriage

    ERIC Educational Resources Information Center

    Stets, Jan E.; Burke, Peter J.

    2005-01-01

    In this research we study the identity verification process and its effects in marriage. Drawing on identity control theory, we hypothesize that a lack of verification in the spouse identity (1) threatens stable self-meanings and interaction patterns between spouses, and (2) challenges a (nonverified) spouse's perception of control over the…

  19. CFE verification: The decision to inspect

    SciTech Connect

    Allentuck, J.

    1990-01-01

Verification of compliance with the provisions of the treaty on Conventional Forces-Europe (CFE) is subject to inspection quotas of various kinds. Thus the decision to carry out a specific inspection or verification activity must be prudently made. This decision process is outlined, and means for "conserving quotas" are suggested. 4 refs., 1 fig.

  20. 24 CFR 4001.112 - Income verification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Income verification. 4001.112... Requirements and Underwriting Procedures § 4001.112 Income verification. The mortgagee shall use FHA's procedures to verify the mortgagor's income and shall comply with the following additional requirements:...

  1. 24 CFR 4001.112 - Income verification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 5 2011-04-01 2011-04-01 false Income verification. 4001.112... Requirements and Underwriting Procedures § 4001.112 Income verification. The mortgagee shall use FHA's procedures to verify the mortgagor's income and shall comply with the following additional requirements:...

  2. 18 CFR 34.8 - Verification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Verification. 34.8 Section 34.8 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF... SECURITIES OR THE ASSUMPTION OF LIABILITIES § 34.8 Verification. Link to an amendment published at 70 FR...

  3. ENVIRONMENTAL TECHNOLOGY VERIFICATION AND INDOOR AIR

    EPA Science Inventory

    The paper discusses environmental technology verification and indoor air. RTI has responsibility for a pilot program for indoor air products as part of the U.S. EPA's Environmental Technology Verification (ETV) program. The program objective is to further the development of sel...

  4. Guidelines for qualifying cleaning and verification materials

    NASA Technical Reports Server (NTRS)

    Webb, D.

    1995-01-01

This document is intended to provide guidance in identifying technical issues that must be addressed in a comprehensive qualification plan for materials used in cleaning and cleanliness verification processes. The information presented herein is intended to facilitate development of a definitive checklist that should address all pertinent materials issues when down-selecting a cleaning/verification medium.

  5. ENVIRONMENTAL TECHNOLOGY VERIFICATION AND INDOOR AIR

    EPA Science Inventory

    The paper discusses environmental technology verification and indoor air. RTI has responsibility for a pilot program for indoor air products as part of the U.S. EPA's Environmental Technology Verification (ETV) program. The program objective is to further the development of sel...

  6. Video-based fingerprint verification.

    PubMed

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-09-04

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, "inside similarity" and "outside similarity" are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low.
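    The abstract does not give the exact fusion rule for the two similarities, so the sketch below simply assumes a weighted average of normalized scores as a placeholder for how a combined match score might be formed; the weight and score values are illustrative.

```python
def match_score(inside_sim, outside_sim, w=0.5):
    """Fuse the two similarities into one match score. Assumes both are
    normalized to [0, 1]; the weighted average is an illustrative stand-in
    for the authors' combination rule, which the abstract does not specify."""
    return w * inside_sim + (1 - w) * outside_sim

print(match_score(0.82, 0.64))  # about 0.73
```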

  7. Quantitative reactive modeling and verification.

    PubMed

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  8. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  9. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283

  10. Runtime Verification with State Estimation

    NASA Technical Reports Server (NTRS)

    Stoller, Scott D.; Bartocci, Ezio; Seyster, Justin; Grosu, Radu; Havelund, Klaus; Smolka, Scott A.; Zadok, Erez

    2011-01-01

We introduce the concept of Runtime Verification with State Estimation and show how this concept can be applied to estimate the probability that a temporal property is satisfied by a run of a program when monitoring overhead is reduced by sampling. In such situations, there may be gaps in the observed program executions, thus making accurate estimation challenging. To deal with the effects of sampling on runtime verification, we view event sequences as observation sequences of a Hidden Markov Model (HMM), use an HMM model of the monitored program to "fill in" sampling-induced gaps in observation sequences, and extend the classic forward algorithm for HMM state estimation (which determines the probability of a state sequence, given an observation sequence) to compute the probability that the property is satisfied by an execution of the program. To validate our approach, we present a case study based on the mission software for a Mars rover. The results of our case study demonstrate high prediction accuracy for the probabilities computed by our algorithm. They also show that our technique is much more accurate than simply evaluating the temporal property on the given observation sequences, ignoring the gaps.
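    The classic forward algorithm that the authors extend is compact enough to show in full. The sketch below implements the standard recursion for a two-state HMM with illustrative parameters; the paper's extension for sampling-induced gaps is not reproduced here.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Classic HMM forward pass: alpha[i] holds the probability of the
    observations so far with the chain in state i. Summing the final
    alpha gives the probability of the whole observation sequence."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.6, 0.4])                # initial state distribution
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # state transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
print(forward(pi, A, B, [0, 1, 0]))      # P(observing 0, 1, 0)
```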

  11. Cognitive Bias in Systems Verification

    NASA Technical Reports Server (NTRS)

    Larson, Steve

    2012-01-01

Working definition of cognitive bias: patterns by which information is sought and interpreted that can lead to systematic errors in decisions. Cognitive bias is studied in diverse fields: economics, politics, intelligence, and marketing, to name a few. Attempts to ground cognitive science in physical characteristics of the cognitive apparatus exceed our knowledge. Studies are based on correlations; strict cause and effect is difficult to pinpoint. Effects cited in the paper and discussed here have been replicated many times over, and appear sound. Many biases have been described, but it is still unclear whether they are all distinct; there may be only a handful of fundamental biases, which manifest in various ways. Bias can affect system verification in many ways: overconfidence -> questionable decisions to deploy; availability -> inability to conceive critical tests; representativeness -> overinterpretation of results; positive test strategies -> confirmation bias. Debiasing at the individual level is very difficult. The potential effect of bias on the verification process can be managed, but not eliminated, and is worth considering at key points in the process.

  12. Hierarchical Design and Verification for VLSI

    NASA Technical Reports Server (NTRS)

    Shostak, R. E.; Elliott, W. D.; Levitt, K. N.

    1983-01-01

The specification and verification work is described in detail, and some of the problems and issues to be resolved in their application to Very Large Scale Integration (VLSI) systems are examined. The hierarchical design methodologies enable a system architect or design team to decompose a complex design into a formal hierarchy of levels of abstraction. The first step in program verification is tree formation. The next step after tree formation is the generation, from the trees, of the verification conditions themselves. The approach taken here is similar in spirit to the corresponding step in program verification but requires modeling of the semantics of circuit elements rather than program statements. The last step is that of proving the verification conditions using a mechanical theorem-prover.

  13. Space transportation system payload interface verification

    NASA Technical Reports Server (NTRS)

    Everline, R. T.

    1977-01-01

    The paper considers STS payload-interface verification requirements and the capability provided by STS to support verification. The intent is to standardize as many interfaces as possible, not only through the design, development, test and evaluation (DDT and E) phase of the major payload carriers but also into the operational phase. The verification process is discussed in terms of its various elements, such as the Space Shuttle DDT and E (including the orbital flight test program) and the major payload carriers DDT and E (including the first flights). Five tools derived from the Space Shuttle DDT and E are available to support the verification process: mathematical (structural and thermal) models, the Shuttle Avionics Integration Laboratory, the Shuttle Manipulator Development Facility, and interface-verification equipment (cargo-integration test equipment).

  14. Criteria for monitoring a chemical arms treaty: Implications for the verification regime. Report No. 13

    SciTech Connect

    Mullen, M.F.; Apt, K.E.; Stanbro, W.D.

    1991-12-01

    The multinational Chemical Weapons Convention (CWC) being negotiated at the Conference on Disarmament in Geneva is viewed by many as an effective way to rid the world of the threat of chemical weapons. Parties could, however, legitimately engage in certain CW-related activities in industry, agriculture, research, medicine, and law enforcement. Treaty verification requirements related to declared activities include: confirming destruction of declared CW stockpiles and production facilities; monitoring legitimate, treaty-allowed activities, such as production of certain industrial chemicals; and, detecting proscribed activities within the declared locations of treaty signatories, e.g., the illegal production of CW agents at a declared industrial facility or the diversion or substitution of declared CW stockpile items. Verification requirements related to undeclared activities or locations include investigating possible clandestine CW stocks and production capability not originally declared by signatories; detecting clandestine, proscribed activities at facilities or sites that are not declared and hence not subject to routine inspection; and, investigating allegations of belligerent use of CW. We discuss here a possible set of criteria for assessing the effectiveness of CWC verification (and certain aspects of the bilateral CW reduction agreement). Although the criteria are applicable to the full range of verification requirements, the discussion emphasizes verification of declared activities and sites.

  15. Criteria for monitoring a chemical arms treaty: Implications for the verification regime

    SciTech Connect

    Mullen, M.F.; Apt, K.E.; Stanbro, W.D.

    1991-12-01

    The multinational Chemical Weapons Convention (CWC) being negotiated at the Conference on Disarmament in Geneva is viewed by many as an effective way to rid the world of the threat of chemical weapons. Parties could, however, legitimately engage in certain CW-related activities in industry, agriculture, research, medicine, and law enforcement. Treaty verification requirements related to declared activities include: confirming destruction of declared CW stockpiles and production facilities; monitoring legitimate, treaty-allowed activities, such as production of certain industrial chemicals; and, detecting proscribed activities within the declared locations of treaty signatories, e.g., the illegal production of CW agents at a declared industrial facility or the diversion or substitution of declared CW stockpile items. Verification requirements related to undeclared activities or locations include investigating possible clandestine CW stocks and production capability not originally declared by signatories; detecting clandestine, proscribed activities at facilities or sites that are not declared and hence not subject to routine inspection; and, investigating allegations of belligerent use of CW. We discuss here a possible set of criteria for assessing the effectiveness of CWC verification (and certain aspects of the bilateral CW reduction agreement). Although the criteria are applicable to the full range of verification requirements, the discussion emphasizes verification of declared activities and sites.

  16. Stennis Space Center Verification & Validation Capabilities

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; O'Neal, Duane; Knowlton, Kelly; Ross, Kenton; Blonski, Slawomir

    2007-01-01

Scientists within NASA's Applied Research & Technology Project Office (formerly the Applied Sciences Directorate) have developed a well-characterized remote sensing Verification & Validation (V&V) site at the John C. Stennis Space Center (SSC). This site enables the in-flight characterization of satellite and airborne high spatial resolution remote sensing systems and their products. The smaller scale of the newer high resolution remote sensing systems allows scientists to characterize geometric, spatial, and radiometric data properties using a single V&V site. The targets and techniques used to characterize data from these newer systems can differ significantly from the techniques used to characterize data from the earlier, coarser spatial resolution systems. Scientists have used the SSC V&V site to characterize thermal infrared systems. Enhancements are being considered to characterize active lidar systems. SSC employs geodetic targets, edge targets, radiometric tarps, atmospheric monitoring equipment, and thermal calibration ponds to characterize remote sensing data products. Similar techniques are used to characterize moderate spatial resolution sensing systems at selected nearby locations. The SSC Instrument Validation Lab is a key component of the V&V capability and is used to calibrate field instrumentation and to provide National Institute of Standards and Technology traceability. This poster presents a description of the SSC characterization capabilities and examples of calibration data.

  17. Stennis Space Center Verification & Validation Capabilities

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; O'Neal, Duane; Knowlton, Kelly; Ross, Kenton; Blonski, Slawomir

    2007-01-01

Scientists within NASA's Applied Research & Technology Project Office (formerly the Applied Sciences Directorate) have developed a well-characterized remote sensing Verification & Validation (V&V) site at the John C. Stennis Space Center (SSC). This site enables the in-flight characterization of satellite and airborne high spatial resolution remote sensing systems and their products. The smaller scale of the newer high resolution remote sensing systems allows scientists to characterize geometric, spatial, and radiometric data properties using a single V&V site. The targets and techniques used to characterize data from these newer systems can differ significantly from the techniques used to characterize data from the earlier, coarser spatial resolution systems. Scientists have used the SSC V&V site to characterize thermal infrared systems. Enhancements are being considered to characterize active lidar systems. SSC employs geodetic targets, edge targets, radiometric tarps, atmospheric monitoring equipment, and thermal calibration ponds to characterize remote sensing data products. Similar techniques are used to characterize moderate spatial resolution sensing systems at selected nearby locations. The SSC Instrument Validation Lab is a key component of the V&V capability and is used to calibrate field instrumentation and to provide National Institute of Standards and Technology traceability. This poster presents a description of the SSC characterization capabilities and examples of calibration data.

  18. CTBT Integrated Verification System Evaluation Model

    SciTech Connect

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
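    As a toy illustration of integrating detection results across technologies, the sketch below combines per-technology probabilities of detection under an independence assumption. IVSEM's actual models are far richer; the subsystem values here are invented.

```python
def system_pod(subsystem_pods):
    """System-level probability of detection when any one subsystem
    detecting suffices, assuming independent subsystems: the system
    misses only if every subsystem misses."""
    miss = 1.0
    for p in subsystem_pods:
        miss *= (1.0 - p)
    return 1.0 - miss

# seismic, infrasound, radionuclide, hydroacoustic (illustrative values)
print(system_pod([0.85, 0.40, 0.60, 0.30]))  # about 0.97
```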

  19. LOCATING LEAKS WITH ACOUSTIC TECHNOLOGY

    EPA Science Inventory

    Many water distribution systems in this country are almost 100 years old. About 26 percent of piping in these systems is made of unlined cast iron or steel and is in poor condition. Many methods that locate leaks in these pipes are time-consuming, costly, disruptive to operations...

  20. LOCATING LEAKS WITH ACOUSTIC TECHNOLOGY

    EPA Science Inventory

    Many water distribution systems in this country are almost 100 years old. About 26 percent of piping in these systems is made of unlined cast iron or steel and is in poor condition. Many methods that locate leaks in these pipes are time-consuming, costly, disruptive to operations...