Sample records for distributed source term

  1. Size distribution, directional source contributions and pollution status of PM from Chengdu, China during a long-term sampling campaign.

    PubMed

    Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G

    2017-06-01

    Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu, China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model computes datasets of different sizes independently. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) was the largest contributor to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies. Copyright © 2016. Published by Elsevier B.V.
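    The receptor models named above (PMF2, ME2) factor a sample-by-species concentration matrix into non-negative source contributions and source profiles. A minimal sketch of that idea on synthetic data, using scikit-learn's generic NMF as a stand-in for PMF (real PMF additionally weights residuals by measurement uncertainty, which is omitted here):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic data: 3 hypothetical source profiles (species signatures) and
# daily contributions; X is the samples-by-species concentration matrix.
profiles = rng.random((3, 10))        # 3 sources x 10 chemical species
contributions = rng.random((200, 3))  # 200 samples x 3 sources
X = contributions @ profiles

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
G = model.fit_transform(X)            # estimated source contributions
F = model.components_                 # estimated source profiles
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(rel_err)                        # small relative reconstruction error
```

The factorization is only identifiable up to scaling and permutation of the sources, which is why receptor-model studies normalize profiles before interpreting them.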

  2. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparison with numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to a two-dimensional flat plate simulation that used a steady mass flow boundary condition to represent the micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets, and to conduct a preliminary investigation with minimal grid generation and computational time.
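    The modeling idea described above, injecting the jet's mass flow and momentum as source terms in the discretized governing equations rather than resolving the jet geometry, can be sketched in one dimension. All names and numbers below are illustrative, not taken from OVERFLOW:

```python
import numpy as np

# 1D finite-volume cells; a steady micro jet injects mass and momentum
# into a single cell j_jet as volumetric source terms.
n, dx, dt = 50, 0.01, 1e-4
rho = np.ones(n)        # density in each cell
rho_u = np.zeros(n)     # momentum density in each cell
m_dot = 0.2             # jet mass flow per unit area (assumed value)
v_jet = 5.0             # jet exit velocity (assumed value)
j_jet = 25              # cell receiving the jet

for _ in range(100):
    # ... convective/viscous flux updates of the full solver go here ...
    rho[j_jet] += dt * m_dot / dx            # mass source term
    rho_u[j_jet] += dt * m_dot * v_jet / dx  # momentum source term

print(rho[j_jet], rho_u[j_jet])
```

The point of the technique is exactly what the loop shows: the jet appears only as per-cell increments, so no grid refinement around the jet orifice is needed.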

  3. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  4. Laser induced heat source distribution in bio-tissues

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxia; Fan, Shifu; Zhao, Youquan

    2006-09-01

    During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated and constituted as the source term in the heat transfer equation. Usually the solution of the light radiative transport equation is given for limiting conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or scattering-dominated media (diffusion approximation). Under other conditions these solutions introduce errors of varying magnitude. The widely used Monte Carlo simulation (MCS) is more universal and exact but has difficulty dealing with dynamic parameters and fast simulation, and its area partition pattern has limits when applying FEM (finite element method) to solve the bio-heat transfer partial differential equation. Laser heat source plots of the above methods differ considerably from MCS. To address this problem, by analyzing the effects of different optical processes (reflection, scattering and absorption) on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient was replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than previous methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.

  5. Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions

    DTIC Science & Technology

    2009-03-01

    Engineer's Thesis by Georgios Tsivgoulis, March 2009: source localization in wireless sensor networks with randomly distributed elements under multipath propagation conditions, making use of non-line-of-sight information. 111 pages. Subject terms: Wireless Sensor Network, Direction of Arrival, DOA, Random

  6. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. Gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since exact inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method using unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach.
    This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
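    A simplified version of the linear inverse problem above can be sketched numerically: with a diagonal prior covariance B and a positivity constraint on x, the MAP estimate of min ||y - Mx||^2 + ||B^{-1/2}x||^2 subject to x >= 0 can be computed by non-negative least squares on an augmented system. This is a crude stand-in for the truncated-Gaussian variational inference described in the abstract; all matrices and values are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
# Toy setup: y = M x + noise, with a non-negative source term x.
# M plays the role of the SRS matrix; values are synthetic.
M = rng.random((30, 5))
x_true = np.array([4.0, 0.0, 2.5, 0.0, 1.0])
y = M @ x_true + 0.01 * rng.standard_normal(30)

# A diagonal prior covariance B encodes (un)certainty about nuclide
# ratios; here a simple hand-picked guess. Positivity is enforced by
# solving the regularized problem via NNLS on a stacked system.
B_inv_sqrt = np.diag([0.1] * 5)
A = np.vstack([M, B_inv_sqrt])
b = np.concatenate([y, np.zeros(5)])
x_hat, _ = nnls(A, b)
print(x_hat)
```

Stacking the prior rows under M is a standard trick: the augmented least-squares residual equals the sum of the data-misfit and regularization terms.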

  7. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  8. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  9. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  10. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  11. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  12. Image authentication using distributed source coding.

    PubMed

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
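    The core Slepian-Wolf mechanism used above, transmitting only a syndrome and decoding it with correlated side information, can be illustrated with a (7,4) Hamming code. This sketches the principle only, not the paper's actual projection and authentication construction:

```python
import numpy as np

# Slepian-Wolf coding via syndromes: the encoder transmits only the
# 3-bit syndrome of x; the decoder holds a correlated version y
# (x with at most one bit flipped) and recovers x exactly.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # Hamming (7,4) parity-check matrix

def syndrome(v):
    return H @ v % 2

x = np.array([1, 0, 1, 1, 0, 0, 1])  # source bits
s = syndrome(x)                      # 3 bits sent instead of 7
y = x.copy()
y[4] ^= 1                            # side information: one bit differs

# Decoder: the syndrome mismatch equals H @ (x XOR y), and each column
# of H is the binary expansion of its 1-based position, so the mismatch
# directly addresses the differing bit.
diff = (syndrome(y) + s) % 2
if diff.any():
    pos = int(diff[0] + 2 * diff[1] + 4 * diff[2]) - 1
    y[pos] ^= 1
print((y == x).all())
```

The rate saving (3 bits instead of 7) is exactly the robustness-to-legitimate-variation property the authentication scheme exploits at scale with LDPC codes.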

  13. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
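    The spatial-averaging step described above, collapsing a detailed near-hole solution into volumetric source terms on a coarse grid, reduces to block-averaging the fine-grid field for each conservative variable. Grid sizes and the flux field below are illustrative only:

```python
import numpy as np

# Average a detailed (fine-grid) jet exit flux onto a coarse grid as a
# volumetric source term field. 12x12 fine cells -> 3x3 coarse cells.
fine = np.zeros((12, 12))
fine[5:7, 5:7] = 1.0   # flux concentrated at the film hole
factor = 4             # each coarse cell spans 4x4 fine cells
coarse_src = fine.reshape(3, factor, 3, factor).mean(axis=(1, 3))

# Block-averaging conserves the domain total: the fine and coarse
# means agree, so integrated mass/momentum sources are preserved.
print(np.isclose(fine.mean(), coarse_src.mean()))
```

In the actual methodology the averaged quantities are the integrated hole-exit mass, momentum, energy, and turbulence fluxes; the conservation property shown in the last line is what makes the coarse-grid model consistent with the detailed solution.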

  14. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  15. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  16. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  17. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  18. 78 FR 56685 - SourceGas Distribution LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. CP13-540-000] SourceGas Distribution LLC; Notice of Application Take notice that on August 27, 2013, SourceGas Distribution LLC (Source... areas across the Nebraska-Colorado border within which SourceGas may, without further commission...

  19. Microseism Source Distribution Observed from Ireland

    NASA Astrophysics Data System (ADS)

    Craig, David; Bean, Chris; Donne, Sarah; Le Pape, Florian; Möllhoff, Martin

    2017-04-01

    Ocean generated microseisms (OGM) are recorded globally with similar spectral features observed everywhere. The generation mechanism for OGM and their subsequent propagation to continental regions has led to their use as a proxy for sea-state characteristics. Many modern seismological methods also make use of OGM signals. For example, the Earth's crust and upper mantle can be imaged using "ambient noise tomography". For many of these methods an understanding of the source distribution is necessary to properly interpret the results. OGM recorded on near-coastal seismometers are known to be related to the local ocean wavefield. However, contributions from more distant sources may also be present. This is significant for studies attempting to use OGM as a proxy for sea-state characteristics such as significant wave height. Ireland has a highly energetic ocean wave climate and is close to one of the major source regions for OGM. This provides an ideal location to study an OGM source region in detail. Here we present the source distribution observed from seismic arrays in Ireland. The region is shown to consist of several individual source areas. These source areas show some frequency dependence and generally occur at or near the continental shelf edge. We also show some preliminary results from an off-shore OBS network to the north-west of Ireland. The OBS network includes instruments on either side of the shelf and should help interpret the array observations.

  20. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since exact inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
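    For fixed R and B the objective above is quadratic in x, with the familiar closed-form minimizer x_hat = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y; iterative schemes such as variational Bayes re-solve this as the covariance estimates update. A sketch with synthetic matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-ins for the SRS matrix and observations.
M = rng.standard_normal((40, 8))
x_true = rng.random(8)
y = M @ x_true + 0.05 * rng.standard_normal(40)

R_inv = np.eye(40)        # diagonal measurement-noise precision
B_inv = 0.01 * np.eye(8)  # Tikhonov-style prior precision

# Closed-form minimizer of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x.
x_hat = np.linalg.solve(M.T @ R_inv @ M + B_inv, M.T @ R_inv @ y)
print(np.linalg.norm(x_hat - x_true))
```

Setting the gradient to zero gives M^T R^{-1} (y - M x_hat) = B^{-1} x_hat, which is the stationarity condition the solve enforces.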

  1. BWR ASSEMBLY SOURCE TERMS FOR WASTE PACKAGE DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T.L. Lotz

    1997-02-15

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide boiling water reactor (BWR) assembly radiation source term data for use during Waste Package (WP) design. The BWR assembly radiation source terms are to be used for evaluation of radiolysis effects at the WP surface, and for personnel shielding requirements during assembly or WP handling operations. The objectives of this evaluation are to generate BWR assembly radiation source terms that bound selected groupings of BWR assemblies, with regard to assembly average burnup and cooling time, which comprise the anticipated MGDS BWR commercial spent nuclear fuel (SNF) waste stream. The source term data is to be provided in a form which can easily be utilized in subsequent shielding/radiation dose calculations. Since these calculations may also be used for Total System Performance Assessment (TSPA), with appropriate justification provided by TSPA, or radionuclide release rate analysis, the grams of each element and additional cooling times out to 25 years will also be calculated and the data included in the output files.

  2. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  3. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  4. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  5. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  6. Homogenization of the Brush Problem with a Source Term in L^1

    NASA Astrophysics Data System (ADS)

    Gaudiello, Antonio; Guibé, Olivier; Murat, François

    2017-07-01

    We consider a domain which has the form of a brush in 3D or of a comb in 2D, i.e. an open set composed of cylindrical vertical teeth distributed over a fixed basis. All the teeth have a similar fixed height; their cross sections can vary from one tooth to another and are not assumed to be smooth; moreover the teeth can be adjacent, i.e. they can share parts of their boundaries. The diameter of every tooth is assumed to be less than or equal to ɛ, and the asymptotic volume fraction of the teeth (as ɛ tends to zero) is assumed to be bounded from below away from zero, but no periodicity is assumed on the distribution of the teeth. In this domain we study the asymptotic behavior (as ɛ tends to zero) of the solution of a second order elliptic equation with a zeroth order term which is bounded from below away from zero, when the homogeneous Neumann boundary condition is satisfied on the whole of the boundary. First, we revisit the problem where the source term belongs to L^2. This is a classical problem, but our homogenization result takes place in a geometry more general than the ones which have been considered before. Moreover we prove a corrector result which is new. Then, we study the case where the source term belongs to L^1. Working in the framework of renormalized solutions and introducing a definition of renormalized solutions for degenerate elliptic equations where only the vertical derivative is involved (such a definition is new), we identify the limit problem and prove a corrector result.

  7. Distributed Coding of Compressively Sensed Sources

    NASA Astrophysics Data System (ADS)

    Goukhshtein, Maxim

    In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.

  8. The Competition Between a Localised and Distributed Source of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local sources, specifically their ratio Ψ. The steady state dynamics of the flow are heavily dependent on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers, and the location of the interface height separating the two-layer stratification are obtainable from the model. To validate the theoretical model, small-scale laboratory experiments were carried out. Water was used as the working medium, with buoyancy driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  9. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
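    The count mean in this setup can be checked by simulation: for Poisson statistics, the count over a window starting at a uniformly distributed time has mean equal to the average of the integrated intensity over the start time. A sketch with illustrative parameters (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
# Poisson photocounts from a decaying pulse I(t) = I0 * exp(-t / tau),
# counted over a window of length T whose start t0 is uniform on
# [0, T_max]. All parameter values are illustrative.
I0, tau, T, T_max, n_trials = 50.0, 1.0, 0.5, 2.0, 200_000

t0 = rng.uniform(0.0, T_max, n_trials)
# Integrated intensity over [t0, t0 + T] is the Poisson mean per trial.
mu = I0 * tau * np.exp(-t0 / tau) * (1.0 - np.exp(-T / tau))
counts = rng.poisson(mu)

# Averaging the integrated intensity over the uniform start time gives
# the expected count analytically.
expected = (I0 * tau**2 / T_max
            * (1.0 - np.exp(-T_max / tau))
            * (1.0 - np.exp(-T / tau)))
print(counts.mean(), expected)
```

The Monte Carlo mean and the analytic expectation agree to within sampling error, consistent with the closed-form count mean the abstract refers to.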

  10. Common Calibration Source for Monitoring Long-term Ozone Trends

    NASA Technical Reports Server (NTRS)

    Kowalewski, Matthew

    2004-01-01

    Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and limited lifetimes of satellite monitoring instruments demands that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), Ozone Mapping Instrument (OMI), Global Ozone Monitoring Experiment (GOME) (ESA), Scanning Imaging SpectroMeter for Atmospheric ChartographY (SCIAMACHY) (ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.

  11. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed, with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural both in the problem of identifying the source location and in that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the

  12. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    PubMed Central

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Considering that sensors are energy-limited and that channel conditions vary in wireless sensor networks, there is an urgent need for a low-complexity coding method with a high compression ratio and resistance to noise. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560
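The core idea underlying distributed source coding, compressing a source below its raw rate by exploiting correlated side information available only at the decoder, can be illustrated with the classic textbook syndrome construction based on the Hamming(7,4) code. This is a standard pedagogical sketch, not the scheme proposed in the paper: the encoder transmits only the 3-bit syndrome of a 7-bit word, and the decoder recovers the word exactly using side information assumed to differ in at most one bit.

```python
def syndrome(bits):
    """3-bit syndrome of a 7-bit word under the Hamming(7,4) check matrix
    whose j-th column is the binary representation of j (j = 1..7)."""
    s = 0
    for j, b in enumerate(bits, start=1):
        if b:
            s ^= j
    return s

def reconstruct(synd_x, y):
    """Recover x from its 3-bit syndrome and side information y, assuming
    the source-side-info correlation guarantees at most one differing bit."""
    e = synd_x ^ syndrome(y)   # syndrome of the difference pattern x XOR y
    x = list(y)
    if e:                      # a nonzero syndrome pinpoints the flipped bit
        x[e - 1] ^= 1
    return x

x = [1, 0, 1, 1, 0, 0, 1]      # source word (7 bits)
y = [1, 0, 0, 1, 0, 0, 1]      # correlated side information: one bit differs
compressed = syndrome(x)       # only 3 bits are transmitted instead of 7
x_hat = reconstruct(compressed, y)
```

The encoder never sees y, yet lossless reconstruction succeeds, which is exactly the Slepian-Wolf gain the surveyed schemes scale up with stronger codes.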

  13. Problem solving as intelligent retrieval from distributed knowledge sources

    NASA Technical Reports Server (NTRS)

    Chen, Zhengxin

    1987-01-01

    Distributed computing in intelligent systems is investigated from a different perspective. Starting from the view that problem solving can be regarded as intelligent knowledge retrieval, the use of distributed knowledge sources in intelligent systems is proposed.

  14. Impact of routine episodic emissions on the expected frequency distribution of emissions from oil and gas production sources.

    NASA Astrophysics Data System (ADS)

    Smith, N.; Blewitt, D.; Hebert, L. B.

    2015-12-01

    In coordination with oil and gas operators, we developed a high-resolution (< 1 min) simulation of temporal variability in well-pad oil and gas emissions over a year. We include routine emissions from condensate tanks, dehydrators, pneumatic devices, fugitive leaks and liquids unloading. We explore the variability in natural gas emissions from these individual well-pad sources, and find that routine short-term episodic emissions such as tank flashing and liquids unloading result in the appearance of a skewed, or 'fat-tail', distribution of emissions from an individual well-pad over time. Additionally, we explore the expected variability in emissions from multiple wells with different raw gas composition, gas/liquids production volumes and control equipment. Differences in well-level composition, production volume and control equipment translate into differences in well-level emissions, leading to a fat-tail distribution of emissions in the absence of operational upsets. Our results have several implications for recent studies focusing on emissions from oil and gas sources. The time scale of emission estimates is important and has policy implications. Fat-tail distributions may not be entirely driven by avoidable mechanical failures, and are expected to occur under routine operational conditions from short-duration emissions (e.g., tank flashing, liquids unloading). An understanding of the expected distribution of emissions for a particular population of wells is necessary to evaluate whether an observed distribution is more skewed than expected. Temporal variability in well-pad emissions makes comparisons to annual average emission inventories difficult and may complicate the interpretation of long-term ambient fenceline monitoring data. Sophisticated change detection algorithms will be necessary to identify when true operational upsets occur versus routine short-term emissions.

  15. Flowsheets and source terms for radioactive waste projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsberg, C.W.

    1985-03-01

    Flowsheets and source terms used to generate radioactive waste projections in the Integrated Data Base (IDB) Program are given. Volumes of each waste type generated per unit product throughput have been determined for the following facilities: uranium mining, UF6 conversion, uranium enrichment, fuel fabrication, boiling-water reactors (BWRs), pressurized-water reactors (PWRs), and fuel reprocessing. Source terms for DOE/defense wastes have been developed. Expected wastes from typical decommissioning operations for each facility type have been determined. All wastes are also characterized by isotopic composition at time of generation and by general chemical composition. 70 references, 21 figures, 53 tables.

  16. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident and, more recently, the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved to be efficient for assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. To efficiently document the evolution of the contamination, gamma dose rate measurements were numerous, well distributed within Japan, and offered a high temporal frequency. However, dose rate data are not as easy to use as air sampling measurements, and until now they have not been used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in inverse modelling without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions for the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. 
The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in

  17. Source environment feature related phylogenetic distribution pattern of anoxygenic photosynthetic bacteria as revealed by pufM analysis.

    PubMed

    Zeng, Yonghui; Jiao, Nianzhi

    2007-06-01

    Anoxygenic photosynthesis, performed primarily by anoxygenic photosynthetic bacteria (APB), is thought to have arisen on Earth more than 3 billion years ago. The long-established APB are distributed in almost every corner that light can reach. However, the relationship between APB phylogeny and source environments has been largely unexplored. Here we retrieved the pufM sequences and related source information of 89 pufM-containing species from the public database. Phylogenetic analysis revealed that horizontal gene transfer (HGT) most likely occurred within 11 out of a total of 21 pufM subgroups, not only among species within the same class but also among species of different phyla or subphyla. A clear source-environment-related phylogenetic distribution pattern was observed, with species from oxic habitats and those from anoxic habitats clustering into independent subgroups, respectively. HGT among ancient APB and subsequent long-term evolution and adaptation to separated niches may have contributed to the coupling of environment and pufM phylogeny.

  18. Observation-based source terms in the third-generation wave model WAVEWATCH

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.

    2015-12-01

    Measurements collected during the AUSWEX field campaign at Lake George (Australia) resulted in new insights into the processes of wind wave interaction and whitecapping dissipation, and consequently in new parameterizations of the input and dissipation source terms. The new nonlinear wind input term accounts for the dependence of the growth on wave steepness, for airflow separation, and for negative growth rates under adverse winds. The new dissipation terms feature the inherent breaking term, a cumulative dissipation term and a term due to production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement in growth curves as well as in integral and spectral parameters between the simulations and hindcasts.

  19. Understanding the electrical behavior of the action potential in terms of elementary electrical sources.

    PubMed

    Rodriguez-Falces, Javier

    2015-03-01

    A concept of major importance in human electrophysiology studies is the process by which activation of an excitable cell results in a rapid rise and fall of the electrical membrane potential, the so-called action potential. Hodgkin and Huxley proposed a model to explain the ionic mechanisms underlying the formation of action potentials. However, this model is unsuitably complex for teaching purposes. In addition, the Hodgkin and Huxley approach describes the shape of the action potential only in terms of ionic currents, i.e., it is unable to explain the electrical significance of the action potential or describe the electrical field arising from this source using basic concepts of electromagnetic theory. The goal of the present report was to propose a new model to describe the electrical behavior of the action potential in terms of elementary electrical sources (in particular, dipoles). The efficacy of this model was tested through a closed-book written exam. The proposed model increased the ability of students to appreciate the distributed character of the action potential and to recognize that this source spreads out along the fiber as a function of space. In addition, the new approach allowed students to realize that the amplitude and sign of the extracellular electrical potential arising from the action potential are determined by the spatial derivative of this intracellular source. The proposed model, which incorporates intuitive graphical representations, has improved students' understanding of the electrical potentials generated by bioelectrical sources and has heightened their interest in bioelectricity. Copyright © 2015 The American Physiological Society.
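The dipole picture described above can be rendered as a short numerical sketch: the intracellular action potential is represented as a spatial waveform along the fiber, each fiber segment is assigned a dipole moment proportional to the local spatial derivative of the membrane potential, and the extracellular potential at an external point is the sum of the dipole contributions. The Gaussian waveform, fiber discretization, and observation geometry are illustrative assumptions, not the article's teaching materials.

```python
import math

def extracellular_potential(x_obs, h, n_seg=400, fiber_len=20.0):
    """Potential at a point (x_obs, h) above a fiber lying along the x axis.

    The action potential is modeled as a Gaussian spatial waveform
    V_m(x) = exp(-x**2); each segment contributes as an axial dipole whose
    moment is proportional to -dV_m/dx (the 'elementary source').
    Conductivity and fiber-radius constants are folded into one factor.
    """
    dx = fiber_len / n_seg
    phi = 0.0
    for i in range(n_seg):
        x = -fiber_len / 2.0 + (i + 0.5) * dx
        dvdx = -2.0 * x * math.exp(-x * x)   # spatial derivative dV_m/dx
        moment = -dvdx * dx                  # segment dipole moment
        rx, ry = x_obs - x, h
        r = math.hypot(rx, ry)
        # Axial dipole potential ~ p * cos(theta) / r^2 with cos(theta) = rx/r.
        phi += moment * rx / (r ** 3)
    return phi

# Scanning the observation point along the fiber yields a multiphasic
# waveform with both positive and negative lobes, as the dipole model
# predicts for the extracellular signature of a propagating source.
profile = [extracellular_potential(i * 0.1 - 10.0, 1.0) for i in range(201)]
```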

  20. Directional Unfolded Source Term (DUST) for Compton Cameras.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean

    2018-03-01

    A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.

  1. Ethnicity-specific birthweight distributions improve identification of term newborns at risk for short-term morbidity.

    PubMed

    Hanley, Gillian E; Janssen, Patricia A

    2013-11-01

    We aimed to determine whether ethnicity-specific birthweight distributions more accurately identify newborns at risk for short-term neonatal morbidity associated with small-for-gestational-age (SGA) birth than population-based distributions not stratified on ethnicity. We examined 100,463 singleton term infants born to parents in Washington State between Jan. 1, 2006, and Dec. 31, 2008. Using multivariable logistic regression models, we compared the ability of an ethnicity-specific growth distribution and a population-based growth distribution to predict which infants were at increased risk for an Apgar score <7 at 5 minutes, admission to the neonatal intensive care unit, ventilation, extended length of stay in hospital, hypothermia, hypoglycemia, and infection. Newborns considered SGA by ethnicity-specific weight distributions had the highest rates of each of the adverse outcomes assessed, more than double those of infants considered SGA only by the population-based standards. When controlling for mother's age, parity, body mass index, education, gestational age, mode of delivery, and marital status, newborns considered SGA by ethnicity-specific birthweight distributions were between 2 and 7 times more likely to suffer from the adverse outcomes listed above than infants who were not SGA. In contrast, newborns considered SGA by population-based birthweight distributions alone were at no higher risk of any adverse outcome except hypothermia (adjusted odds ratio, 2.76; 95% confidence interval, 1.68-4.55) and neonatal intensive care unit admission (adjusted odds ratio, 1.40; 95% confidence interval, 1.18-1.67). Ethnicity-specific birthweight distributions were significantly better at identifying the infants at higher risk of short-term neonatal morbidity, suggesting that their use could save resources and avoid unnecessary parental anxiety. Copyright © 2013 Mosby, Inc. All rights reserved.

  2. Updating source term and atmospheric dispersion simulations for the dose reconstruction in Fukushima Daiichi Nuclear Power Station Accident

    NASA Astrophysics Data System (ADS)

    Nagai, Haruyasu; Terada, Hiroaki; Tsuduki, Katsunori; Katata, Genki; Ota, Masakazu; Furuno, Akiko; Akari, Shusaku

    2017-09-01

    In order to assess the radiological dose to the public resulting from the Fukushima Daiichi Nuclear Power Station (FDNPS) accident in Japan, especially for the early phase of the accident when no measured data are available for that purpose, the spatial and temporal distributions of radioactive materials in the environment are reconstructed by computer simulations. In this study, by refining the source term of radioactive materials discharged into the atmosphere and modifying the atmospheric transport, dispersion and deposition model (ATDM), the atmospheric dispersion simulation of radioactive materials is improved. A database of the spatiotemporal distribution of radioactive materials in the air and on the ground surface is then developed from the output of the simulation. This database is used in other studies for dose assessment by coupling it with the behavioral patterns of evacuees from the FDNPS accident. Through the improvement of the ATDM simulation to use a new meteorological model and a sophisticated deposition scheme, the ATDM simulations reproduced the 137Cs and 131I deposition patterns well. To further improve the reproducibility of the dispersion processes, the source term was refined by optimizing it against the improved ATDM simulation using new monitoring data.

  3. Spatial distribution and migration of nonylphenol in groundwater following long-term wastewater irrigation.

    PubMed

    Wang, Shiyu; Wu, Wenyong; Liu, Fei; Yin, Shiyang; Bao, Zhe; Liu, Honglu

    2015-01-01

    Seen as a solution to water shortages, wastewater reuse for crop irrigation does, however, pose a risk owing to the potential release of organic contaminants into soil and water. The frequency of detection (FOD), concentration, and migration of nonylphenol (NP) isomers in reclaimed water (FODRW), surface water (FODSW), and groundwater (FODGW) were investigated in a long-term wastewater irrigation area in Beijing. The FODRW, FODSW and FODGW of any or all of 12 NP isomers were 66.7% to 100%, 76.9% to 100% and 13.3% to 60%, respectively. The mean (±standard deviation) NP concentrations of the reclaimed water, surface water, and groundwater (NPRW, NPSW, NPGW, respectively) were 469.4±73.4 ng L(-1), 694.6±248.7 ng L(-1) and 244.4±230.8 ng L(-1), respectively. The existence of external pollution sources during water transmission and distribution resulted in NPSW exceeding NPRW. NP distribution in groundwater was related to the duration and quantity of wastewater irrigation and the sources of aquifer recharge, and was seen to decrease with increasing aquifer depth. Higher riverside infiltration rates nearby led to higher FODGW values. The migration rate of NP isomers was classified as high, moderate or low. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Radial Distribution of X-Ray Point Sources Near the Galactic Center

    NASA Astrophysics Data System (ADS)

    Hong, Jae Sub; van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas

    2009-11-01

    We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° of the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low-extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources coupled with field- and distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The result indicates that the GB X-ray population is highly concentrated at the center, more heavily than stellar distribution models predict. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the closer to the GC, the intrinsically harder the X-ray spectra appear, but adding an iron emission line at 6.7 keV to the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources can be of the same or similar type. Their X-ray luminosity and spectral properties support the idea that the most likely candidate is magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs). Their observed number density is also consistent with the majority being IPs, provided the relative CV to star density in the GB

  5. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the task at hand and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment), that contains a collection of tools wrapped up into a user-friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that guides digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  6. Potential breeding distributions of U.S. birds predicted with both short-term variability and long-term average climate data.

    PubMed

    Bateman, Brooke L; Pidgeon, Anna M; Radeloff, Volker C; Flather, Curtis H; VanDerWal, Jeremy; Akçakaya, H Resit; Thogmartin, Wayne E; Albright, Thomas P; Vavrus, Stephen J; Heglund, Patricia J

    2016-12-01

    Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in recent decades, and they may not capture actual species occurrence well because the distributions of species, especially at the edges of their range, are typically dynamic and may respond strongly to short-term climate variability. Our goal here was to test whether bird occurrence can be predicted by covariates based on either short-term climate variability or long-term climate averages. We parameterized species distribution models (SDMs) based on either short-term variability or long-term average climate covariates for 320 bird species in the conterminous USA and tested whether any life-history trait-based guilds were particularly sensitive to short-term conditions. Models including short-term climate variability performed well based on their cross-validated area-under-the-curve (AUC) score (0.85), as did models based on long-term climate averages (0.84). Similarly, both models performed well compared to independent presence/absence data from the North American Breeding Bird Survey (independent AUC of 0.89 and 0.90, respectively). However, models based on short-term variability covariates more accurately classified true absences for most species (73% of true absences classified within the lowest quarter of environmental suitability vs. 68%). In addition, they have the advantage that they can reveal the dynamic relationship between species and their environment, because they capture the spatial fluctuations of species' potential breeding distributions. 
With this information, we can identify which species and guilds are sensitive to climate variability, identify sites of high conservation value where climate
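The AUC scores that this abstract compares can be computed directly from model suitability scores at presence and absence sites via the rank-sum (Mann-Whitney) identity: the AUC equals the probability that a randomly chosen presence outscores a randomly chosen absence. The scores in the example below are hypothetical, not the study's data.

```python
def auc(scores_presence, scores_absence):
    """Area under the ROC curve via the rank-sum identity: the probability
    that a randomly chosen presence outscores a randomly chosen absence,
    counting ties as one half."""
    wins = 0.0
    for sp in scores_presence:
        for sa in scores_absence:
            if sp > sa:
                wins += 1.0
            elif sp == sa:
                wins += 0.5
    return wins / (len(scores_presence) * len(scores_absence))

# Hypothetical model suitability scores at presence and absence sites.
example = auc([0.9, 0.8, 0.7], [0.1, 0.2, 0.75])   # 8 of 9 pairs correct
```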

  7. An Evaluation of Short-Term Distributed Online Learning Events

    ERIC Educational Resources Information Center

    Barker, Bradley; Brooks, David

    2005-01-01

    The purpose of this study was to evaluate the effectiveness of short-term distributed online training events using an adapted version of the compressed evaluation form developed by Wisher and Curnow (1998). Evaluating online distributed training events provides insight into course effectiveness, the contribution of prior knowledge to learning, and…

  8. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  9. Long-Term Stability of Radio Sources in VLBI Analysis

    NASA Technical Reports Server (NTRS)

    Engelhardt, Gerald; Thorandt, Volkmar

    2010-01-01

    Positional stability of radio sources is an important requirement for modeling only one source position for the complete span of VLBI data, which presently covers more than 20 years. The stability of radio sources can be verified by analyzing time series of radio source coordinates. One approach is a statistical test for normal distribution of the residuals to the weighted mean for each radio source component of the time series. Systematic phenomena in the time series can thus be detected. Nevertheless, an inspection of rate estimates and weighted root-mean-square (WRMS) variations about the mean is also necessary. On the basis of the time series computed by the BKG group in the frame of the ICRF2 working group, 226 stable radio sources with an axis stability of 10 μas could be identified. They include 100 ICRF2 axes-defining sources, which were determined independently of the method applied in the ICRF2 working group. 29 stable radio sources with a source structure index of less than 3.0 can also be used to increase the number of the 295 ICRF2 defining sources.
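The per-source statistics described above (a weighted mean of the coordinate time series and the WRMS scatter about it, used alongside residual-distribution and rate checks) can be sketched as follows. The synthetic time series and the drift magnitude are illustrative assumptions, not the BKG analysis pipeline; the point is that a source with a systematic drift shows inflated WRMS about its mean even when the per-epoch noise is identical.

```python
import math
import random

def weighted_mean(values, sigmas):
    w = [1.0 / (s * s) for s in sigmas]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

def wrms_about_mean(values, sigmas):
    """Weighted root-mean-square scatter about the weighted mean."""
    w = [1.0 / (s * s) for s in sigmas]
    mean = weighted_mean(values, sigmas)
    return math.sqrt(sum(wi * (v - mean) ** 2 for wi, v in zip(w, values))
                     / sum(w))

rng = random.Random(42)
sigmas = [0.1] * 200                       # identical formal errors per epoch
# A stable source: coordinate residuals are pure measurement noise.
stable = [rng.gauss(0.0, 0.1) for _ in range(200)]
# A source with a systematic drift superimposed on the same noise.
drifting = [0.005 * i + rng.gauss(0.0, 0.1) for i in range(200)]

wrms_stable = wrms_about_mean(stable, sigmas)
wrms_drift = wrms_about_mean(drifting, sigmas)
# The drifting source is flagged by its inflated scatter about the mean,
# complementing the normality test on the residuals.
```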

  10. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
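The size hierarchy described above can be generated deterministically: if N(>R) = (R_max/R)^2, the i-th largest subevent has radius R_max/sqrt(i), and radii are accumulated until the summed area reaches the target rupture area, with constant-stress-drop slip proportional to subevent radius. The sketch below is a hypothetical illustration of that bookkeeping (with illustrative radii), not the authors' code.

```python
import math

def composite_subevents(r_target, r_max):
    """Subevent radii R_i = r_max / sqrt(i), i = 1, 2, ..., accumulated
    until the summed area reaches the target rupture area pi*r_target^2.
    Returns the radii and constant-stress-drop slips (slip proportional
    to subevent radius)."""
    target_area = math.pi * r_target ** 2
    radii, area, i = [], 0.0, 1
    while area < target_area:
        r = r_max / math.sqrt(i)       # fractal hierarchy: N(>R) ~ R^-2
        radii.append(r)
        area += math.pi * r * r
        i += 1
    slips = [r for r in radii]         # constant stress drop: slip ~ R
    return radii, slips

# Illustrative mainshock of radius 10 built from subevents of max radius 5.
radii, slips = composite_subevents(r_target=10.0, r_max=5.0)
total_area = sum(math.pi * r * r for r in radii)
# Sizes decrease monotonically, and the summed area closely matches the
# target because each added subevent shrinks the remaining deficit.
```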

  11. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...

  12. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...

  13. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...

  14. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...

  15. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420 21...

  16. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In the paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information and, contrary to existing scenarios where side information is given explicitly, the side information is created from a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols, where each symbol represents a particular edge shape. The codebook is image-independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions where side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very low bit rate regime.

  17. 78 FR 33691 - Distribution of Source Material to Exempt Persons and to General Licensees and Revision of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... Distribution of Source Material to Exempt Persons and to General Licensees and Revision of General License and..., Distribution of Source Material to Exempt Persons and to General Licensees and Revision of General License and Exemptions (Distribution of Source Material Rule). The Distribution of Source Material Rule amended the NRC's...

  18. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force into the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low-profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate, without adversely affecting the development and capture of the vortex created. The source term model predicted the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well, and the circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The model allows a researcher to quickly investigate different locations of an individual vortex generator or a row of vortex generators, and to conduct a preliminary investigation with minimal grid generation and computational time.

  19. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.
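
The kind of release-to-boundary calculation these codes perform can be illustrated with a minimal first-order leaching sketch. This is not the PRESTO-EPA-CPG, IMPACTS, DUST or NEFTRAN-II formulation; the leach constant and inventory below are hypothetical illustration values:

```python
import math

def c14_release_rate(inventory_bq, t_years, leach_rate=1e-3, half_life=5730.0):
    """Annual release (Bq/yr) from a waste form with first-order leaching.

    The inventory depletes by both radioactive decay and leaching; the
    release rate at time t is the leach constant times what remains.
    """
    lam_decay = math.log(2) / half_life   # radioactive decay constant (1/yr)
    lam_total = lam_decay + leach_rate    # total depletion constant (1/yr)
    remaining = inventory_bq * math.exp(-lam_total * t_years)
    return leach_rate * remaining

# The release rate falls monotonically as the inventory depletes.
r0 = c14_release_rate(1.0e9, 0.0)
r100 = c14_release_rate(1.0e9, 100.0)
```

Sensitivity analysis of the kind described above amounts to perturbing parameters such as `leach_rate` and observing the change in the release history.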

  20. Availability of added sugars in Brazil: distribution, food sources and time trends.

    PubMed

    Levy, Renata Bertazzi; Claro, Rafael Moreira; Bandoni, Daniel Henrique; Mondini, Lenise; Monteiro, Carlos Augusto

    2012-03-01

    To describe the regional and socio-economic distribution of added-sugar consumption in Brazil in 2002/03, the food sources of that sugar, and trends over the past 15 years. The study used data from Household Budget Surveys since the 1980s on the type and quantity of food and beverages bought by Brazilian families. Different indicators were analyzed: the percentage of calories from added sugar in total dietary energy, and the shares of table sugar and of sugar added to processed food in total sugar calories. In 2002/03, 16.7% of the total energy available for consumption came from added sugar, across all regional and socio-economic strata. The ratio of table sugar to sugar added to processed food fell as income increased. Although this ratio declined over the past 15 years, sugar added to processed food doubled, driven especially by consumption of soft drinks and cookies. Brazilians consume more sugar than the levels recommended by the WHO, and the sources of sugar consumption have changed significantly.

  1. Continuous-variable quantum key distribution with Gaussian source noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yujie; Peng Xiang; Yang Jian

    2011-05-15

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  2. Do forests represent a long-term source of contaminated particulate matter in the Fukushima Prefecture?

    PubMed

    Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier

    2016-12-01

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium-contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences resulted in only a 6% (SD 3%) mean difference in relative source contributions. Further, the three modelling approaches resulted in only a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into the model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter, indicating that they present a long-term potential source of radiocesium-contaminated material in fallout-impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). The reservoir of radiocesium in forested landscapes in the Fukushima region represents a
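
The distribution modelling described above un-mixes source contributions from tracer balances. A minimal sketch with three sources, two tracers, and a mass-balance constraint; all end-member values below are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical end-member tracer values (not the paper's data): rows are the
# tracer balances (TOC %, TN %, mass balance); columns are the three sources
# (forest, cultivated, subsoil).
A = np.array([
    [8.0, 3.0, 0.5],    # total organic carbon (%) of each source
    [0.6, 0.3, 0.05],   # total nitrogen (%) of each source
    [1.0, 1.0, 1.0],    # source fractions must sum to one
])
sample = np.array([3.25, 0.285, 1.0])  # measured TOC, TN of deposited sediment

# Relative contribution of each source to the deposited particulate matter
fractions = np.linalg.solve(A, sample)
```

A full mixing model would additionally enforce non-negativity and propagate tracer uncertainties, e.g. by solving many perturbed systems and reporting the spread, which is how standard deviations like those quoted above arise.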

  3. Fission Product Appearance Rate Coefficients in Design Basis Source Term Determinations - Past and Present

    NASA Astrophysics Data System (ADS)

    Perez, Pedro B.; Hamawi, John N.

    2017-09-01

    Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory assumption, as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that is over 50 years old. The source terms are no doubt conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading, for example, to over-designed shielding for normal operations. Design basis source term methodologies for normal operations had not advanced until EPRI published an updated ANSI/ANS 18.1 source term basis document in 2015. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results, which demonstrate the conservatism in nuclear power plant shielding design.

  4. Source structure errors in radio-interferometric clock synchronization for ten measured distributions

    NASA Technical Reports Server (NTRS)

    Thomas, J. B.

    1981-01-01

    The effects of source structure on radio interferometry measurements were investigated. The brightness distribution measurements for ten extragalactic sources were analyzed. Significant results are reported.

  5. The distribution of infrared point sources in nearby elliptical galaxies

    NASA Astrophysics Data System (ADS)

    Gogoi, Rupjyoti; Shalima, P.; Misra, Ranjeev

    2018-02-01

    Infrared (IR) point sources observed by Spitzer in nearby early-type galaxies should either be bright sources in the galaxy, such as globular clusters, or background sources such as AGNs. These objects are often counterparts of sources in other wavebands, such as optical and X-rays, and the IR data provide crucial information regarding their nature. However, many of the IR sources may be background objects, and it is important to identify them or at least quantify the level of background contamination. Moreover, the distribution of these IR point sources in flux, distance from the centre and colour would be useful in understanding their origin. Archival Spitzer IRAC images provide a unique opportunity for such a study, and here we present the results of such an analysis for four nearby galaxies: NGC 1399, NGC 2768, NGC 4365 and NGC 4649. We estimate the background contamination using several blank fields. Our results suggest that IR colours can be used effectively to differentiate between sources in the galaxy and background ones. In particular, we find that sources with AGN-like colours are indeed consistent with being background AGNs. For sources with non-AGN-like colours we compute the distribution of flux and normalised distance from the centre, which is found to be of power-law form. Although our sample size is small, the power-law indices differ between galaxies, perhaps indicating that the galaxy environment plays a part in their origin and nature.
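
A power-law flux distribution of the kind reported here is commonly summarized by its index. A minimal maximum-likelihood sketch (the Hill estimator for p(f) ∝ f^-alpha above a completeness limit fmin, which is an assumed input, not a value from the paper):

```python
import math

def power_law_index(fluxes, fmin):
    """Maximum-likelihood estimate of alpha for p(f) ~ f**(-alpha),
    using only fluxes above the completeness limit fmin (Hill estimator)."""
    tail = [f for f in fluxes if f >= fmin]
    return 1.0 + len(tail) / sum(math.log(f / fmin) for f in tail)
```

For a sample drawn from a pure power law with index 2 above fmin, the estimator recovers a value close to 2; its scatter shrinks as the sample grows, which matters for small samples like the one discussed above.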

  6. Full Waveform Inversion Using Student's t Distribution: a Numerical Study for Elastic Waveform Inversion and Simultaneous-Source Method

    NASA Astrophysics Data System (ADS)

    Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki

    2015-06-01

    Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function is sensitive to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI, giving reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk-noise terms and plotting the signal-to-noise ratio with iteration, we confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and the simultaneous-source
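
The robustness argument can be made concrete by comparing the gradient contribution ("influence") of a single data residual under the l2 and Student's t objectives. A minimal sketch; the degrees of freedom nu and scale sigma below are hypothetical illustration values:

```python
def influence_l2(r):
    """Gradient contribution of a single residual r under an l2 misfit:
    it grows without bound, so one outlier can dominate the model update."""
    return r

def influence_student_t(r, nu=4.0, sigma=1.0):
    """Gradient contribution under a Student's t misfit, whose negative
    log-density is ((nu+1)/2) * log(1 + r**2 / (nu * sigma**2)): the
    influence of large (outlier) residuals saturates and then decays."""
    return (nu + 1.0) * r / (nu * sigma ** 2 + r ** 2)
```

The l2 influence of a residual of 100 is 100, while the Student's t influence of the same residual is smaller than that of a residual of 1, which is exactly the down-weighting of outliers described above.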

  7. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point-source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short-term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al., 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem at all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities. Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources such as megacities, power plants and volcanoes, with daily to weekly revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and of the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve

  8. Enhancing GADRAS Source Term Inputs for Creation of Synthetic Spectra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horne, Steven M.; Harding, Lee

    The Gamma Detector Response and Analysis Software (GADRAS) team has enhanced the source term input for the creation of synthetic spectra. These enhancements include the following: allowing users to programmatically provide source information to GADRAS through memory, rather than through a string limited to 256 characters; allowing users to provide their own source decay database information; and updating the default GADRAS decay database to fix errors and include coincident gamma information.

  9. Quantum key distribution with an unknown and untrusted source

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2008-05-01

    The security of a standard bidirectional “plug-and-play” quantum key distribution (QKD) system has been an open question for a long time, mainly because its source is effectively controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussions of this subject have been made previously. In this paper, we address the question directly by presenting a quantitative security analysis of a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard Bennett-Brassard 1984 protocol, the weak+vacuum decoy state protocol, and the one-decoy state protocol, each with an unknown and untrusted source, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate close to that with a trusted source.

  10. Free-space quantum key distribution with a high generation rate potassium titanyl phosphate waveguide photon-pair source

    NASA Astrophysics Data System (ADS)

    Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip R.; Floyd, Bertram; Lind, Alexander J.; Cavin, John D.; Helmick, Spencer R.

    2016-09-01

    A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nm pump diode laser, fiber-coupled to a dual element waveguide within which a pair of 1064-nm photons are up-converted to a single 532-nm photon in the first stage. In the second stage, the 532-nm photon is down-converted to an entangled photon-pair at 800 nm and 1600 nm which are fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable Earth-space secure communications. Characterization and test results are presented. Details and preliminary results of a laboratory free space QKD experiment with the B92 protocol are also presented.

  11. Free-Space Quantum Key Distribution with a High Generation Rate Potassium Titanyl Phosphate Waveguide Photon-Pair Source

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip; Floyd, Bertram M.; Lind, Alexander J.; hide

    2016-01-01

    A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nanometer pump diode laser, fiber-coupled to a dual element waveguide within which a pair of 1064-nanometer photons are up-converted to a single 532-nanometer photon in the first stage. In the second stage, the 532-nanometer photon is down-converted to an entangled photon-pair at 800 nanometer and 1600 nanometer which are fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable Earth-space secure communications. Characterization and test results are presented. Details and preliminary results of a laboratory free-space QKD experiment with the B92 protocol are also presented.

  12. Quantum key distribution with an unknown and untrusted source

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional "plug & play" quantum key distribution (QKD) system has been an open question for a long time, mainly because its source is effectively controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussions of this subject have been made previously. In this paper, we present the first quantitative security analysis of a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard BB84 protocol, the weak+vacuum decoy state protocol, and the one-decoy state protocol, each with an unknown and untrusted source, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate close to that with a trusted source. Our work is published in [1]. [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A 77, 052327 (2008).

  13. PFLOTRAN-RepoTREND Source Term Comparison Summary.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frederick, Jennifer M.

    Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.

  14. Open-source framework for power system transmission and distribution dynamics co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Fan, Rui; Daily, Jeff

    The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, there are currently only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the Framework for Network Co-Simulation (FNCS), together with a decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
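
The decoupled approach can be sketched as a per-time-step exchange of boundary conditions mediated by middleware. The toy transmission and distribution models below are hypothetical stand-ins for illustration only, not FNCS or any actual simulator API:

```python
def transmission_step(total_load_mw):
    """Toy transmission model: interface bus voltage sags linearly with load."""
    return 1.05 - 0.01 * total_load_mw      # per-unit voltage at the interface

def distribution_step(voltage_pu):
    """Toy distribution model: constant-impedance feeder load at the interface."""
    return 4.0 * voltage_pu ** 2            # MW drawn by the feeder

def cosimulate(steps=20):
    """Middleware-style loop: each step, the transmission solver sends a
    voltage to the distribution solver, which returns an updated load."""
    load = 4.0                               # initial guess for feeder load (MW)
    for _ in range(steps):
        voltage = transmission_step(load)    # transmission -> distribution
        load = distribution_step(voltage)    # distribution -> transmission
    return voltage, load
```

With compatible models the exchange settles to a consistent operating point, i.e. the voltage and load satisfy both solvers simultaneously; synchronizing and brokering these exchanges between real simulators is the role FNCS plays.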

  15. Effect of Americium-241 Content on Plutonium Radiation Source Terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainisch, R.

    1998-12-28

    The management of excess plutonium by the US Department of Energy includes a number of storage and disposition alternatives. The Savannah River Site (SRS) is supporting DOE with plutonium disposition efforts, including the immobilization of certain plutonium materials in a borosilicate glass matrix. Surplus plutonium inventories slated for vitrification include materials with elevated levels of Americium-241. The Am-241 content of plutonium materials generally reflects in-growth of the isotope due to decay of Pu-241 and is age-dependent. However, select plutonium inventories have Am-241 levels considerably above the age-based levels. Elevated levels of americium significantly increase the radiation source terms of plutonium materials and make handling of the materials more difficult. Plutonium materials are normally handled in shielded glove boxes, and the work entails both extremity and whole-body exposures. This paper reports results of an SRS analysis of plutonium material source terms versus the Americium-241 content of the materials. Data on the dependence and magnitude of source terms versus Am-241 levels are presented and discussed. The investigation encompasses both vitrified and un-vitrified plutonium oxide (PuO2) batches.
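
The age-dependent in-growth mentioned here follows from Pu-241 beta decay (half-life about 14.35 years) feeding Am-241 (half-life about 432.2 years). A sketch of the Bateman solution for a stock that is initially free of americium:

```python
import math

LN2 = math.log(2.0)
T_HALF_PU241 = 14.35    # years
T_HALF_AM241 = 432.2    # years

def am241_ingrowth(pu241_atoms0, t_years):
    """Am-241 atoms grown in after t_years from an initially pure Pu-241
    stock of pu241_atoms0 atoms (two-member Bateman decay chain)."""
    lp = LN2 / T_HALF_PU241   # Pu-241 decay constant (1/yr)
    la = LN2 / T_HALF_AM241   # Am-241 decay constant (1/yr)
    return pu241_atoms0 * lp / (la - lp) * (math.exp(-lp * t_years)
                                            - math.exp(-la * t_years))
```

Because Pu-241 is so much shorter-lived than Am-241, the Am-241 content keeps rising for decades before peaking, which is why the source term of aged plutonium is dominated by its age unless, as noted above, extra americium is present for other reasons.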

  16. Extending Marine Species Distribution Maps Using Non-Traditional Sources

    PubMed Central

    Moretzsohn, Fabio; Gibeaut, James

    2015-01-01

    Background: Traditional sources of species occurrence data, such as peer-reviewed journal articles and museum-curated collections, are included in species databases after rigorous review by species experts and evaluators. The distribution maps created in this process are an important component of species survival evaluations, and are used to adapt, extend and sometimes contract polygons used in the distribution mapping process. New information: During an IUCN Red List Gulf of Mexico Fishes Assessment Workshop held at the Harte Research Institute for Gulf of Mexico Studies, a session included an open discussion on the topic of including other sources of species occurrence data. During the last decade, advances in portable electronic devices and applications have enabled 'citizen scientists' to record images, location and data about species sightings, and to submit that data to larger species databases. These applications typically generate point data. Attendees of the workshop expressed interest in how that data could be incorporated into existing datasets, how best to ascertain the quality and value of that data, and what other alternate data sources are available. This paper addresses those issues, and provides recommendations to ensure quality data use. PMID:25941453

  17. The long-term problems of contaminated land: Sources, impacts and countermeasures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baes, C.F. III

    1986-11-01

    This report examines the various sources of radiological land contamination; its extent; its impacts on man, agriculture, and the environment; countermeasures for mitigating exposures; radiological standards; alternatives for achieving land decontamination and cleanup; and possible alternatives for utilizing the land. The major potential sources of extensive long-term land contamination with radionuclides, in order of decreasing extent, are nuclear war, detonation of a single nuclear weapon (e.g., a terrorist act), serious reactor accidents, and non-fission nuclear weapons accidents that disperse the nuclear fuels (termed "broken arrows").

  18. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional algorithm based on estimating signal parameters via rotational invariance techniques (ESPRIT) is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions for azimuth and elevation of ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896

  19. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional algorithm based on estimating signal parameters via rotational invariance techniques (ESPRIT) is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions for azimuth and elevation of ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
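
The angulation positioning step reduces, restricted to a plane for brevity, to intersecting two bearing rays from two base stations. A minimal sketch assuming idealized noise-free bearings (planar geometry only, not the paper's 3D URA formulation):

```python
import math

def locate_sensor(bs1, theta1, bs2, theta2):
    """Intersect two bearing rays (angles from the x-axis, radians) from two
    base stations at 2D positions bs1 and bs2 to locate a wearable sensor."""
    x1, y1 = bs1
    x2, y2 = bs2
    d1 = (math.cos(theta1), math.sin(theta1))   # unit direction from bs1
    d2 = (math.cos(theta2), math.sin(theta2))   # unit direction from bs2
    # Solve bs1 + t1*d1 = bs2 + t2*d2 for t1 (2x2 linear system via Cramer)
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    t1 = ((x2 - x1) * (-d2[1]) - (-d2[0]) * (y2 - y1)) / det
    return x1 + t1 * d1[0], y1 + t1 * d1[1]
```

With noisy bearings the two rays rarely intersect exactly, which is one reason the paper uses three cooperating BSs rather than the minimum of two.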

  20. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.

  1. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling the particle transport, calculating the scintillation photons induced by charged particles, simulating the scintillation photon transport, and applying the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, an 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is acceptable agreement between the discriminated neutron/gamma light output distributions obtained from simulation and experiment.

  2. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  3. Quantum key distribution with entangled photon sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.

    2007-07-15

    A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states is able to achieve the highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses.

  4. Intensity distribution of the x ray source for the AXAF VETA-I mirror test

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann

    1992-01-01

    The X-ray generator for the AXAF VETA-I mirror test is an electron impact X-ray source with various anode materials. The source sizes of different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30 micrometer diameter pinhole for imaging the source and a Microchannel Plate Imaging Detector with 25 micrometer FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measuring the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured with different operating parameters. During the VETA-I test, microscope pictures were taken of each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in the pictures; they agree with the results from the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size at 528 meters away, the angular size seen by VETA is less than 0.17 arcsec, which is small compared to the on-ground VETA angular resolution (0.5 arcsec required, 0.22 arcsec measured). Even so, the results show that the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on-ground and predicted in-orbit VETA Point Response Functions.

  5. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity needs of the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  6. A Well-Balanced Path-Integral f-Wave Method for Hyperbolic Problems with Source Terms

    PubMed Central

    2014-01-01

    Systems of hyperbolic partial differential equations with source terms (balance laws) arise in many applications where it is important to compute accurate time-dependent solutions modeling small perturbations of equilibrium solutions in which the source terms balance the hyperbolic part. The f-wave version of the wave-propagation algorithm is one approach, but requires the use of a particular averaged value of the source terms at each cell interface in order to be “well balanced” and exactly maintain steady states. A general approach to choosing this average is developed using the theory of path conservative methods. A scalar advection equation with a decay or growth term is introduced as a model problem for numerical experiments. PMID:24563581
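    The model problem named in this record — scalar advection with a decay term, q_t + u q_x = -λq — makes the well-balancing idea concrete: the f-wave at each interface is the flux difference minus an averaged source, and the average must be chosen so the wave vanishes exactly at the steady state q(x) = q(0) exp(-λx/u). The sketch below uses the logarithmic mean as that average, which achieves this for the exponential steady state; it is a first-order illustration of the principle, not the paper's full path-integral method.

```python
import numpy as np

def fwave_step(q, u, lam, dx, dt):
    """One first-order f-wave update for q_t + u*q_x = -lam*q, with u > 0.

    The interface source average uses the logarithmic mean of the two
    neighboring cell values, so the f-wave z is exactly zero on the
    steady state q(x) = q(0)*exp(-lam*x/u): the scheme is well balanced.
    """
    qL, qR = q[:-1], q[1:]
    # Logarithmic mean of qL, qR; arithmetic mean when qL == qR.
    with np.errstate(divide="ignore", invalid="ignore"):
        logmean = np.where(np.isclose(qL, qR),
                           0.5 * (qL + qR),
                           (qR - qL) / np.log(qR / qL))
    # f-wave at each interface: flux difference minus averaged source.
    z = u * (qR - qL) - dx * (-lam) * logmean
    qnew = q.copy()
    qnew[1:] -= dt / dx * z   # upwind: waves travel to the right (u > 0)
    return qnew
```

    Initializing with the exponential equilibrium and stepping leaves the solution unchanged to machine precision, which is exactly the "exactly maintain steady states" property described above.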

  7. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    NASA Astrophysics Data System (ADS)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
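    A broken power law for the source-count distribution dN/dS, as found in this record, can be written down directly. The parameterization below (break flux, two slopes, continuity at the break) is a generic illustrative form; the paper fits its own normalization and parameter values.

```python
import numpy as np

def dnds_broken_powerlaw(s, s_b, a1, a2, n0=1.0):
    """Broken power-law source-count distribution dN/dS.

    dN/dS ∝ S^-a1 for S >= s_b and S^-a2 for S < s_b, with the two
    pieces joined continuously at the break flux s_b (value n0 there).
    """
    s = np.asarray(s, dtype=float)
    return np.where(s >= s_b,
                    n0 * (s / s_b) ** (-a1),
                    n0 * (s / s_b) ** (-a2))
```

    Fitting such a form per energy band is what lets the analysis trace how the break flux and slopes evolve with energy.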

  8. Electric Transport Traction Power Supply System With Distributed Energy Sources

    NASA Astrophysics Data System (ADS)

    Abramov, E. Y.; Schurov, N. I.; Rozhkova, M. V.

    2016-04-01

    The paper states the problem of leveling the daily load curve of a traction substation (TSS) for urban electric transport. A circuit for a traction power supply system (TPSS) with distributed autonomous energy sources (AES) based on photovoltaic (PV) and energy storage (ES) units is presented. A power flow distribution algorithm for leveling the daily traction load curve is also introduced. In addition, the paper describes an implemented experimental model of the power supply system.

  9. Using sediment particle size distribution to evaluate sediment sources in the Tobacco Creek Watershed

    NASA Astrophysics Data System (ADS)

    Liu, Cenwei; Lobb, David; Li, Sheng; Owens, Philip; Kuzyk, ZouZou

    2014-05-01

    Lake Winnipeg has recently drawn attention because of deteriorated water quality, due in part to nutrient and sediment input from agricultural land. Improving water quality in Lake Winnipeg requires knowledge of the sediment sources within this ecosystem. A variety of environmental fingerprinting techniques have been successfully used in the assessment of sediment sources. In this study, we used particle size distribution to evaluate spatial and temporal variations of suspended sediment and potential sediment sources collected in the Tobacco Creek Watershed in Manitoba, Canada. The particle size distribution of suspended sediment can reflect the origin of the sediment and the processes acting during sediment transport, deposition and remobilization within the watershed. The objectives of this study were to quantify visually observed spatial and temporal changes in sediment particles, and to assess sediment sources using a rapid and cost-effective fingerprinting technique based on particle size distribution. The suspended sediment was collected in sediment traps twice a year, during rainfall and snowmelt periods, from 2009 to 2012. The potential sediment sources included the top soil of cultivated fields, riparian areas and entire profiles from stream banks. Suspended sediment and soil samples were pre-wetted with RO water and passed through a 600 μm sieve before analysis. The particle size distribution of all samples was determined using a Malvern Mastersizer 2000S laser diffraction instrument with a measurement range up to 600 μm. Comparison of the results for different fractions of sediment showed a significant difference in the particle size distribution of suspended sediment between snowmelt and rainfall events. An important difference in particle size distribution was also found between the cultivated soil and forest soil. This difference can be explained by the different land uses, which provided a distinct fingerprint of the sediment. An overall improvement in water quality can be achieved by

  10. CMP reflection imaging via interferometry of distributed subsurface sources

    NASA Astrophysics Data System (ADS)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. In practice, however, such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to produce virtual shot gathers, and in turn CMP gathers, which can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude 5.8 Virginia earthquake of August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.
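    The core interferometric operation behind a virtual shot gather — cross-correlating the recordings of many (sub)surface sources at a pair of receivers and stacking — can be sketched as below. This is a bare-bones illustration with our own function names, not the authors' processing chain (which also includes editing, moveout correction and CMP stacking).

```python
import numpy as np

def virtual_trace(recs_ref, recs_other):
    """Stack cross-correlations over sources to build one virtual trace.

    recs_ref, recs_other : (n_sources, n_samples) recordings of the same
    events at a reference receiver and at a second receiver. The stacked
    correlation peaks at the inter-receiver travel time, as if a virtual
    source fired at the reference receiver.
    """
    n = recs_ref.shape[1]
    out = np.zeros(2 * n - 1)
    for ra, rb in zip(recs_ref, recs_other):
        out += np.correlate(rb, ra, mode="full")  # lag axis: -(n-1)..(n-1)
    return out
```

    Repeating this for every receiver pair yields virtual shot gathers that can then be sorted into CMP gathers and stacked conventionally.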

  11. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    ERIC Educational Resources Information Center

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  12. TRIGA MARK-II source term

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usang, M. D., E-mail: mark-dennis@nuclearmalaysia.gov.my; Hamzah, N. S.; Abi, M. J. B.

    ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to total weight. Each fuel is irradiated for 365 days with a 50-day time step. We obtain results on the total radioactivity of the fuel, the composition of activated materials, the composition of fission products and the photon spectrum of the burned fuel. We investigate the differences between results using the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of major nuclides after 1 year of irradiation for both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To overcome these errors, the irradiation power used in ORIGEN could be increased slightly, so that the differences between the yields of ORIGEN and WIMS are reduced. A more permanent solution is to use a different code altogether to simulate burnup, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.

  13. Distributed Source Modeling of Language with Magnetoencephalography: Application to Patients with Intractable Epilepsy

    PubMed Central

    McDonald, Carrie R.; Thesen, Thomas; Hagler, Donald J.; Carlson, Chad; Devinsky, Orrin; Kuzniecky, Rubin; Barr, William; Gharapetian, Lusineh; Trongnetrpunya, Amy; Dale, Anders M.; Halgren, Eric

    2009-01-01

    Purpose To examine distributed patterns of language processing in healthy controls and patients with epilepsy using magnetoencephalography (MEG), and to evaluate the concordance between laterality of distributed MEG sources and language laterality as determined by the intracarotid amobarbital procedure (IAP). Methods MEG was performed in ten healthy controls using an anatomically-constrained, noise-normalized distributed source solution (dSPM). Distributed source modeling of language was then applied to eight patients with intractable epilepsy. Average source strengths within temporoparietal and frontal lobe regions of interest (ROIs) were calculated, and the laterality of activity within ROIs during discrete time windows was compared to results from the IAP. Results In healthy controls, dSPM revealed activity in visual cortex bilaterally from ~80-120 ms in response to novel words and sensory control stimuli (i.e., false fonts). Activity then spread to fusiform cortex at ~160-200 ms, and was dominated by left hemisphere activity in response to novel words. From ~240-450 ms, novel words produced activity that was left-lateralized in frontal and temporal lobe regions, including anterior and inferior temporal, temporal pole, and pars opercularis, as well as bilaterally in posterior superior temporal cortex. Analysis of patient data with dSPM demonstrated that from 350-450 ms, laterality of temporoparietal sources agreed with the IAP 75% of the time, whereas laterality of frontal MEG sources agreed with the IAP in all eight patients. Discussion Our results reveal that dSPM can unveil the timing and spatial extent of language processes in patients with epilepsy and may enhance knowledge of language lateralization and localization for use in preoperative planning. PMID:19552656

  14. Transverse distribution of beam current oscillations of a 14 GHz electron cyclotron resonance ion source.

    PubMed

    Tarvainen, O; Toivanen, V; Komppula, J; Kalvas, T; Koivisto, H

    2014-02-01

    The temporal stability of oxygen ion beams has been studied with the 14 GHz A-ECR at JYFL (University of Jyvaskyla, Department of Physics). A sector Faraday cup was employed to measure the distribution of the beam current oscillations across the beam profile. The spatial and temporal characteristics of two different oscillation "modes" often observed with the JYFL 14 GHz ECRIS are discussed. It was observed that the low frequency oscillations below 200 Hz are distributed almost uniformly. In the high frequency oscillation "mode," with frequencies >300 Hz, the core of the beam, which carries most of the current, oscillates with smaller amplitude than the peripheral parts of the beam. The results help to explain differences observed between the two oscillation modes in terms of the transport efficiency through the JYFL K-130 cyclotron. The dependence of the oscillation pattern on ion source parameters is a strong indication that the mechanisms driving the fluctuations are plasma effects.

  15. Regulatory Technology Development Plan - Sodium Fast Reactor: Mechanistic Source Term – Trial Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-10-01

    The potential release of radioactive material during a plant incident, referred to as the source term, is a vital design metric and will be a major focus of advanced reactor licensing. The U.S. Nuclear Regulatory Commission has stated an expectation for advanced reactor vendors to present a mechanistic assessment of the potential source term in their license applications. The mechanistic source term presents an opportunity for vendors to realistically assess the radiological consequences of an incident, and may allow reduced emergency planning zones and smaller plant sites. However, the development of a mechanistic source term for advanced reactors is not without challenges, as there are often numerous phenomena impacting the transportation and retention of radionuclides. This project sought to evaluate U.S. capabilities regarding the mechanistic assessment of radionuclide release from core damage incidents at metal fueled, pool-type sodium fast reactors (SFRs). The purpose of the analysis was to identify, and prioritize, any gaps regarding computational tools or data necessary for the modeling of radionuclide transport and retention phenomena. To accomplish this task, a parallel-path analysis approach was utilized. One path, led by Argonne and Sandia National Laboratories, sought to perform a mechanistic source term assessment using available codes, data, and models, with the goal to identify gaps in the current knowledge base. The second path, performed by an independent contractor, performed sensitivity analyses to determine the importance of particular radionuclides and transport phenomena in regards to offsite consequences. The results of the two pathways were combined to prioritize gaps in current capabilities.

  16. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMS can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter that depends on many properties of the crust, e.g., modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns and the maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution of sources has been proposed. We applied this modified centroid method to aeromagnetic data of the central Indian region, selecting 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values are found to be as shallow as the middle crust in the southwestern Deccan trap and probably deeper than the Moho in the Chhattisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places it is shallower than the Moho. The DBMS indicates the complex nature of the Indian crust.
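    The centroid method referred to in this record fits two straight lines to the azimuthally averaged power spectrum of the magnetic anomaly: the top depth Zt from ln√P at high wavenumbers, the centroid depth Z0 from ln(√P/k) at low wavenumbers, and then Zb = 2·Z0 − Zt. The sketch below implements these standard relations with a simple fractal correction exponent β (β = 0 recovers the classical random-source assumption); it is an illustration of the method, not the authors' code, and the fit windows are left to the caller.

```python
import numpy as np

def dbms_centroid(k, P, lo, hi, beta=0.0):
    """Depth to the bottom of magnetic sources via the centroid method.

    k      : radial wavenumbers (rad/km) of the azimuthally averaged
             power spectrum P.
    lo, hi : boolean masks selecting the low- and high-wavenumber
             fit windows.
    beta   : fractal correction exponent (beta = 0 gives the
             conventional centroid method).
    Returns Zb = 2*Z0 - Zt (same length units as 1/k).
    """
    Pc = P * k ** beta                   # undo the assumed k^-beta decay
    # Top depth: ln sqrt(P) ~ c - k*Zt at high wavenumbers.
    Zt = -np.polyfit(k[hi], 0.5 * np.log(Pc[hi]), 1)[0]
    # Centroid depth: ln(sqrt(P)/k) ~ c - k*Z0 at low wavenumbers.
    Z0 = -np.polyfit(k[lo], 0.5 * np.log(Pc[lo]) - np.log(k[lo]), 1)[0]
    return 2.0 * Z0 - Zt
```

    In practice the choice of fit windows and of β strongly affects the estimate, which is why the modified method matters for fractal source distributions.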

  17. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE PAGES

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...

    2018-04-30

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  18. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  19. Long Term 2 Second Round Source Water Monitoring and Bin Placement Memo

    EPA Pesticide Factsheets

    The Long Term 2 Enhanced Surface Water Treatment Rule (LT2ESWTR) applies to all public water systems served by a surface water source or public water systems served by a ground water source under the direct influence of surface water.

  20. High-order scheme for the source-sink term in a one-dimensional water temperature model

    PubMed Central

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method was assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
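    The two-step operator splitting described in this record — advance the source-sink term first, then advance diffusion with Crank-Nicolson — can be sketched for the vertical 1D heat equation T_t = κ·T_zz + S(z). In this sketch the source step is a simple explicit update standing in for the paper's high-order undetermined-coefficient scheme, and insulated (zero-flux) boundaries are assumed; all names are illustrative.

```python
import numpy as np

def step_temperature(T, dt, dz, kappa, S):
    """One operator-split step for T_t = kappa*T_zz + S(z).

    Step 1 applies the source-sink term; step 2 advances diffusion with
    Crank-Nicolson under insulated (zero-flux) boundary conditions.
    """
    T = T + dt * S                        # step 1: source-sink term
    n = len(T)
    r = kappa * dt / (2.0 * dz ** 2)
    # Crank-Nicolson tridiagonal matrices: A T_new = B T_old.
    A = (np.diag((1 + 2 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    B = (np.diag((1 - 2 * r) * np.ones(n))
         + np.diag(r * np.ones(n - 1), 1)
         + np.diag(r * np.ones(n - 1), -1))
    # Zero-flux boundaries: mirror the missing neighbor into the diagonal.
    A[0, 0] = A[-1, -1] = 1 + r
    B[0, 0] = B[-1, -1] = 1 - r
    return np.linalg.solve(A, B @ T)      # step 2: diffusion
```

    Splitting lets each sub-problem use the discretization best suited to it, which is the point the paper makes about treating the source-sink term with its own high-order scheme.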

  1. High-order scheme for the source-sink term in a one-dimensional water temperature model.

    PubMed

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method was assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data.

  2. Experimental quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong

    2015-09-01

    Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect without errors and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that existing experiments are not guaranteed to be proven to be secure in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.

  3. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
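    The tapered Pareto fitting in steps (1) and (3) can be sketched as a maximum likelihood fit. The threshold and parameter values below are made up, and the synthetic amplitudes exploit the fact that the tapered Pareto survival function factors into a Pareto part and an exponential part:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x_t = 0.1                       # assumed threshold amplitude (m)
beta_true, xc_true = 1.0, 2.0   # power-law exponent and corner amplitude

# S(x) = (x_t/x)**beta * exp((x_t - x)/xc) is a product of a Pareto and a
# shifted-exponential survival, so a draw is the minimum of the two draws.
n = 5000
pareto = x_t * (1.0 - rng.random(n)) ** (-1.0 / beta_true)
expon = x_t + rng.exponential(xc_true, n)
x = np.minimum(pareto, expon)

def negloglik(params):
    beta, xc = params
    if beta <= 0 or xc <= 0:
        return np.inf
    # pdf: f(x) = (beta/x + 1/xc) * (x_t/x)**beta * exp((x_t - x)/xc)
    return -np.sum(np.log(beta / x + 1.0 / xc)
                   + beta * np.log(x_t / x) + (x_t - x) / xc)

fit = minimize(negloglik, x0=[0.5, 1.0], method="Nelder-Mead")
beta_hat, xc_hat = fit.x
```

    With a few thousand samples the recovered exponent and corner amplitude land close to the generating values.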

  4. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source, and is implemented by minimising the chi2 per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes, and that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius.
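    A minimal version of this forward/inverse experiment, assuming an infinite homogeneous medium instead of a bounded torso; the electrode layout, belt geometry, and dipole moments below are illustrative, not the paper's configuration:

```python
import numpy as np
from scipy.optimize import least_squares

def dipole_potential(pos, moment, electrodes, sigma=1.0):
    # Infinite-medium point-dipole potential V = p.r / (4 pi sigma |r|^3)
    r = electrodes - pos
    d = np.linalg.norm(r, axis=1)
    return (r @ moment) / (4.0 * np.pi * sigma * d**3)

# 32 electrodes spread over a unit sphere (Fibonacci spiral)
k = np.arange(32)
phi = np.pi * (3.0 - np.sqrt(5.0)) * k
cz = 1.0 - 2.0 * (k + 0.5) / 32
s = np.sqrt(1.0 - cz**2)
electrodes = np.column_stack([s * np.cos(phi), s * np.sin(phi), cz])

# Distributed source: a belt of 8 small dipoles (radius 0.2, centred at z = 0.3)
th = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
belt = 0.2 * np.column_stack([np.cos(th), np.sin(th), np.zeros(8)]) + [0.0, 0.0, 0.3]
moment = np.array([0.0, 0.0, 1e-2 / 8])
v_meas = sum(dipole_potential(p, moment, electrodes) for p in belt)

# Inverse problem: 6-parameter single equivalent dipole minimizing the misfit
def residual(q):
    return dipole_potential(q[:3], q[3:], electrodes) - v_meas

fit = least_squares(residual, x0=[0.0, 0.0, 0.1, 0.0, 0.0, 1e-3])
eq_pos, eq_moment = fit.x[:3], fit.x[3:]
```

    Because the belt is compact relative to the electrode distances, a single equivalent dipole reproduces the electrode potentials with a small residual, even though its fitted position need not coincide with the belt's geometric centre.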

  5. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.
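    A sketch of the forecasting step on synthetic feeder data; scikit-learn's grid search (with `n_jobs=-1`) stands in for the paper's parallel parameter optimization, and all load values below are fabricated:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
# Synthetic 15-min feeder load: daily cycle plus noise (kW)
t = np.arange(96 * 30)
load = 500.0 + 150.0 * np.sin(2.0 * np.pi * t / 96) + rng.normal(0.0, 10.0, t.size)

# Lagged-load features: predict the next interval from the previous four
lags = 4
X = np.column_stack([load[i:i + load.size - lags] for i in range(lags)])
y = load[lags:]

# Parallel hyperparameter search over the SVR regularization constant
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = GridSearchCV(model, {"svr__C": [10.0, 100.0]}, n_jobs=-1, cv=3)
search.fit(X[:2000], y[:2000])
pred = search.predict(X[2000:2200])
mae = float(np.abs(pred - y[2000:2200]).mean())
```

    The forecasted load series would then feed the reconfiguration optimizer in place of a single measured snapshot.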

  7. Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks

    EPA Pesticide Factsheets

    This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models that were used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of American Society of Civil Engineers. American Society of Civil Engineers (ASCE), Reston, VA, USA (2016).

  8. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  9. 77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...

  10. Distributed policy based access to networked heterogeneous ISR data sources

    NASA Astrophysics Data System (ADS)

    Bent, G.; Vyvyan, D.; Wood, David; Zerfos, Petros; Calo, Seraphin

    2010-04-01

    Within a coalition environment, ad hoc Communities of Interest (CoIs) come together, perhaps for only a short time, with different sensors, sensor platforms, data fusion elements, and networks to conduct a task (or set of tasks) with different coalition members taking different roles. In such a coalition, each organization will have its own inherent restrictions on how it will interact with the others. These are usually stated as a set of policies, including security and privacy policies. The capability that we want to enable for a coalition operation is to provide access to information from any coalition partner in conformance with the policies of all. One of the challenges in supporting such ad hoc coalition operations is that of providing efficient access to distributed sources of data, where the applications requiring the data do not have knowledge of the location of the data within the network. To address this challenge, the International Technology Alliance (ITA) program has been developing the concept of a Dynamic Distributed Federated Database (DDFD), also known as a Gaian Database. This type of database provides a means for accessing data across a network of distributed heterogeneous data sources where access to the information is controlled by a mixture of local and global policies. We describe how a network of disparate ISR elements can be expressed as a DDFD and how this approach enables sensor and other information sources to be discovered autonomously or semi-autonomously and/or combined and fused, subject to formally defined local and global policies.

  11. Near term climate projections for invasive species distributions

    USGS Publications Warehouse

    Jarnevich, C.S.; Stohlgren, T.J.

    2009-01-01

    Climate change and invasive species pose important conservation issues separately, and should be examined together. We used existing long-term climate datasets for the US to project potential climate change into the future at a finer spatial and temporal resolution than the climate change scenarios generally available. These fine-scale projections, along with new species distribution modeling techniques to forecast the potential extent of invasive species, can provide useful information to aid conservation and invasive species management efforts. We created habitat suitability maps for Pueraria montana (kudzu) under current climatic conditions and potential average conditions up to 30 years in the future. We examined how the potential distribution of this species will be affected by changing climate, and the management implications associated with these changes. Our models indicated that P. montana may increase its distribution particularly in the Northeast with climate change and may decrease in other areas. © 2008 Springer Science+Business Media B.V.

  12. Superthermal photon bunching in terms of simple probability distributions

    NASA Astrophysics Data System (ADS)

    Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.

    2018-05-01

    We analyze the second-order photon autocorrelation function g(2 ) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g(2 )(0 ) >2 ]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasinglike state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.
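    The proposed thermal-plus-lasinglike mixture can be checked numerically; the weights and mean photon numbers below are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.stats import poisson

n = np.arange(400)  # photon-number axis (truncated; tails are negligible here)

def g2_zero(p):
    """g2(0) = <n(n-1)> / <n>^2 for a photon-number distribution p[n]."""
    mean = np.sum(n * p)
    return np.sum(n * (n - 1) * p) / mean**2

def thermal(nbar):
    # Bose-Einstein (thermal) photon-number distribution with mean nbar
    p = (nbar / (1.0 + nbar)) ** n / (1.0 + nbar)
    return p / p.sum()

# Mixture of a bright thermal state (<n> = 10) and a dim lasing-like
# Poisson state (<n> = 1): a superthermal single-mode distribution
p_mix = 0.5 * thermal(10.0) + 0.5 * poisson.pmf(n, 1.0)
g2_mix = g2_zero(p_mix)
```

    The pure thermal state gives g2(0) = 2 and the Poisson state g2(0) = 1, while the unequal-intensity mixture pushes g2(0) above 2, i.e. superthermal bunching.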

  13. The influence of cross-order terms in interface mobilities for structure-borne sound source characterization

    NASA Astrophysics Data System (ADS)

    Bonhoff, H. A.; Petersson, B. A. T.

    2010-08-01

    For the characterization of structure-borne sound sources with multi-point or continuous interfaces, substantial simplifications and physical insight can be obtained by incorporating the concept of interface mobilities. The applicability of interface mobilities, however, relies upon the admissibility of neglecting the so-called cross-order terms. Hence, the objective of the present paper is to clarify the importance and significance of cross-order terms for the characterization of vibrational sources. From previous studies, four conditions have been identified under which the cross-order terms can become more influential: non-circular interface geometries, structures with distinctly differing transfer paths, suppression of the zero-order motion, and cases where the contact forces are either in phase or out of phase. In a theoretical study, these four conditions are investigated regarding the frequency range and magnitude of a possible strengthening of the cross-order terms. For an experimental analysis, two source-receiver installations are selected, suitably designed to obtain strong cross-order terms. The transmitted power and the source descriptors are predicted by the approximations of the interface mobility approach and compared with the complete calculations. Neglecting the cross-order terms can result in large misinterpretations at certain frequencies. On average, however, the cross-order terms are found to be insignificant and can be neglected with good approximation. The general applicability of interface mobilities for structure-borne sound source characterization, and the description of the transmission process thereby, is confirmed.

  14. Distribution and Source Identification of Pb Contamination in industrial soil

    NASA Astrophysics Data System (ADS)

    Ko, M. S.

    2017-12-01

    INTRODUCTION Lead (Pb) is a toxic element that induces neurotoxic effects in humans, because Pb competes with Ca in the nervous system. Lead is classified as a chalcophile element, and galena (PbS) is its major mineral. Although Pb is not an abundant element in nature, various anthropogenic sources have enhanced Pb enrichment in the environment since the Industrial Revolution. Representative anthropogenic sources are batteries, paint, mining, smelting, and the combustion of fossil fuels. Isotope analysis is widely used to identify Pb contamination sources. Pb has four stable isotopes in nature: 208Pb, 207Pb, 206Pb, and 204Pb. Because these isotopes are stable, their ratios are preserved during physical and chemical fractionation; variations in Pb isotope abundances and ratios can therefore point to a specific contamination source. In this study, distributions and isotope ratios of Pb in industrial soil were used to identify the Pb contamination source and dispersion pathways. MATERIALS AND METHODS Soil samples were collected at depths of 0-6 m from an industrial area in Korea. The collected soil samples were dried and sieved under 2 mm. Soil pH measurement, aqua-regia digestion, and TCLP were carried out on the sieved samples, and isotope analysis was carried out to determine the Pb isotope abundances. RESULTS AND DISCUSSION The study area was land developed for the promotion of industrial facilities. The area was forest in 1980, and satellite images show the alteration of land use over time; these changes imply the possibility that contaminated soil was brought in from outside. Pb concentrations in core samples were higher in deeper soil compared with topsoil. In particular, the 4 m soil sample showed the highest Pb concentration, approximately 1500 mg/kg, indicating that a distinct Pb source existed at 4 m depth. CONCLUSIONS This study investigated the distribution and source identification of Pb in industrial soil. The land use and Pb

  15. Long-term monitoring on environmental disasters using multi-source remote sensing technique

    NASA Astrophysics Data System (ADS)

    Kuo, Y. C.; Chen, C. F.

    2017-12-01

    Environmental disasters are extreme events within the earth's system that cause deaths and injuries to humans, as well as damage and loss of valuable assets such as buildings, communication systems, farmland, and forests. In disaster management, a large amount of multi-temporal spatial data is required, and multi-source remote sensing data with different spatial, spectral, and temporal resolutions is widely applied to environmental disaster monitoring. With multi-source and multi-temporal high-resolution images, we conduct rapid, systematic, and serial observations of economic damage and environmental disasters on earth, based on three monitoring platforms: remote sensing, UAS (Unmanned Aircraft Systems), and ground investigation. The advantages of UAS technology include great mobility and availability for rapid, real-time deployment under more flexible weather conditions. The system can produce long-term spatial distribution information on environmental disasters, obtaining high-resolution remote sensing data and field verification data in key monitoring areas. It also supports the prevention and control of ocean pollution, illegally disposed waste, and pine pests at different scales. Meanwhile, digital photogrammetry can be applied to the camera's interior and exterior orientation parameters to produce Digital Surface Model (DSM) data. The latest terrain environment information is simulated using DSM data and can serve as a reference for disaster recovery in the future.

  16. A Systematic Search for Short-term Variability of EGRET Sources

    NASA Technical Reports Server (NTRS)

    Wallace, P. M.; Griffis, N. J.; Bertsch, D. L.; Hartman, R. C.; Thompson, D. J.; Kniffen, D. A.; Bloom, S. D.

    2000-01-01

    The 3rd EGRET Catalog of High-energy Gamma-ray Sources contains 170 unidentified sources, and there is great interest in the nature of these sources. One means of determining source class is the study of flux variability on time scales of days; pulsars are believed to be stable on these time scales, while blazars are known to be highly variable. In addition, previous work has demonstrated that 3EG J0241-6103 and 3EG J1837-0606 are candidates for a new gamma-ray source class. These sources near the Galactic plane display transient behavior but cannot be associated with any known blazars. Although many instances of flaring AGN have been reported, the EGRET database has not been systematically searched for occurrences of short-timescale (approximately 1 day) variability. These considerations have led us to conduct a systematic search for short-term variability in EGRET data, covering all viewing periods through proposal cycle 4. Six 3EG catalog sources are reported here to display variability on short time scales; four of them are unidentified. In addition, three non-catalog variable sources are discussed.

  17. Oil source bed distribution in upper Tertiary of Gulf Coast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dow, W.G.

    1985-02-01

    Effective oil source beds have not been reported in Miocene and younger Gulf Coast sediments and the organic matter present is invariably immature and oxidized. Crude oil composition, however, indicates origin from mature source beds containing reduced kerogen. Oil distribution suggests extensive vertical migration through fracture systems from localized sources in deeply buried, geopressured shales. A model is proposed in which oil source beds were deposited in intraslope basins that formed behind salt ridges. The combination of silled basin topography, rapid sedimentation, and enhanced oxygen-minimum zones during global warmups resulted in periodic anoxic environments and preservation of oil-generating organic matter. Anoxia was most widespread during the middle Miocene and Pliocene transgressions and rare during regressive cycles when anoxia occurred primarily in hypersaline conditions such as exist today in the Orca basin.

  18. Source term evaluation for combustion modeling

    NASA Technical Reports Server (NTRS)

    Sussman, Myles A.

    1993-01-01

    A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.

  19. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
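    The CN equation itself, plus one simple way to distribute the resulting runoff over the wettest cells, can be sketched as follows; the wetness-index values and the saturated-fraction rule are illustrative simplifications of the CN-VSA method, not its exact formulation:

```python
import numpy as np

def scs_runoff(P, CN):
    """SCS-CN runoff depth Q (inches) for storm depth P (inches)."""
    S = 1000.0 / CN - 10.0   # potential maximum retention
    Ia = 0.2 * S             # initial abstraction
    return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

# Distributed CN-VSA idea: instead of applying runoff uniformly, assign the
# watershed-average runoff to the wettest cells, ranked by a topographic
# wetness index (values below are hypothetical)
P, CN = 3.0, 75.0
Q_avg = float(scs_runoff(P, CN))
twi = np.array([12.0, 9.5, 8.1, 7.2, 6.0])   # one wetness-index value per cell
frac_sat = Q_avg / P                          # saturated-area fraction
n_sat = int(np.ceil(frac_sat * twi.size))
q_local = np.zeros_like(twi)
q_local[np.argsort(twi)[::-1][:n_sat]] = P    # saturated cells shed all rain
```

    For a 3-inch storm on CN = 75, the average runoff is just under 1 inch, which here saturates the two wettest cells.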

  20. Long-term spatial and temporal microbial community dynamics in a large-scale drinking water distribution system with multiple disinfectant regimes.

    PubMed

    Potgieter, Sarah; Pinto, Ameet; Sigudu, Makhosazana; du Preez, Hein; Ncube, Esper; Venter, Stephanus

    2018-08-01

    , temporal variations were consistently stronger as compared to spatial changes at individual sampling locations and demonstrated seasonality. This study emphasises the need for long-term studies to comprehensively understand the temporal patterns that would otherwise be missed in short-term investigations. Furthermore, systematic long-term investigations are particularly critical towards determining the impact of changes in source water quality, environmental conditions, and process operations on the changes in microbial community composition in the drinking water distribution system. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Impact of the differential fluence distribution of brachytherapy sources on the spectroscopic dose-rate constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A., E-mail: mmalin@wisc.edu, E-mail: ladewerd@wisc.edu

    2015-05-15

    Purpose: To investigate why dose-rate constants for {sup 125}I and {sup 103}Pd seeds computed using the spectroscopic technique, Λ{sub spec}, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique’s use of approximations of the true fluence distribution leaving the source, φ{sub full}. In particular, the fluence distribution used in the spectroscopic technique, φ{sub spec}, approximates the spatial, angular, and energy distributions of φ{sub full}. This work quantified the extent to which each of these approximations affects the accuracy of Λ{sub spec}. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ{sub spec}. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ{sub full}, were computed with MC simulations using the full source geometry for each of 14 different {sup 125}I and 6 different {sup 103}Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ{sub spec}. Λ{sub spec} was compared to Λ{sub full} to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ{sub full} were extracted from the phase spaces and were qualitatively compared to those used by φ{sub spec}. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ{sub spec}. The dose-rate constant resulting from using approximated distribution i, Λ{sub approx,i}, was computed using the modified phase space and compared to Λ{sub full}. For each source, this process was repeated for each approximation in order to determine which approximations

  2. Fiber optic distributed temperature sensing for fire source localization

    NASA Astrophysics Data System (ADS)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fibers were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling to simulate a real scene of fire. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors with respect to predicting the exact room coordinates of the fire source caused by the uneven ceiling were observed. Two mathematical methods (smoothing recorded temperature curves and finding temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
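    The smoothing-and-peak-finding step can be sketched on synthetic temperature traces; the fiber geometry, hot-spot shape, and noise level below are assumptions:

```python
import numpy as np

def locate_peak(temps, window=5):
    """Smooth a distributed-temperature trace and return the index of its peak."""
    kernel = np.ones(window) / window
    smooth = np.convolve(temps, kernel, mode="same")
    return int(np.argmax(smooth))

rng = np.random.default_rng(2)
pos = np.arange(200) * 0.5                    # sensing points every 0.5 m

def trace(center):
    # Gaussian hot spot over a 20 degC ambient, plus measurement noise
    return 20.0 + 15.0 * np.exp(-((pos - center) / 3.0) ** 2) \
        + rng.normal(0.0, 0.5, pos.size)

# Two orthogonal fibers give the two room coordinates of the fire source
x_fiber, y_fiber = trace(34.0), trace(61.0)
x_est = pos[locate_peak(x_fiber)]
y_est = pos[locate_peak(y_fiber)]
```

    The estimated coordinates fall within a sensing interval or two of the simulated hot-spot centers, mirroring how smoothing suppresses the fluctuations reported in the experiment.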

  3. Flows and Stratification of an Enclosure Containing Both Localised and Vertically Distributed Sources of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2013-11-01

    We examine the flows and stratification established in a naturally ventilated enclosure containing both a localised and vertically distributed source of buoyancy. The enclosure is ventilated through upper and lower openings which connect the space to an external ambient. Small scale laboratory experiments were carried out with water as the working medium and buoyancy being driven directly by temperature differences. A point source plume gave localised heating while the distributed source was driven by a controllable heater mat located in the side wall of the enclosure. The transient temperatures, as well as steady state temperature profiles, were recorded and are reported here. The temperature profiles inside the enclosure were found to be dependent on the effective opening area A*, a combination of the upper and lower openings, and the ratio of buoyancy fluxes from the distributed and localised source Ψ = Bw/Bp. Industrial CASE award with ARUP.

  4. Panchromatic spectral energy distributions of Herschel sources

    NASA Astrophysics Data System (ADS)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected restframe ten colors space, based on this rich dataset, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with the combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strength of the AGN contribution to bolometric luminosities. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the restframe UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When implementing Herschel photometry, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by fine tuning of the codes, rather than by the choice of
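    A toy version of the Gaussian-mode grouping and outlier identification, in a mock two-color space rather than the ten-color space used in the paper; the cluster positions and scatter are fabricated:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Mock rest-frame colors for two galaxy classes; the PEP/HerMES analysis
# uses ten colors and 6-9 Gaussian modes per field
blue = rng.normal([0.2, 1.0], 0.15, size=(300, 2))
red = rng.normal([1.5, 0.3], 0.15, size=(300, 2))
colors = np.vstack([blue, red])

gmm = GaussianMixture(n_components=2, random_state=0).fit(colors)
labels = gmm.predict(colors)

# "Zoology of outliers": sources with low likelihood under the fitted model
logl = gmm.score_samples(colors)
outliers = colors[logl < np.percentile(logl, 1)]
```

    Each Gaussian mode plays the role of a galaxy class whose members would be stacked into a median SED, while the low-likelihood points flag rare or odd sources.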

  5. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.
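
    The abstract does not give the screening procedure itself, so the following is a minimal sketch of one common first step, one-at-a-time (OAT) screening: each parameter is perturbed around its nominal value and parameters are ranked by the resulting relative output change. The `dust_release` model and all parameter names and values are hypothetical stand-ins, not the paper's model.

```python
# One-at-a-time (OAT) screening: rank the inputs of a dust source-term
# model by the output change from a +/-10% perturbation of each input.
# The model and parameter values below are hypothetical illustrations.

def dust_release(params):
    """Toy dust source term: release ~ inventory * resuspension / deposition."""
    return params["inventory"] * params["resuspension"] / params["deposition"]

def oat_screening(model, nominal, rel_step=0.10):
    """Return parameters ranked by absolute relative output change."""
    base = model(nominal)
    effects = {}
    for name in nominal:
        hi = dict(nominal); hi[name] = nominal[name] * (1 + rel_step)
        lo = dict(nominal); lo[name] = nominal[name] * (1 - rel_step)
        effects[name] = abs(model(hi) - model(lo)) / abs(base)
    return sorted(effects.items(), key=lambda kv: -kv[1])

nominal = {"inventory": 120.0, "resuspension": 1e-4, "deposition": 0.02}
ranking = oat_screening(dust_release, nominal)
```

    Parameters whose perturbation barely moves the output would be frozen at nominal values, shrinking the space passed on to the more expensive sensitivity and uncertainty analyses.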

  6. Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, D.; Brunett, A.; Passerini, S.

    Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence-specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.

  7. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1988-01-01

    The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
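
    The spurious-speed phenomenon described above can be reproduced in a few lines. Below is a minimal sketch of a fractional-step (splitting) scheme on a scalar model problem of the type studied here, u_t + u_x = -mu·u(u-1)(u-1/2); the grid sizes and the value of mu are illustrative choices, not the paper's setup.

```python
# Fractional-step (splitting) scheme for the scalar model problem
#   u_t + u_x = -mu * u * (u - 1) * (u - 1/2).
# An upwind advection step is followed by a sub-cycled explicit Euler
# integration of the source ODE. For large mu the smeared intermediate
# values are pushed to the nearest stable equilibrium (0 or 1), so the
# captured front moves exactly one grid cell per time step, i.e. at the
# spurious speed dx/dt rather than the true speed 1.

N, cfl, mu, nsub = 200, 0.8, 1.0e4, 400
dx = 1.0 / N
dt = cfl * dx                      # advection speed is 1

# step profile: u = 1 behind the front at x = 0.3, u = 0 ahead of it
u = [1.0 if (i + 0.5) * dx < 0.3 else 0.0 for i in range(N)]

def step(u):
    # upwind advection (inflow u = 1 at the left boundary)
    adv = [u[i] - cfl * (u[i] - (u[i - 1] if i else 1.0)) for i in range(N)]
    # stiff source ODE, sub-cycled explicit Euler
    out = []
    h = dt / nsub
    for v in adv:
        for _ in range(nsub):
            v -= h * mu * v * (v - 1.0) * (v - 0.5)
        out.append(v)
    return out

nsteps = 50
for _ in range(nsteps):
    u = step(u)

true_front = 0.3 + nsteps * dt     # exact front position (speed 1)
num_front = max((i + 0.5) * dx for i in range(N) if u[i] > 0.5)
```

    With cfl = 0.8 the numerical front travels at speed dx/dt = 1.25 instead of 1, regardless of how accurately the source ODE substep is integrated, which is the pathology the abstract describes.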

  8. Realizing measurement-device-independent quantum key distribution with passive heralded single-photon sources

    PubMed Central

    Wang, Qin; Zhou, Xing-Yu; Guo, Guang-Can

    2016-01-01

    In this paper, we put forward a new approach towards realizing measurement-device-independent quantum key distribution with passive heralded single-photon sources. In this approach, both Alice and Bob prepare a parametric down-conversion source, where the heralding photons are labeled according to different types of clicks from the local detectors, and the heralded ones can correspondingly be marked with different tags at the receiver’s side. One can then obtain four sets of data using only one intensity of pump light by observing the different kinds of clicks of the local detectors. By employing the newest formulae for parameter estimation, we can achieve a very precise prediction for the two-single-photon pulse contribution. Furthermore, by carrying out corresponding numerical simulations, we compare the new method with other practical schemes of measurement-device-independent quantum key distribution. We demonstrate that our proposed passive scheme exhibits remarkable improvement over the conventional three-intensity decoy-state measurement-device-independent quantum key distribution with either heralded single-photon sources or weak coherent sources. Besides, it does not need intensity modulation and can thus diminish the source-error defects existing in several other active decoy-state methods. Therefore, if intensity-modulation errors are taken into account, our new method shows even better performance. PMID:27759085

  9. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus used to get information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  10. Potential breeding distributions of U.S. birds predicted with both short-term variability and long-term average climate data

    Treesearch

    Brooke L. Bateman; Anna M. Pidgeon; Volker C. Radeloff; Curtis H. Flather; Jeremy VanDerWal; H. Resit Akcakaya; Wayne E. Thogmartin; Thomas P. Albright; Stephen J. Vavrus; Patricia J. Heglund

    2016-01-01

    Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in...

  11. Source Distributions of Substorm Ions Observed in the Near-Earth Magnetotail

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M.; El-Alaoui, M.; Peroomian, V.; Walker, R. J.; Raeder, J.; Frank, L. A.; Paterson, W. R.

    1999-01-01

    This study employs Geotail plasma observations and numerical modeling to determine sources of the ions observed in the near-Earth magnetotail near midnight during a substorm. The growth phase has the low-latitude boundary layer as its most important source of ions at Geotail, but during the expansion phase the plasma mantle is dominant. The mantle distribution shows evidence of two distinct entry mechanisms: entry through a high latitude reconnection region resulting in an accelerated component, and entry through open field lines traditionally identified with the mantle source. The two entry mechanisms are separated in time, with the high-latitude reconnection region disappearing prior to substorm onset.

  12. Evaluation of Long-term Performance of Enhanced Anaerobic Source Zone Bioremediation using mass flux

    NASA Astrophysics Data System (ADS)

    Haluska, A.; Cho, J.; Hatzinger, P.; Annable, M. D.

    2017-12-01

    Chlorinated ethene DNAPL source zones in groundwater act as potential long-term sources of contamination: as they dissolve, they yield concentrations well above MCLs, posing an ongoing public health risk. Enhanced bioremediation has been applied to treat many source zones with significant promise, but the long-term sustainability of this technology has not been thoroughly assessed. This study evaluated the long-term effectiveness of enhanced anaerobic source zone bioremediation at chloroethene-contaminated sites to determine if the treatment prevented contaminant rebound and removed NAPL from the source zone. Long-term performance was evaluated based on achieving MCL-based contaminant mass fluxes in parent compound concentrations during different monitoring periods. Groundwater concentration versus time data were compiled for six sites, and post-remedial contaminant mass flux data were then measured using passive flux meters at wells both within and down-gradient of the source zone. Post-remedial mass flux data were then combined with pre-remedial water quality data to estimate pre-remedial mass flux. This information was used to characterize a DNAPL dissolution source strength function, such as the Power Law Model and the Equilibrium Streamtube Model. The six sites characterized for this study were (1) Former Charleston Air Force Base, Charleston, SC; (2) Dover Air Force Base, Dover, DE; (3) Treasure Island Naval Station, San Francisco, CA; (4) Former Raritan Arsenal, Edison, NJ; (5) Naval Air Station, Jacksonville, FL; and (6) Former Naval Air Station, Alameda, CA. Contaminant mass fluxes decreased for all the sites by the end of the post-treatment monitoring period, and rebound was limited within the source zone. Post-remedial source strength function estimates suggest that decreases in contaminant mass flux will continue to occur at these sites, but a mass flux based on MCL levels may never be exceeded. Thus, site clean-up goals should be evaluated as order
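
    To illustrate the kind of source strength function mentioned above, here is a minimal sketch of a power-law source depletion model, in which aqueous concentration scales as a power of the remaining source mass, C(t)/C0 = (M(t)/M0)^gamma, coupled to a simple mass balance. All parameter values are hypothetical and chosen only for illustration; they are not the study's site data.

```python
# Power-law DNAPL source depletion sketch:
#   C(t)/C0 = (M(t)/M0)**gamma,   dM/dt = -Qw * C(t)
# integrated with forward Euler. Qw is the water flux through the
# source zone; gamma controls the shape of the dissolution curve.
# All values below are hypothetical, for illustration only.

def simulate(C0, M0, gamma, Qw, dt, t_end):
    """Return (times, concentrations) for the depleting source."""
    t, M = 0.0, M0
    times, concs = [0.0], [C0]
    while t < t_end:
        C = C0 * (M / M0) ** gamma
        M = max(M - Qw * C * dt, 0.0)
        t += dt
        times.append(t)
        concs.append(C0 * (M / M0) ** gamma)
    return times, concs

# TCE-like example: C0 in g/m3, M0 in g, Qw in m3/yr (hypothetical)
times, concs = simulate(C0=50.0, M0=2.0e5, gamma=1.0, Qw=500.0, dt=0.1, t_end=50.0)
```

    With gamma = 1 the model reduces to exponential decay of the source-zone flux; gamma below or above 1 produces the flatter or steeper tailing behavior used when extrapolating whether an MCL-based flux target will ever be met.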

  13. Measurement-device-independent quantum key distribution with source state errors and statistical fluctuation

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2017-03-01

    We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely only on the range of only a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.

  14. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    NASA Astrophysics Data System (ADS)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSO) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and the Two Micron All Sky Survey. We present a study of over 900 MYSO and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in the context of the observed structure of the galaxy. The distribution of MYSO and UCHII regions is found to be spatially correlated with the spiral arms and galactic bar. We examine the radial distribution of MYSOs and UCHII regions, find variations in the star formation rate between the inner and outer Galaxy, and discuss the implications for star formation throughout the galactic disc.

  15. Distributed source model for the full-wave electromagnetic simulation of nonlinear terahertz generation.

    PubMed

    Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek

    2012-07-30

    The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.

  16. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In traditional dynamic network reconfiguration studies, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. Load forecasting techniques can provide accurate predictions of the load power at future times and more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions over a longer time period instead of a snapshot of the load at the time when the reconfiguration happens, and thus it can provide information to the distribution system operator (DSO) to better operate the system reconfiguration and achieve optimal solutions. This paper therefore proposes a short-term load forecasting based approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term, high-resolution distribution system load forecasting approach is proposed with a support vector regression (SVR) based forecaster and parallel parameter optimization. The network reconfiguration problem is then solved by using the forecasted load continuously to determine the optimal network topology with the minimum loss at the future time. The simulation results validate and evaluate the proposed approach.
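
    The forecast-then-reconfigure idea can be sketched end to end in miniature. In place of the paper's SVR forecaster, the sketch below fits a least-squares AR(2) predictor to an hourly load history, and the "network" is a hypothetical two-path feeder where each candidate topology is scored by its relative I²R loss under the forecasted load; all loads, resistances, and topology names are invented for illustration.

```python
# Forecast-driven reconfiguration sketch. A least-squares AR(2) load
# forecaster stands in for the paper's SVR, and reconfiguration reduces
# to picking the candidate path with the lowest relative I^2 * R loss
# under the forecasted load. Everything below is hypothetical.
import math

def fit_ar2(series):
    """Least-squares fit of y_t ~ a*y_{t-1} + b*y_{t-2} (normal equations)."""
    X = [(series[t - 1], series[t - 2]) for t in range(2, len(series))]
    y = series[2:]
    s11 = sum(x1 * x1 for x1, _ in X); s12 = sum(x1 * x2 for x1, x2 in X)
    s22 = sum(x2 * x2 for _, x2 in X)
    r1 = sum(x1 * yt for (x1, _), yt in zip(X, y))
    r2 = sum(x2 * yt for (_, x2), yt in zip(X, y))
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (r2 * s11 - r1 * s12) / det

# hourly load history (kW), roughly periodic over 24 h
hist = [100 + 20 * math.sin(2 * math.pi * t / 24) for t in range(96)]
a, b = fit_ar2(hist)
forecast = a * hist[-1] + b * hist[-2]       # next-hour load prediction

# two candidate radial paths feeding the same load; score by I^2 * R
V = 11.0                                     # feeder voltage, kV
topologies = {"feeder-A": 0.8, "feeder-B": 0.5}   # path resistance, ohm
losses = {k: (forecast / V) ** 2 * R for k, R in topologies.items()}
best = min(losses, key=losses.get)
```

    In the paper's pre-event scheme the forecast horizon would cover the whole upcoming period, so the chosen topology minimizes loss over the predicted trajectory rather than over a single measured snapshot.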

  17. Source terms, shielding calculations and soil activation for a medical cyclotron.

    PubMed

    Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E

    2016-12-01

    Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the 18O(p,n)18F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 and FLUKA (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) codes, as well as with data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the 18O(p,n)18F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. Soil activation was calculated using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv h^-1 and thus significantly below the reference value of 0.5 µSv h^-1 (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq g^-1.

  18. Plutonium isotopes and 241Am in the atmosphere of Lithuania: A comparison of different source terms

    NASA Astrophysics Data System (ADS)

    Lujanienė, G.; Valiulis, D.; Byčenkienė, S.; Šakalys, J.; Povinec, P. P.

    2012-12-01

    137Cs, 241Am and Pu isotopes collected in aerosol samples during 1994-2011 were analyzed with special emphasis on better understanding Pu and Am behavior in the atmosphere. The results from long-term measurements of 240Pu/239Pu atom ratios showed a bimodal frequency distribution with median values of 0.195 and 0.253, indicating two main sources contributing to the Pu activities at the Vilnius sampling station. The low Pu atom ratio of 0.141 could be attributed to weapon-grade plutonium derived from the nuclear weapon test sites. The frequency of air masses arriving from the North-West and North-East correlated with the Pu atom ratio, indicating input from sources located in these regions (the Novaya Zemlya test site, Siberian nuclear plants), while no correlation with the Chernobyl region was observed. Measurements carried out during the Fukushima accident showed a negligible impact of this source, with Pu activities four orders of magnitude lower than those from the Chernobyl accident. The activity concentration of actinides measured in the integrated sample collected in March-April 2011 showed a small contribution of Pu with unusual activity and atom ratios, indicating the presence of spent fuel of different origin than that of the Chernobyl accident.

  19. Eccentric Black Hole Gravitational-wave Capture Sources in Galactic Nuclei: Distribution of Binary Parameters

    NASA Astrophysics Data System (ADS)

    Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt

    2018-06-01

    Mergers of binary black holes on eccentric orbits are among the targets for second-generation ground-based gravitational-wave detectors. These sources may commonly form in galactic nuclei due to gravitational-wave emission during close flyby events of single objects. We determine the distributions of initial orbital parameters for a population of these gravitational-wave sources. Our results show that the initial dimensionless pericenter distance systematically decreases with the binary component masses and the mass of the central supermassive black hole, and its distribution depends sensitively on the highest possible black hole mass in the nuclear star cluster. For a multi-mass black hole population with masses between 5 M⊙ and 80 M⊙, we find that between ∼43-69% (68-94%) of 30 M⊙-30 M⊙ (10 M⊙-10 M⊙) sources have an eccentricity greater than 0.1 when the gravitational-wave signal reaches 10 Hz, but less than ∼10% of the sources with binary component masses less than 30 M⊙ remain eccentric at this level near the last stable orbit (LSO). The eccentricity at LSO is typically between 0.005 and 0.05 for the lower-mass BHs, and 0.1-0.2 for the highest-mass BHs. Thus, due to the limited low-frequency sensitivity, the six currently known quasicircular LIGO/Virgo sources could still be compatible with this originally highly eccentric source population. However, at the design sensitivity of these instruments, the measurement of the eccentricity and mass distribution of merger events may be a useful diagnostic to identify the fraction of GW sources formed in this channel.

  20. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    DOE PAGES

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; ...

    2016-07-05

    Here we present a characterization of the short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will happen. We provide a formal proof of this formula and empirically show that its predictions are accurate. Previous work covers only the special case of balanced families; it has been observed that those characterizations fail for unbalanced families, yet such families are widespread in real biological networks.
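
    The abstract does not reproduce the formula, so the sketch below uses the classical criticality proxy from the random Boolean network literature: damage spreads when the expected average sensitivity of the transfer-function distribution exceeds 1. The paper's exact formula may differ; the mixture of functions here is an invented example.

```python
# Damage-spreading proxy for random Boolean networks: compute the
# average sensitivity of each transfer function (mean number of input
# bit-flips that change the output) and take the expectation over the
# function distribution; expected sensitivity > 1 signals short-term
# chaos. This is the classical criterion, used here as a stand-in.
from itertools import product

def avg_sensitivity(f, k):
    """Average over all k-bit inputs of the number of single-bit flips
    that change f's output."""
    total = 0
    for x in product((0, 1), repeat=k):
        fx = f(x)
        total += sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != fx for i in range(k))
    return total / 2 ** k

def xor2(x): return x[0] ^ x[1]   # sensitivity 2: chaotic on its own
def and2(x): return x[0] & x[1]   # sensitivity 1: critical

# hypothetical distribution: 30% XOR nodes, 70% AND nodes
expected_s = 0.3 * avg_sensitivity(xor2, 2) + 0.7 * avg_sensitivity(and2, 2)
chaotic = expected_s > 1.0
```

    Note that AND is an unbalanced function (three 0-outputs, one 1-output), the case where balance-based characterizations break down and a sensitivity-style criterion is still well defined.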

  1. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region-growing approach. This extension allows the reconstruction of brain structures besides the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrate the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP-inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation as it allows the introduction of more complex head models and volumetric source priors in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Generally Licensed Items § 32.74 Manufacture and distribution of sources or devices containing byproduct material for...

  3. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    NASA Astrophysics Data System (ADS)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity Lν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  4. High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array.

    PubMed

    Qian, Xin; Tucker, Andrew; Gidcumb, Emily; Shan, Jing; Yang, Guang; Calderon-Colon, Xiomara; Sultana, Shabana; Lu, Jianping; Zhou, Otto; Spronk, Derrek; Sprenger, Frank; Zhang, Yiheng; Kennedy, Don; Farbizio, Tom; Jing, Zhenxue

    2012-04-01

    resolution along the scanning direction increased from 4.0 cycles/mm [at 10% modulation-transfer-function (MTF)] in DBT to 5.1 cycles/mm in s-DBT at a magnification factor of 1.08. The improvement is more pronounced for faster scanning speeds, wider angular coverage, and smaller detector pixel sizes. The scanning speed depends on the detector, the number of views, and the imaging dose. With 240 ms detector readout time, the s-DBT system scanning time is 6.3 s for a 15-view, 100 mAs scan regardless of the angular coverage. The scanning speed can be reduced to less than 4 s when detectors become faster. Initial phantom studies showed good quality reconstructed images. A prototype s-DBT scanner has been developed and evaluated by retrofitting the Selenia rotating-gantry DBT scanner with a spatially distributed CNT x-ray source array. Preliminary results show that it substantially improves system spatial resolution by eliminating image blur due to x-ray focal spot motion. The scanning speed of the s-DBT system is independent of angular coverage and can be increased with a faster detector without image degradation. The accelerated lifetime measurement demonstrated the long-term stability of the CNT x-ray source array, with a typical clinical operation lifetime over 3 years.

  5. Open-source hardware for medical devices.

    PubMed

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  6. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, J. L.

    1986-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated in problems of more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
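
    The point-implicit idea (explicit treatment of the flux, implicit treatment of the stiff source) is easy to demonstrate on a scalar model equation. The sketch below uses a linear relaxation source u_t + u_x = -lam·(u - ueq), a stand-in chosen so the implicit update can be solved in closed form; the grid and lam are illustrative.

```python
# Point-implicit (semi-implicit) update for the scalar model equation
#   u_t + u_x = -lam * (u - ueq):
# explicit upwind for the flux, source taken at the new time level,
#   (1 + lam*dt) * u_new = u_adv + lam*dt*ueq.
# The scheme remains stable with lam*dt >> 1, whereas an explicit
# source treatment would require dt on the order of 1/lam.

N, lam, ueq = 100, 1.0e4, 0.5
dx = 1.0 / N
dt = 0.5 * dx                      # CFL 0.5 for unit advection speed

u = [0.0] * N
for _ in range(200):
    # explicit upwind advection (inflow held at the equilibrium value)
    adv = [u[i] - (dt / dx) * (u[i] - (u[i - 1] if i else ueq))
           for i in range(N)]
    # implicit source update, solved exactly since the source is linear
    u = [(v + lam * dt * ueq) / (1.0 + lam * dt) for v in adv]

# the solution relaxes to the equilibrium ueq everywhere
max_err = max(abs(v - ueq) for v in u)
```

    For a nonlinear source S(u), the closed-form division is replaced by a linearized solve with (I - dt·∂S/∂u), which is where the Jacobian properties mentioned in the abstract enter.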

  7. Sources, distribution, bioavailability, toxicity, and risk assessment of heavy metal(loid)s in complementary medicines.

    PubMed

    Bolan, Shiv; Kunhikrishnan, Anitha; Seshadri, Balaji; Choppala, Girish; Naidu, Ravi; Bolan, Nanthi S; Ok, Yong Sik; Zhang, Ming; Li, Chun-Guang; Li, Feng; Noller, Barry; Kirkham, Mary Beth

    2017-11-01

    The last few decades have seen the rise of alternative medical approaches including the use of herbal supplements, natural products, and traditional medicines, which are collectively known as 'complementary medicines'. However, there are increasing concerns about the safety and health benefits of these medicines. One of the main hazards with the use of complementary medicines is the presence of heavy metal(loid)s such as arsenic (As), cadmium (Cd), lead (Pb), and mercury (Hg). This review deals with the characteristics of complementary medicines in terms of heavy metal(loid) sources, distribution, bioavailability, toxicity, and human risk assessment. The heavy metal(loid)s in these medicines are derived from uptake by medicinal plants, cross-contamination during processing, and therapeutic input of metal(loid)s. This paper discusses the distribution of heavy metal(loid)s in these medicines in terms of their nature, concentration, and speciation. The importance of determining bioavailability for human health risk assessment is emphasized by the need to estimate the daily intake of heavy metal(loid)s in complementary medicines. The review ends with selected case studies of heavy metal(loid) toxicity from complementary medicines, with specific reference to As, Cd, Pb, and Hg. The future research opportunities mentioned in the conclusion of the review will help researchers to explore new avenues, methodologies, and approaches to the issue of heavy metal(loid)s in complementary medicines, thereby generating new regulations and proposing a fresh approach towards the safe use of these medicines. Copyright © 2017 Elsevier Ltd. All rights reserved.
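
    The daily-intake risk screening mentioned above is commonly expressed as a hazard quotient. The sketch below shows that arithmetic with hypothetical concentrations and intake; the oral reference doses follow the values commonly cited from US EPA IRIS for As and Cd, but should be verified against the current database before any real use.

```python
# Non-carcinogenic screening risk for metal(loid) intake from a
# complementary medicine:
#   ADD = C * IR / BW   (average daily dose, mg/kg-day)
#   HQ  = ADD / RfD     (hazard quotient; potential concern if > 1)
# Concentrations and intake rate below are hypothetical; RfD values
# follow commonly cited US EPA IRIS oral reference doses.

RFD = {"As": 3.0e-4, "Cd": 1.0e-3}   # oral RfD, mg/kg-day

def hazard_quotient(conc_mg_per_g, intake_g_per_day, body_kg, rfd):
    add = conc_mg_per_g * intake_g_per_day / body_kg   # mg/kg-day
    return add / rfd

# e.g. 2 ug/g As and 1 ug/g Cd in a remedy taken at 3 g/day, 60 kg adult
hq_as = hazard_quotient(0.002, 3.0, 60.0, RFD["As"])
hq_cd = hazard_quotient(0.001, 3.0, 60.0, RFD["Cd"])
hazard_index = hq_as + hq_cd        # summed over co-occurring metals
```

    A bioavailability correction would multiply the ADD by the bioaccessible fraction, which is exactly why the review stresses measuring bioavailability rather than total concentration.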

  8. Source-term characterisation and solid speciation of plutonium at the Semipalatinsk NTS, Kazakhstan.

    PubMed

    Nápoles, H Jiménez; León Vintró, L; Mitchell, P I; Omarova, A; Burkitbayev, M; Priest, N D; Artemyev, O; Lukashenko, S

    2004-01-01

    New data on the concentrations of key fission/activation products and transuranium nuclides in samples of soil and water from the Semipalatinsk Nuclear Test Site are presented and interpreted. Sampling was carried out at Ground Zero, Lake Balapan, the Tel'kem craters and reference locations within the test site boundary well removed from localised sources. Radionuclide ratios have been used to characterise the source term(s) at each of these sites. The geochemical partitioning of plutonium has also been examined and it is shown that the bulk of the plutonium contamination at most of the sites examined is in a highly refractory, non-labile form.

  9. Distribution of tsunami interevent times

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2008-01-01

    The distribution of tsunami interevent times is analyzed using global and site-specific (Hilo, Hawaii) tsunami catalogs. An empirical probability density distribution is determined by binning the observed interevent times during a period in which the observation rate is approximately constant. The empirical distributions for both catalogs exhibit non-Poissonian behavior in which there is an abundance of short interevent times compared to an exponential distribution. Two types of statistical distributions are used to model this clustering behavior: (1) long-term clustering described by a universal scaling law, and (2) Omori law decay of aftershocks and triggered sources. The empirical and theoretical distributions all imply an increased hazard rate after a tsunami, followed by a gradual decrease with time approaching a constant hazard rate. Examination of tsunami sources suggests that many of the short interevent times are caused by triggered earthquakes, though the triggered events are not necessarily on the same fault.
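    The empirical check described above can be sketched numerically: bin the interevent times of a catalog and compare the fraction of short intervals against the exponential (Poisson) prediction. The catalog below is synthetic, with an assumed 30% triggering probability standing in for aftershock-generated tsunamis.

```python
# Hedged sketch: clustered event catalogs show an excess of short interevent
# times relative to an exponential distribution with the same mean rate.
import math
import random

random.seed(1)

# Synthetic "catalog" (times in days): background Poisson events, plus a
# short-delay triggered event after 30% of them (both rates are assumptions).
times, t = [], 0.0
while t < 5000.0:
    t += random.expovariate(0.05)                   # background: ~1 per 20 days
    times.append(t)
    if random.random() < 0.3:                       # assumed triggering chance
        times.append(t + random.expovariate(2.0))   # mean delay 0.5 days
times.sort()

inter = [b - a for a, b in zip(times, times[1:])]
mean_dt = sum(inter) / len(inter)

# Fraction of interevent times shorter than 0.1 * mean; an exponential
# distribution predicts 1 - exp(-0.1) ≈ 0.095.
frac_obs = sum(dt < 0.1 * mean_dt for dt in inter) / len(inter)
frac_exp = 1.0 - math.exp(-0.1)
print(frac_obs, frac_exp)  # clustering makes frac_obs exceed frac_exp
```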

  10. Time-frequency approach to underdetermined blind source separation.

    PubMed

    Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong

    2012-02-01

    This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, in which the negative values of the auto WVDs of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be determined exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. The extraction of auto-term TF points is discussed further, and numerical simulation results are presented to show the superiority of the proposed algorithm over existing ones.
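    The auto-term structure such methods rely on can be illustrated with a minimal discrete (pseudo) Wigner-Ville distribution. This is the textbook construction from the instantaneous autocorrelation, not the authors' implementation; note the well-known factor-of-two scaling of the WVD frequency axis.

```python
# Minimal pseudo-WVD sketch in NumPy: W[t, :] is the FFT over lag m of the
# instantaneous autocorrelation x[t+m] * conj(x[t-m]).
import numpy as np

def pseudo_wvd(x, n_lags=64):
    x = np.asarray(x, dtype=complex)
    n = len(x)
    w = np.zeros((n, 2 * n_lags), dtype=float)
    for t in range(n):
        r = np.zeros(2 * n_lags, dtype=complex)
        for m in range(-n_lags + 1, n_lags):
            if 0 <= t + m < n and 0 <= t - m < n:
                r[m % (2 * n_lags)] = x[t + m] * np.conj(x[t - m])
        # r[-m] = conj(r[m]), so the FFT is real-valued up to rounding.
        w[t] = np.fft.fft(r).real
    return w

# A complex sinusoid concentrates its WVD energy on a ridge; because the
# WVD frequency axis is scaled by 2, a tone at f0 = 0.25 cycles/sample
# peaks at bin fraction 2*f0 = 0.5.
t = np.arange(256)
sig = np.exp(2j * np.pi * 0.25 * t)
w = pseudo_wvd(sig)
peak = w[128].argmax() / w.shape[1]
print(peak / 2)  # recovered f0 ≈ 0.25
```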

  11. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
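    The traditional SCS-CN runoff equation that the distributed CN-VSA method redistributes in space can be sketched directly (customary US units: depths in inches, CN dimensionless; the 0.2 initial-abstraction ratio is the conventional default).

```python
# Sketch of the traditional SCS curve-number runoff equation:
#   S = 1000/CN - 10,  Ia = 0.2*S,  Q = (P - Ia)^2 / (P - Ia + S)  for P > Ia.
def scs_runoff(p_in, cn, ia_ratio=0.2):
    """Runoff depth Q (inches) from storm depth P (inches) for curve number CN."""
    s = 1000.0 / cn - 10.0          # potential maximum retention (inches)
    ia = ia_ratio * s               # initial abstraction, commonly 0.2 * S
    if p_in <= ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (p_in - ia) ** 2 / (p_in - ia + s)

# Example: 3 inches of rain on a watershed with CN = 75.
print(round(scs_runoff(3.0, 75), 2))  # ≈ 0.96 inches
```

    The distributed CN-VSA idea keeps this bulk relation for total runoff volume but uses a topographic wetness index to decide *where* in the watershed that runoff is generated.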

  12. Open-source hardware for medical devices

    PubMed Central

    2016-01-01

    Open-source hardware is hardware whose design is made publicly available so that anyone can study, modify, distribute, make and sell the design or hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528

  13. Use of source distributions for evaluating theoretical aerodynamics of thin finite wings at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Evvard, John C

    1950-01-01

    A series of publications on the source-distribution methods for evaluating the aerodynamics of thin wings at supersonic speeds is summarized, extended, and unified. The first part includes the derivations of: (a) the linearized partial-differential equation for unsteady flow at a substantially constant Mach number; (b) the source-distribution solution for the perturbation-velocity potential that satisfies the boundary conditions of tangential flow at the surface and in the plane of the wing; and (c) the integral equation for determining the strength and the location of sources to describe the interaction effects (as represented by upwash) of the bottom and top wing surfaces through the region between the finite wing boundary and the foremost Mach wave. The second part deals with steady-state thin-wing problems. The third part of the report approximates the integral equation for unsteady upwash and includes a solution of the approximate equation. Expressions are then derived to evaluate the load distributions for time-dependent finite-wing motions.
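    Items (a) and (b) have standard textbook forms, reproduced here as a hedged reconstruction (not a transcription from the report): the linearized small-perturbation equation for steady flow at Mach number M, and the supersonic source-sheet solution for the perturbation potential.

```latex
% Linearized small-perturbation potential equation (steady form):
\[
(1 - M^{2})\,\phi_{xx} + \phi_{yy} + \phi_{zz} = 0 .
\]
% For supersonic flow, M > 1, writing B^2 = M^2 - 1, the potential induced
% by a source sheet of strength w (the upwash) in the plane z = 0 is, with
% the integral taken over the region S inside the fore Mach cone of (x, y):
\[
\phi(x, y, 0) \;=\; -\frac{1}{\pi} \iint_{S}
  \frac{w(\xi, \eta)\, d\xi\, d\eta}
       {\sqrt{(x - \xi)^{2} - B^{2} (y - \eta)^{2}}} .
\]
```

    Evvard's contribution, summarized in item (c), is the treatment of the region between the wing boundary and the foremost Mach wave, where the upwash is not known a priori and must satisfy an integral equation.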

  14. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    PubMed Central

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take two different types of AE source into account for localization. PMID:24141266

  15. Source apportionment of Baltimore aerosol from combined size distribution and chemical composition data

    NASA Astrophysics Data System (ADS)

    Ogulei, David; Hopke, Philip K.; Zhou, Liming; Patrick Pancras, J.; Nair, Narayanan; Ondov, John M.

    Several multivariate data analysis methods have been applied to a combination of particle size and composition measurements made at the Baltimore Supersite. Partial least squares (PLS) was used to investigate the relationship (linearity) between number concentrations and the measured PM2.5 mass concentrations of chemical species. The data were obtained at the Ponca Street site and consisted of six days' measurements: 6, 7, 8, 18, 19 July and 21 August 2002. The PLS analysis showed that the covariance between the data sets could be explained by 10 latent variables (LVs), but only the first four of these were sufficient to establish the linear relationship between the two data sets; additional LVs did not improve the model. The four LVs were found to better explain the covariance between the large-sized particles and the chemical species. A bilinear receptor model, PMF2, was then used to simultaneously analyze the size distribution and chemical composition data sets. The resolved sources were identified using information from the number and mass contributions of each source (source profiles) as well as meteorological data. Twelve sources were identified: oil-fired power plant emissions, secondary nitrate I, local gasoline traffic, coal-fired power plant, secondary nitrate II, secondary sulfate, diesel emissions/bus maintenance, Quebec wildfire episode, nucleation, incinerator, airborne soil/roadway dust, and steel plant emissions. Local sources were mostly characterized by bi-modal number distributions. Regional sources were characterized by transport-mode particles (0.2–0.5 μm).
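    PMF2 itself is a dedicated weighted-least-squares code; as a hedged stand-in for the bilinear factorization idea (data ≈ contributions × profiles, all nonnegative), the sketch below uses plain Lee-Seung multiplicative NMF updates on a synthetic two-source mixture. It illustrates the model structure only, not PMF2's uncertainty weighting or rotational controls.

```python
# Bilinear receptor-model sketch: factor nonnegative X (samples x species)
# into nonnegative contributions G and profiles F with X ≈ G @ F.
import numpy as np

rng = np.random.default_rng(0)

def nmf(x, n_factors, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for nonnegative matrix factorization."""
    n, m = x.shape
    g = rng.random((n, n_factors))
    f = rng.random((n_factors, m))
    for _ in range(n_iter):
        f *= (g.T @ x) / (g.T @ g @ f + eps)
        g *= (x @ f.T) / (g @ f @ f.T + eps)
    return g, f

# Synthetic 2-source mixture (profiles and contributions are made up):
true_f = np.array([[5.0, 1.0, 0.1],
                   [0.2, 2.0, 4.0]])
true_g = rng.random((100, 2))
x = true_g @ true_f

g, f = nmf(x, 2)
err = np.linalg.norm(x - g @ f) / np.linalg.norm(x)
print(err)  # relative reconstruction error; small for this exact-rank case
```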

  16. On the inclusion of mass source terms in a single-relaxation-time lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Aursjø, Olav; Jettestuen, Espen; Vinningland, Jan Ludvig; Hiorth, Aksel

    2018-05-01

    We present a lattice Boltzmann algorithm for incorporating a mass source in a fluid flow system. The proposed mass source/sink term, included in the lattice Boltzmann equation, maintains the Galilean invariance and the accuracy of the overall method while introducing a mass source/sink term in the fluid dynamical equations. The method can, for instance, be used to inject or withdraw fluid at any preferred lattice node in a system. This means that injection and withdrawal of fluid do not have to be introduced through cumbersome, and sometimes less accurate, boundary conditions. It also means that, through a chosen equation of state relating mass density to pressure, the proposed mass source term makes it possible to set a preferred pressure at any lattice node in a system. We demonstrate how this model handles injection and withdrawal of a fluid, and we show how it can be used to incorporate pressure boundaries. The accuracy of the algorithm is established through a Chapman-Enskog expansion of the model and supported by numerical simulations.
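    The basic idea of injecting mass at a chosen node can be sketched with a minimal D2Q9 BGK code. The source term below is the simplest zeroth-order choice, S_i = w_i * m; the paper's actual term is constructed more carefully to preserve Galilean invariance and second-order accuracy, so this is only an illustration of node-local mass injection.

```python
# Minimal D2Q9 single-relaxation-time (BGK) sketch with a per-node mass
# source added after collision as S_i = w_i * m_dot (assumption: zeroth-order
# source, periodic boundaries, tau = 1).
import numpy as np

w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
nx = ny = 16
tau = 1.0
f = np.ones((9, nx, ny)) * w[:, None, None]      # rho = 1 everywhere, u = 0

def equilibrium(rho, ux, uy):
    cu = cx[:, None, None] * ux + cy[:, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

m_dot = 1e-3                                     # mass injected per time step
for _ in range(100):
    rho = f.sum(axis=0)
    ux = (cx[:, None, None] * f).sum(axis=0) / rho
    uy = (cy[:, None, None] * f).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau   # BGK collision
    f[:, nx // 2, ny // 2] += w * m_dot          # mass source at center node
    for i in range(9):                           # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx[i], axis=0), cy[i], axis=1)

total_mass = f.sum()
print(total_mass - nx * ny)  # injected mass: 100 steps * m_dot = 0.1
```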

  17. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza

    The source-count distribution of gamma-ray sources as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so-far-undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 x 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/-1.3) x 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1 (+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 x 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
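    A broken power-law dN/dS of the kind quoted above can be integrated numerically to see how strongly faint sources dominate the cumulative counts N(>S). The indices below follow the values in the abstract; the normalization, flux limits, and integration grid are arbitrary choices for illustration.

```python
# Hedged sketch: cumulative source counts N(>S) from a broken power-law dN/dS
# with n1 = 3.1 above the break and n2 = 1.97 below (normalization arbitrary).
import math

def dnds(s, s_b=2.1e-8, n1=3.1, n2=1.97, a=1.0):
    """dN/dS ∝ (S/S_b)^-n2 below the break and (S/S_b)^-n1 above it."""
    return a * (s / s_b) ** (-n2 if s < s_b else -n1)

def n_greater(s_min, s_max=1e-5, steps=20000):
    """N(>S_min) by trapezoidal integration of dN/dS on a log-spaced grid."""
    logs = [math.log(s_min) + i * (math.log(s_max) - math.log(s_min)) / steps
            for i in range(steps + 1)]
    total = 0.0
    for lo, hi in zip(logs, logs[1:]):
        s0, s1 = math.exp(lo), math.exp(hi)
        total += 0.5 * (dnds(s0) + dnds(s1)) * (s1 - s0)
    return total

# With n2 < 2, N(>S) stays finite but grows steeply toward faint fluxes:
ratio = n_greater(2e-11) / n_greater(2e-9)
print(ratio)  # roughly (100)^(n2 - 1) ≈ 10^2 more sources per 2 decades
```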

  18. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-26

    The source-count distribution of gamma-ray sources as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so-far-undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 x 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/-1.3) x 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1 (+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 x 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.

  19. Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco

    2016-08-01

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  20. Temperature distribution of a simplified rotor due to a uniform heat source

    NASA Astrophysics Data System (ADS)

    Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver

    2018-03-01

    In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged by cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy-to-implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while the calculated maximum temperatures show discrepancies at higher sliding velocities. The analytical approach allows a conservative estimate of the maximum temperatures arising in labyrinth seal fins during rubbing events, while avoiding high computational costs.

  1. Spatial distribution and sources of heavy metals in natural pasture soil around copper-molybdenum mine in Northeast China.

    PubMed

    Wang, Zhiqiang; Hong, Chen; Xing, Yi; Wang, Kang; Li, Yifei; Feng, Lihui; Ma, Silu

    2018-06-15

    The characterization of the content and sources of heavy metals is essential to assess the potential threat of metals to human health. The present study collected 140 topsoil samples around a Cu-Mo mine (Wunugetushan, China) and investigated the concentrations and spatial distribution patterns of Cr, Ni, Zn, Cu, Mo and Cd in soil using multivariate and geostatistical analytical methods. Results indicated that the average concentrations of the six heavy metals, especially Cu and Mo, were markedly higher than the local background values. Correlation analysis and principal component analysis divided these metals into three groups: Cr and Ni, Cu and Mo, and Zn and Cd. Meanwhile, the spatial distribution maps indicated that Cr and Ni in soil showed no notable anthropogenic inputs and were mainly controlled by natural factors, because their spatial maps exhibited non-point-source patterns. The concentrations of Cu and Mo gradually decreased with distance from the mine area, suggesting that mining activities may be crucial in the spreading of contaminants. Zn contamination of soil was associated with livestock manure produced from grazing. In addition, the environmental risk of heavy metal pollution was assessed by the geo-accumulation index. All the results revealed that the spatial distribution of heavy metals in soil was in agreement with local human activities. Investigating and identifying the origin of heavy metals in pasture soil lays the foundation for taking effective measures to protect soil from the long-term accumulation of heavy metals. Copyright © 2018 Elsevier Inc. All rights reserved.
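    The geo-accumulation index used for the risk assessment above has the standard Müller form Igeo = log2(Cn / (1.5 · Bn)); the sketch below applies it with hypothetical concentrations and the conventional class boundaries.

```python
# Sketch of the geo-accumulation index: Cn is the measured concentration,
# Bn the local background, and the factor 1.5 absorbs natural variability.
import math

def igeo(conc, background):
    return math.log2(conc / (1.5 * background))

# Müller's conventional classes: Igeo <= 0 unpolluted, up to > 5 extreme.
def igeo_class(value):
    bounds = [0, 1, 2, 3, 4, 5]
    labels = ["unpolluted", "unpolluted to moderately polluted",
              "moderately polluted", "moderately to heavily polluted",
              "heavily polluted", "heavily to extremely polluted",
              "extremely polluted"]
    for b, lab in zip(bounds, labels):
        if value <= b:
            return lab
    return labels[-1]

# Hypothetical Cu sample: 180 mg/kg against a 30 mg/kg background value.
v = igeo(180, 30)
print(v, igeo_class(v))  # 2.0 -> "moderately polluted"
```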

  2. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples

    DOE PAGES

    Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.

    2016-01-18

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.

  3. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.

  4. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples.

    PubMed

    Snow, Mathew S; Snyder, Darin C; Delmore, James E

    2016-02-28

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1-3 and spent fuel ponds 1-4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100-250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which (135)Cs/(137) Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. (135)Cs/(137)Cs isotope ratios from samples 100-250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. (135)Cs/(137)Cs versus (134)Cs/(137)Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. Cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.
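    The mixing interpretation behind the correlation plots can be sketched with two-end-member mixing in 135Cs/137Cs space: a mixture's ratio is the 137Cs-weighted mean of the end-member ratios. The end-member ratios below are hypothetical illustrations, not the cores' actual values.

```python
# Hedged sketch of two-end-member isotope-ratio mixing (ratios are weighted
# by the denominator isotope, 137Cs). End-member values are made up.

def mixed_ratio(r1, r2, f1_cs137):
    """Ratio of a mixture; f1_cs137 is end-member 1's share of total 137Cs."""
    return f1_cs137 * r1 + (1.0 - f1_cs137) * r2

def fraction_from_ratio(r_obs, r1, r2):
    """Invert the mixing relation for end-member 1's 137Cs fraction."""
    return (r_obs - r2) / (r1 - r2)

# Hypothetical core ratios 0.33 and 0.40; observed sample ratio 0.376.
f1 = fraction_from_ratio(0.376, 0.33, 0.40)
print(round(f1, 3))  # ≈ 0.343
```

    With three possible cores, a single ratio constrains only a one-parameter family of mixtures, which is why the abstract pairs 135Cs/137Cs with 134Cs/137Cs in a correlation plot.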

  5. Distribution functions of air-scattered gamma rays above isotropic plane sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael, J A; Lamonds, H A

    1967-06-01

    Using the moments method of Spencer and Fano and a reconstruction technique suggested by Berger, the authors have calculated energy and angular distribution functions for air-scattered gamma rays emitted from infinite-plane isotropic monoenergetic sources as functions of source energy, radiation incidence angle at the detector, and detector altitude. Incremental and total buildup factors have been calculated for both number and exposure. The results are presented in tabular form for a detector located at altitudes of 3, 50, 100, 200, 300, 400, 500, and 1000 feet above source planes of 15 discrete energies spanning the range of 0.1 to 3.0 MeV. Calculational techniques, including the results of sensitivity studies, are discussed and plots of typical results are presented.

  6. Distributed Sensing for Quickest Change Detection of Point Radiation Sources

    DTIC Science & Technology

    2017-02-01

    ...the change point occurs simultaneously at all sensor nodes, thus neglecting signal propagation delays. For nuclear radiation, the observation period, which is on... Cited: "...nuclear radiation using a sensor network," in Homeland Security (HST), 2012 IEEE Conference on Technologies for Homeland Security, IEEE, 2012, pp. 648–653; [8] G. Lorden... Authors: Gene T. Whipps, Emre Ertin, Randolph L. Moses (The Ohio State...)
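    The canonical single-sensor version of quickest change detection is Page's CUSUM procedure (the Lorden-optimal test cited in this record's reference fragment). The sketch below detects a shift in the mean of Gaussian observations; the sensor-fusion details of the report are not reproduced, and all parameters are illustrative.

```python
# Page's CUSUM for a known mean shift in Gaussian data: accumulate the
# log-likelihood ratio increments, clip at zero, alarm at a threshold.
import random

random.seed(7)

def cusum_detect(samples, mu0, mu1, sigma, threshold):
    """Return the first index where the CUSUM statistic crosses threshold."""
    scale = (mu1 - mu0) / sigma ** 2
    s = 0.0
    for k, x in enumerate(samples):
        # LLR increment for N(mu1, sigma) vs N(mu0, sigma), clipped at 0
        s = max(0.0, s + scale * (x - (mu0 + mu1) / 2.0))
        if s >= threshold:
            return k
    return None

# Background mean 1.0; a "radiation source" raises the mean to 2.0 at t = 200.
data = [random.gauss(1.0, 1.0) for _ in range(200)]
data += [random.gauss(2.0, 1.0) for _ in range(200)]
alarm = cusum_detect(data, mu0=1.0, mu1=2.0, sigma=1.0, threshold=10.0)
print(alarm)  # detection index (None if never detected)
```

    The threshold trades false-alarm rate against detection delay; distributed versions fuse such statistics across sensor nodes.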

  7. North Slope, Alaska: Source rock distribution, richness, thermal maturity, and petroleum charge

    USGS Publications Warehouse

    Peters, K.E.; Magoon, L.B.; Bird, K.J.; Valin, Z.C.; Keller, M.A.

    2006-01-01

    Four key marine petroleum source rock units were identified, characterized, and mapped in the subsurface to better understand the origin and distribution of petroleum on the North Slope of Alaska. These marine source rocks, from oldest to youngest, include four intervals: (1) Middle-Upper Triassic Shublik Formation, (2) basal condensed section in the Jurassic-Lower Cretaceous Kingak Shale, (3) Cretaceous pebble shale unit, and (4) Cretaceous Hue Shale. Well logs for more than 60 wells and total organic carbon (TOC) and Rock-Eval pyrolysis analyses for 1183 samples in 125 well penetrations of the source rocks were used to map the present-day thickness of each source rock and the quantity (TOC), quality (hydrogen index), and thermal maturity (Tmax) of the organic matter. Based on assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original TOC (TOCo) and the original hydrogen index (HIo) prior to thermal maturation. The quantity and quality of oil-prone organic matter in Shublik Formation source rock generally exceeded that of the other units prior to thermal maturation (commonly TOCo > 4 wt.% and HIo > 600 mg hydrocarbon/g TOC), although all are likely sources for at least some petroleum on the North Slope. We used Rock-Eval and hydrous pyrolysis methods to calculate expulsion factors and petroleum charge for each of the four source rocks in the study area. Without attempting to identify the correct methods, we conclude that calculations based on Rock-Eval pyrolysis overestimate expulsion factors and petroleum charge because low pressure and rapid removal of thermally cracked products by the carrier gas retards cross-linking and pyrobitumen formation that is otherwise favored by natural burial maturation. Expulsion factors and petroleum charge based on hydrous pyrolysis may also be high.
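    The quantities mapped above can be sketched numerically: the hydrogen index is HI = 100 · S2 / TOC (mg hydrocarbon per g TOC), and a common simplified carbon-mass-balance estimate of kerogen fractional conversion compares present-day HI with the original HIo. The conversion formula below is a standard simplified form and the sample values are hypothetical; the study's actual mass-balance treatment may differ.

```python
# Hedged sketch of Rock-Eval-derived quantities (all inputs hypothetical).

def hydrogen_index(s2_mg_per_g_rock, toc_wt_pct):
    """HI = 100 * S2 / TOC, in mg hydrocarbon per g TOC."""
    return 100.0 * s2_mg_per_g_rock / toc_wt_pct

def fractional_conversion(hi_o, hi_pd):
    """Simplified mass-balance transformation ratio:
    f = 1 - HI_pd * (1200 - HI_o) / (HI_o * (1200 - HI_pd)),
    where 1200 ≈ mg hydrocarbon per g of carbon (assumed constant)."""
    return 1.0 - (hi_pd * (1200.0 - hi_o)) / (hi_o * (1200.0 - hi_pd))

# Hypothetical mature Shublik-like sample: S2 = 8 mg HC/g rock, TOC = 4 wt.%,
# original HIo = 600 mg HC/g TOC.
hi_pd = hydrogen_index(8.0, 4.0)          # 200 mg HC/g TOC
f = fractional_conversion(600.0, hi_pd)
print(hi_pd, round(f, 2))  # 200.0, 0.8
```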

  8. Measurement-device-independent quantum key distribution with multiple crystal heralded source with post-selection

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Shang-Hong, Zhao; MengYi, Deng

    2018-03-01

    The multiple-crystal heralded source with post-selection (MHPS), originally introduced to improve the single-photon character of the heralded source, has specific applications for quantum information protocols. In this paper, by combining decoy-state measurement-device-independent quantum key distribution (MDI-QKD) with the spontaneous parametric down-conversion process, we present a modified MDI-QKD scheme with an MHPS, for which two architectures are proposed: a symmetric scheme and an asymmetric scheme. The symmetric scheme, whose crystals are linked by photon switches in a log-tree structure, is adopted to overcome the limitation imposed by the current low efficiency of m-to-1 optical switches. The asymmetric scheme, which has a chained structure, is used to cope with the scalability issue the symmetric scheme suffers as the number of crystals increases. The numerical simulations show that our modified scheme has clear advantages in both transmission distance and key generation rate compared to the original MDI-QKD with a weak coherent source or a traditional heralded source with post-selection. Furthermore, recent advances in integrated photonics suggest that, if built into a single chip, the MHPS might be a practical alternative source for quantum key distribution tasks requiring single photons.
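    The basic benefit of multiplexing heralded SPDC crystals can be sketched with a back-of-envelope probability: if each crystal heralds a photon with probability p per pulse, a switched m-crystal source succeeds whenever at least one crystal heralds. This idealized sketch ignores switch loss, which is exactly the practical issue the log-tree and chained architectures above address differently.

```python
# Idealized heralding probability for m multiplexed crystals (assumption:
# independent crystals, lossless switching): P = 1 - (1 - p)^m.

def herald_success(p, m):
    """Probability that at least one of m crystals heralds in a pulse."""
    return 1.0 - (1.0 - p) ** m

for m in (1, 2, 4, 8):
    print(m, round(herald_success(0.1, m), 3))
```

    Keeping p small suppresses unwanted multi-photon emission per crystal, while the multiplexing recovers an acceptable overall heralding rate.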

  9. Elevated Natural Source Water Ammonia and Nitrification in the Distribution Systems of Four Water Utilities

    EPA Science Inventory

    Nitrification in drinking water distribution systems is a concern of many drinking water systems. Although chloramination as a source of nitrification (i.e., addition of excess ammonia or breakdown of chloramines) has drawn the most attention, many source waters contain signific...

  10. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    NASA Astrophysics Data System (ADS)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

    Red and near-infrared light is often used as a diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light carries information about the tissue subsurface, whereas light recorded at larger distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires considering both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples covered with a thin, highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified or unmodified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin on light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
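    The Henyey-Greenstein phase function mentioned above has a closed-form inverse CDF, which is what makes it popular in Monte Carlo light-transport codes. A minimal sampling routine (not the authors' code; names and the value of g are illustrative):

```python
import random

def sample_hg_cos_theta(g, rng):
    """Draw cos(theta) for a scattering event from the Henyey-Greenstein
    phase function with anisotropy factor g in (-1, 1)."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

# The mean cosine of the sampled angles converges to g (here g = 0.9,
# typical of strongly forward-scattering biological tissue).
rng = random.Random(0)
samples = [sample_hg_cos_theta(0.9, rng) for _ in range(100_000)]
mean_cos = sum(samples) / len(samples)
```

    The modified Henyey-Greenstein and Gegenbauer phase functions used in the paper change the sampling formula but follow the same inverse-CDF pattern.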

  11. A Web-based open-source database for the distribution of hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Ferwerda, J. G.; Jones, S. D.; Du, Pei-Jun

    2006-10-01

    With the coming of age of field spectroscopy as a non-destructive means to collect information on the physiology of vegetation, there is a need for storage of signatures and, more importantly, their metadata. Without properly organised metadata, the signatures themselves are of limited use. In order to facilitate re-distribution of data, a database for the storage and distribution of hyperspectral signatures and their metadata was designed. The database was built using open-source software and can be used by the hyperspectral community to share their data. Data are uploaded through a simple web-based interface. The database recognizes the major file formats of ASD, GER and International Spectronics. The database source code is available for download through the hyperspectral.info web domain, and we invite suggestions for additions and modifications to the database through the online forums on the same website.

  12. Porous elastic system with nonlinear damping and source terms

    NASA Astrophysics Data System (ADS)

    Freitas, Mirelson M.; Santos, M. L.; Langa, José A.

    2018-02-01

    We study the long-time behavior of a porous-elastic system, focusing on the interplay between nonlinear damping and source terms. The sources may represent restoring forces, but may also be focusing, thus potentially amplifying the total energy, which is the primary scenario of interest. By employing nonlinear semigroups and the theory of monotone operators, we obtain several results on the existence of local and global weak solutions, and on the uniqueness of weak solutions. Moreover, we prove that such unique solutions depend continuously on the initial data. Under some restrictions on the parameters, we also prove that every weak solution to our system blows up in finite time, provided the initial energy is negative and the sources dominate the damping in the system. Additional results are obtained via careful analysis involving the Nehari manifold. Specifically, we prove the existence of a unique global weak solution with initial data coming from the "good" part of the potential well. For such a global solution, we prove that the total energy of the system decays exponentially or algebraically, depending on the behavior of the dissipation in the system near the origin. We also prove the existence of a global attractor.

  13. [Influence of water source switching on water quality in drinking water distribution system].

    PubMed

    Wang, Yang; Niu, Zhang-bin; Zhang, Xiao-jian; Chen, Chao; He, Wen-jie; Han, Hong-da

    2007-10-01

    This study investigates the regularity of changes in the physical and chemical water quality in the distribution system during a water source switch in city A. Because of the switch and the differences between the two sources, the water quality became chemically unstable: pH decreased from 7.54 to 7.18, alkalinity from 188 mg·L(-1) to 117 mg·L(-1), chloride (Cl(-)) from 310 mg·L(-1) to 132 mg·L(-1), and conductivity from 0.176 S·m(-1) to 0.087 S·m(-1), while calcium and magnesium decreased to 15 mg·L(-1) and 11 mg·L(-1), respectively. Residual chlorine varied as chlorine demand increased and water consumption decreased at night, and the changes in pH, alkalinity and residual chlorine caused iron to peak at 0.4 mg·L(-1), exceeding the standard. The influence of these changes on the chemical stability of water in the drinking water distribution system is analyzed, and countermeasures are proposed: increasing pH, dosing phosphate, and enhancing the quality of water in the distribution system, especially residual chlorine.

  14. Polybrominated diphenyl ethers in residential and agricultural soils from an electronic waste polluted region in South China: distribution, compositional profile, and sources.

    PubMed

    Zhang, Shaohui; Xu, Xijin; Wu, Yousheng; Ge, Jingjing; Li, Weiqiu; Huo, Xia

    2014-05-01

    A detailed investigation was conducted to understand the concentrations, distribution, profiles and possible sources of polybrominated diphenyl ethers (PBDEs) in residential and agricultural soils from Guiyu, Shantou, China, one of the largest electronic waste (e-waste) recycling and dismantling areas in the world. Ten PBDEs were analyzed in 46 surface soil samples in terms of individual and total concentrations, together with soil organic matter concentrations. Much higher total PBDE concentrations were found in the residential areas (more than 2000 ng g(-1)), exhibiting a clear urban source, whereas concentrations in the agricultural areas were below 1500 ng g(-1). PBDE-209 was the dominant congener at all study sites, indicating the prevalence of commercial deca-PBDE; however, signature congeners from commercial octa-PBDE were also found. The total PBDE concentrations were significantly correlated with each individual PBDE. Principal component analysis indicated that the PBDEs fell into three groups according to the number of bromine atoms on the phenyl rings and their potential sources. This study shows that informal e-waste recycling has already introduced PBDEs as pollutants into the surrounding areas, which warrants an urgent investigation into the transport of PBDEs in the soil-plant system of agricultural areas. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Long-term changes of tree species composition and distribution in Korean mountain forests

    NASA Astrophysics Data System (ADS)

    Lee, Boknam; Lee, Hoontaek; Cho, Sunhee; Yoon, Jongguk; Park, Jongyoung; Kim, Hyun Seok

    2017-04-01

    Long-term changes in the abundance and distribution of tree species in the temperate forests of South Korea remain poorly understood. We investigated how tree species composition and stand distribution change across temperate mountainous forests using species composition and DBH data collected over the past 15 years (1998-2012) across 130 permanent 0.1-ha forest plots in the Jiri and Baegun mountains in South Korea. The overall net change of tree communities over the years was positive in terms of stand density, richness, diversity, and evenness. At the species level, the change in relative species composition was led by intermediate and shade-tolerant species, such as Quercus mongolica, Carpinus laxiflora, Quercus serrata, Quercus variabilis, Styrax japonicus, Lindera erythrocarpa, and Pinus densiflora, and was categorized into five species communities representing gradual increase or decrease, establishment, extinction, or fluctuation of species populations. At the community level, the change in species composition showed consistent and directional patterns of increase in the annual rate of change of mean species traits, including species density, pole growth rate, adult growth rate, and adult stature. Based on additive models, the distribution of species diversity was significantly related to topographical variables including elevation, latitude, longitude, slope, topographic wetness index, and curvature, with elevation the most significant driver, followed by latitude and longitude. However, the change in the distribution of species diversity was significantly influenced only by latitude and longitude. This is the first study to reveal the long-term dynamics of change in tree species composition and distribution, which are important to broaden our understanding of the temperate mountainous forest ecosystem in South Korea.

  16. Source and long-term behavior of transuranic aerosols in the WIPP environment.

    PubMed

    Thakur, P; Lemons, B G

    2016-10-01

    The source and long-term behavior of transuranic aerosols ((239+240)Pu, (238)Pu, and (241)Am) in ambient air samples collected at and near the Waste Isolation Pilot Plant (WIPP) deep geologic repository site were investigated using historical data from an independent monitoring program conducted by the Carlsbad Environmental Monitoring and Research Center and an oversight monitoring program conducted by the management and operating contractor for WIPP at and near the facility. An analysis of the historical data indicates frequent detections of (239+240)Pu and (241)Am, whereas (238)Pu is detected infrequently. Peaks in (239+240)Pu and (241)Am concentrations in ambient air generally occur in the March-June timeframe, when strong and gusty winds in the area frequently give rise to blowing dust. Long-term measurements of plutonium isotopes (1985-2015) in the WIPP environment suggest that resuspension of previously contaminated soils is likely the primary source of plutonium in the ambient air samples from WIPP and its vicinity. There is no evidence that WIPP is a source of environmental contamination that can be considered significant by any health-based standard.

  17. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-29

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of $${2.2}_{-0.3}^{+0.7}$$ in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain $${83}_{-13}^{+7}$$% ($${81}_{-19}^{+52}$$%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
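    A broken power law for dN/dS, as fitted above, can be written down directly and integrated to give the expected number of sources above a flux threshold. A minimal sketch (the normalization, break flux and indices below are illustrative placeholders, not the paper's best-fit values):

```python
import numpy as np

def dnds(S, A, S_b, n1, n2):
    """Broken power-law differential source counts dN/dS:
    index n1 above the break flux S_b, index n2 below it."""
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b, A * (S / S_b) ** (-n1), A * (S / S_b) ** (-n2))

def n_brighter_than(S_min, A, S_b, n1, n2, S_max=1e-3, num=200_000):
    """Expected number of sources with flux above S_min, by trapezoidal
    integration of dN/dS on a log-spaced grid up to S_max."""
    grid = np.logspace(np.log10(S_min), np.log10(S_max), num)
    return np.trapz(dnds(grid, A, S_b, n1, n2), grid)

# Illustrative parameters: break at S_b = 1e-8, index 2.2 above, 1.95 below.
A, S_b, n1, n2 = 1.0e8, 1.0e-8, 2.2, 1.95
```

    For an index of exactly 2 above the break, the integral above S_b has the closed form A·S_b·(1 − S_b/S_max), which is a convenient sanity check on the numerical integration.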

  19. Volatile Organic Compounds: Characteristics, distribution and sources in urban schools

    NASA Astrophysics Data System (ADS)

    Mishra, Nitika; Bartsch, Jennifer; Ayoko, Godwin A.; Salthammer, Tunga; Morawska, Lidia

    2015-04-01

    Long-term exposure to organic pollutants, both inside and outside school buildings, may affect children's health and influence their learning performance. Since children spend a significant amount of time in school, air quality, especially in classrooms, plays a key role in determining the health risks associated with exposure at schools. Within this context, the present study investigated the ambient concentrations of Volatile Organic Compounds (VOCs) in 25 primary schools in Brisbane, with the aims of quantifying indoor and outdoor VOC concentrations, identifying VOC sources and their contributions, and, based on these, proposing mitigation measures to reduce VOC exposure in schools. One of the most important findings is the occurrence of indoor sources, indicated by an I/O ratio >1 in 19 schools. Principal Component Analysis with Varimax rotation was used to identify common sources of VOCs, and source contributions were calculated using the Absolute Principal Component Scores technique. The results showed that petrol vehicle exhaust contributed 47% of outdoor VOCs, whereas indoors cleaning products had the highest contribution (41%), followed by air fresheners and art and craft activities. These findings point to the need for a range of basic precautions during the selection, use and storage of cleaning products and materials to reduce the risk from these sources.
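    The PCA-with-varimax step used above can be sketched in a few lines: extract component loadings from standardized data, then apply the classic orthogonal varimax rotation. The data here are synthetic and the subsequent Absolute Principal Component Scores contribution calculation is omitted; this is a sketch of the rotation only, not the study's pipeline:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a (variables x factors) loading matrix
    (Kaiser's criterion, solved by iterated SVD)."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion.
        G = loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
        u, s, vt = np.linalg.svd(G)
        R = u @ vt
        var_new = s.sum()
        if var_new < var_old * (1.0 + tol):
            break
        var_old = var_new
    return loadings @ R

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                 # 200 samples x 6 VOC species (synthetic)
Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize
U, sv, Vt = np.linalg.svd(Z, full_matrices=False)
loadings = (Vt.T * sv)[:, :3] / np.sqrt(len(Z))   # keep 3 components
rotated = varimax(loadings)
```

    Because the rotation is orthogonal, each variable's communality (row sum of squared loadings) is unchanged; only the distribution of loading across factors is simplified, which is what makes the rotated factors easier to interpret as sources.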

  20. Distribution and Sources of Black Carbon in the Arctic

    NASA Astrophysics Data System (ADS)

    Qi, Ling

    scavenging efficiency. In this dissertation, we relate WBF to temperature and ice mass fraction based on long-term observations in mixed-phase clouds. We find that WBF reduces BC scavenging efficiency globally, with larger decreases at higher latitudes and altitudes (from 8% in the tropics to 76% in the Arctic). WBF slows down and reduces wet deposition of BC and leaves more BC in the atmosphere. Higher atmospheric BC results in larger dry deposition. The resulting total deposition is lower in mid-latitudes (by 12-34%) and higher in the Arctic (2-29%). Globally, including WBF significantly reduces the discrepancies in BC in snow (by 50%), BC in air (by 50%), and washout ratios (by a factor of two to four). The remaining discrepancies in these variables suggest that in-cloud removal is likely still excessive over land. In the last part, we identify sources of surface atmospheric BC in the Arctic in springtime, when radiative forcing is largest due to the high insolation and surface albedo. We find a large contribution from Asian anthropogenic sources (40-43%) and open biomass burning emissions from forest fires in South Siberia (29-41%). Outside the Arctic front, BC is strongly enhanced by episodic, direct transport events from Asia and Siberia after 12 days of transport. In contrast, in the Arctic front, a large fraction of the Asian contribution is in the form of 'chronic' pollution on a 1-2 month timescale. As such, it is likely that previous studies using 5- or 10-day trajectory analyses strongly underestimated the contribution from Asia to surface BC in the Arctic. Our results point toward an urgent need for better characterization of flaring emissions of BC (e.g. the emission factors, temporal and spatial distribution), extensive measurements of both the dry deposition of BC over snow and ice, and the scavenging efficiency of BC in mixed-phase clouds, particularly over the ocean.
More measurements of 14C are needed to better understand sources of BC (fossil fuel combustion versus biomass

  1. Annual Rates on Seismogenic Italian Sources with Models of Long-Term Predictability for the Time-Dependent Seismic Hazard Assessment In Italy

    NASA Astrophysics Data System (ADS)

    Murru, Maura; Falcone, Giuseppe; Console, Rodolfo

    2016-04-01

    The present study is carried out in the framework of the Center for Seismic Hazard (CPS) of INGV, under the agreement signed in 2015 with the Department of Civil Protection for developing a new seismic hazard model of the country that can update the current reference (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative, we participate with the Long-Term Stress Transfer (LTST) Model to provide the annual occurrence rate of a seismic event over the entire Italian territory, from a minimum magnitude of Mw 4.5, considering bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology is based on the fusion of a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that considers the permanent stress change that a seismogenic source undergoes as a result of earthquakes occurring on surrounding sources. For each considered catalog (historical, instrumental and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 yrs. If a cell falls within one of the sources in question, we adopted the respective rate value, which refers only to the characteristic-event magnitude. This rate value is divided by the number of grid cells that fall on the horizontal projection of the source. If instead a cell falls outside any seismic source, we used the average rate obtained from the historical and instrumental catalogs, following the method of Frankel (1995). The annual occurrence rate was computed for each of the three considered distributions (Poisson, BPT, and BPT with inclusion of stress transfer).
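    The BPT renewal model referenced above (Matthews et al., 2002) is the inverse-Gaussian distribution with mean recurrence time mu and aperiodicity alpha. A minimal sketch of the conditional probability of rupture in the next dt years given t years elapsed since the last event; the parameter values are illustrative, not those of any Italian source:

```python
import math

def _norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution
    with mean mu and aperiodicity alpha (shape parameter lam = mu/alpha^2)."""
    if t <= 0.0:
        return 0.0
    lam = mu / alpha**2
    a = math.sqrt(lam / t)
    return _norm_cdf(a * (t / mu - 1.0)) + math.exp(2.0 * lam / mu) * _norm_cdf(
        -a * (t / mu + 1.0)
    )

def conditional_prob(t_elapsed, dt, mu, alpha):
    """P(event in (t, t+dt] | no event up to t) for the BPT renewal model."""
    F = bpt_cdf
    return (F(t_elapsed + dt, mu, alpha) - F(t_elapsed, mu, alpha)) / (
        1.0 - F(t_elapsed, mu, alpha)
    )

# A fault with mean recurrence 1000 yr and alpha = 0.5: the 50-yr conditional
# probability grows as elapsed time approaches the mean recurrence.
p_early = conditional_prob(100.0, 50.0, 1000.0, 0.5)
p_late = conditional_prob(900.0, 50.0, 1000.0, 0.5)
```

    This time-dependence is what distinguishes the BPT branch of the hazard calculation from the memoryless Poisson branch, for which the 50-yr probability is the same regardless of elapsed time.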

  2. Stability metrics for multi-source biomedical data based on simplicial projections from probability distribution distances.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M

    2017-02-01

    Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions and defined in the context of data quality assessment: a global probabilistic deviation and a source probabilistic outlyingness. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics have been evaluated and demonstrated correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
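    The simplex construction above starts from pairwise Jensen-Shannon distances among the source PDFs. A minimal sketch of that first step for discrete (histogram) PDFs; the projection itself and the two proposed metrics are not reproduced here:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete PDFs
    (base-2 logs, so the value is bounded by 1)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0          # 0 * log(0) contributes nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def pairwise_js_distance(dists):
    """Symmetric matrix of JS distances (the square root of the
    divergence, which is a true metric) among source PDFs."""
    n = len(dists)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.sqrt(js_divergence(dists[i], dists[j]))
    return D
```

    Because the JS distance is bounded and metric, the resulting distance matrix can be embedded in a simplex, which is what makes the paper's normalized, bounded stability metrics possible.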

  3. Organic aerosols over Indo-Gangetic Plain: Sources, distributions and climatic implications

    NASA Astrophysics Data System (ADS)

    Singh, Nandita; Mhawish, Alaa; Deboudt, Karine; Singh, R. S.; Banerjee, Tirthankar

    2017-05-01

    Organic aerosol (OA) constitutes a dominant fraction of airborne particulates over the Indo-Gangetic Plain (IGP), especially during post-monsoon and winter. Its exposure has been associated with adverse health effects, while there is evidence of its interference with Earth's radiation balance and cloud condensation (CC), possibly altering the hydrological cycle. The presence and effects of OA therefore link it directly with food security and, thereby, sustainability issues. In these contexts, the atmospheric chemistry governing the formation, volatility and aging of primary OA (POA) and secondary OA (SOA) is reviewed with specific reference to the IGP. Systematic reviews of the science of OA sources, evolution and climate perturbations are presented, with data collected from 82 publications on the IGP through 2016. Both gaseous- and aqueous-phase chemical reactions were studied in terms of their potential to form SOA. Efforts were made to characterize the regional variation of OA, its chemical constituents and sources throughout the IGP, and inferences were made on its possible impacts on regional air quality. The OA mass fraction of airborne particulate varied spatially: Lahore (37 and 44% in fine and coarse fractions, respectively), Patiala (28 and 37%), Delhi (25 and 38%), Kanpur (24 and 30%), Kolkata (11 and 21%) and Dhaka. Source apportionment studies indicate biomass burning, coal combustion and vehicular emissions as the predominant OA sources. However, the sources show considerable seasonal variation, with gasoline and diesel emissions dominating during summer and coal- and biomass-based emissions during winter and post-monsoon. Crop residue burning over the upper IGP was also frequently held responsible for massive OA emission, mostly characterized by its hygroscopic nature and thus having the potential to act as CC nuclei. Finally, the climatic implications of particulate-bound OA are discussed in terms of its interaction with the radiation balance.

  4. Source-term development for a contaminant plume for use by multimedia risk assessment models

    NASA Astrophysics Data System (ADS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    2000-02-01

    Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.

  5. Polycyclic aromatic hydrocarbons in the urban atmosphere of Nepal: Distribution, sources, seasonal trends, and cancer risk.

    PubMed

    Pokhrel, Balram; Gong, Ping; Wang, Xiaoping; Wang, Chuanfei; Gao, Shaoping

    2018-03-15

    Atmospheric polycyclic aromatic hydrocarbons (PAHs) in urban areas have always been a global concern, as these areas are considered to be source regions. Despite studies on the concentrations of PAHs in water, soils and sediments, knowledge of the distribution patterns, seasonality and sources of PAHs in urban areas of Nepal remains limited. In this study, polyurethane foam passive air samplers were used to measure gas-phase PAH concentrations over different land types in three major cities of Nepal, namely Kathmandu (the capital) and Pokhara (both densely populated cities), and Hetauda (an agricultural city). The average concentrations of ∑15PAHs in ng/m3 were 16.1±7.0 (6.4-28.6), 14.1±6.2 (6.8-29.4) and 11.1±9.0 (4.1-38.0) in Kathmandu, Pokhara and Hetauda, respectively. Molecular diagnostic ratio analysis suggested that fossil fuel combustion was a common PAH source for all three cities. In addition, coal combustion in Kathmandu, vehicle emissions in Pokhara, and grass/wood combustion in Hetauda were also possible sources of PAHs. In terms of cancer risk from PAH inhalation, a religious site with intense incense burning, a brick production area where extensive coal combustion is common, and a market place with heavy traffic emissions were associated with a higher risk than other areas. There were no clear seasonal trends in atmospheric PAHs. The estimated cancer risk due to inhalation of gas-phase PAHs exceeded the USEPA standard at >90% of the sites. Copyright © 2017 Elsevier B.V. All rights reserved.
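    Molecular diagnostic ratio analysis, as used above, compares concentration ratios of PAH isomer pairs against literature thresholds. A minimal sketch using two common ratios; the interpretation ranges follow commonly cited values (e.g., Yunker et al., 2002) and are indicative screening thresholds, not a definitive classification, and the sample concentrations are invented:

```python
def diagnostic_ratios(conc):
    """Two common PAH isomer ratios from a dict of gas-phase
    concentrations (any consistent unit, e.g. ng/m3)."""
    flt, pyr = conc["fluoranthene"], conc["pyrene"]
    ant, phe = conc["anthracene"], conc["phenanthrene"]
    return {
        "Flt/(Flt+Pyr)": flt / (flt + pyr),
        "Ant/(Ant+Phe)": ant / (ant + phe),
    }

def interpret_flt_pyr(ratio):
    """Commonly used interpretation ranges for Flt/(Flt+Pyr)."""
    if ratio < 0.4:
        return "petrogenic"
    if ratio <= 0.5:
        return "fossil fuel combustion"
    return "grass/wood/coal combustion"

# Hypothetical sample with a combustion-dominated isomer pattern.
sample = {"fluoranthene": 1.2, "pyrene": 0.8,
          "anthracene": 0.1, "phenanthrene": 0.9}
r = diagnostic_ratios(sample)
```

    Cross-checking several independent ratios, as the study does, guards against a single ratio being distorted by differential photodegradation during transport.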

  6. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong, applying the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach.
Such an approach could be coupled to more detailed models of hillslope processes in future to derive integrated models
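    The inverse Monte Carlo step above — draw random source properties, run the forward model, keep only realizations that reproduce the sedimentary record — follows a generic rejection pattern. The sketch below is not CASCADE; the toy forward model, parameter range and acceptance tolerance are invented for illustration:

```python
import random

def inverse_monte_carlo(forward_model, observed, draw_params, n_draws, tol, seed=0):
    """Rejection sampling: keep parameter draws whose simulated record is
    within relative error `tol` of every observation."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        params = draw_params(rng)
        simulated = forward_model(params)
        rel_err = max(abs(s - o) / abs(o) for s, o in zip(simulated, observed))
        if rel_err <= tol:
            accepted.append(params)
    return accepted

# Toy example: one "grain size" parameter a; the true value is 4.0 and the
# forward model maps it to a two-point synthetic "sediment record".
forward = lambda a: [2.0 * a, a + 1.0]
observed = forward(4.0)
draws = inverse_monte_carlo(forward, observed,
                            draw_params=lambda rng: rng.uniform(0.0, 10.0),
                            n_draws=5000, tol=0.1)
```

    As in the study, only a small fraction of random initializations survive the comparison with observations, and the spread of the accepted draws expresses how tightly the record constrains the source properties.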

  7. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate the numerical dissipation inherent in high-order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus, as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation employed play major roles. Moreover, employing time steps and grid spacings below the standard Courant-Friedrichs-Lewy (CFL) limit in shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing below the CFL limit (based on the homogeneous or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with the wrong propagation speed of discontinuities, or (d) other spurious solutions that solve the discretized counterparts but not the governing equations.
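    The pointwise-source pathology described above can be reproduced in a few lines: first-order upwind advection of a step, with an infinitely stiff bistable reaction applied pointwise after each advection step (operator splitting). This is a sketch in the spirit of the classic LeVeque-Yee model problem u_t + u_x = -mu*u*(u-1)*(u-1/2), not one of the paper's test cases. With CFL number 0.4, the smeared front value never reaches the unstable equilibrium at 1/2, so the reaction snaps it back to 0 and the discontinuity stalls instead of moving at the exact speed of 1:

```python
import numpy as np

def stiff_front_demo(nu=0.4, nx=100, nsteps=50):
    """Advect a step u=1 (x<0.3) at unit speed with first-order upwind,
    then, in the infinitely stiff limit, snap each cell to the nearest
    stable equilibrium (0 or 1) of the bistable source term.
    Returns (numerical front position, exact front position)."""
    dx = 1.0 / nx
    dt = nu * dx                                # CFL number nu
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < 0.3, 1.0, 0.0)
    for _ in range(nsteps):
        u[1:] = u[1:] - nu * (u[1:] - u[:-1])   # upwind advection
        u[0] = 1.0                              # inflow boundary
        u = np.where(u < 0.5, 0.0, 1.0)         # stiff reaction step
    numerical_front = x[np.argmax(u < 0.5)]
    exact_front = 0.3 + nsteps * dt             # step advects at speed 1
    return numerical_front, exact_front

num_front, exact_front = stiff_front_demo()
```

    With nu > 0.5 the same scheme instead moves the front one cell per step, i.e. faster than the exact speed, illustrating that the spurious speed is set by the grid and time step rather than by the physics.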

  8. Long Term Leaching of Chlorinated Solvents from Source Zones in Low Permeability Settings with Fractures

    NASA Astrophysics Data System (ADS)

    Bjerg, P. L.; Chambon, J.; Troldborg, M.; Binning, P. J.; Broholm, M. M.; Lemming, G.; Damgaard, I.

    2008-12-01

    Groundwater contamination by chlorinated solvents, such as perchloroethylene (PCE), often occurs via leaching from complex sources located in low permeability sediments such as clayey tills overlying aquifers. Clayey tills are mostly fractured, and contamination migrating through the fractures spreads to the low permeability matrix by diffusion. This results in a long term source of contamination due to back-diffusion. Leaching from such sources is further complicated by microbial degradation under anaerobic conditions to sequentially form the daughter products trichloroethylene (TCE), cis-dichloroethylene (cis-DCE), vinyl chloride (VC) and ethene. This process can be enhanced by addition of electron donors and/or bioaugmentation and is termed Enhanced Reductive Dechlorination (ERD). This work aims to improve our understanding of the physical, chemical and microbial processes governing source behaviour under natural and enhanced conditions. That understanding is applied to risk assessment, and to determine the relationship and time frames of source clean-up and plume response. To meet that aim, field and laboratory observations are coupled to state-of-the-art models incorporating new insights of contaminant behaviour. The long term leaching of chlorinated ethenes from clay aquitards is currently being monitored at a number of Danish sites. The observed data are simulated using a coupled fracture flow and clay matrix diffusion model. Sequential degradation is represented by modified Monod kinetics accounting for competitive inhibition between the chlorinated ethenes. The model is constructed using Comsol Multiphysics, a generic finite-element partial differential equation solver. The model is applied at two well characterised field sites with respect to hydrogeology, fracture network, contaminant distribution and microbial processes (lab and field experiments). At the study sites (Sortebrovej and Vadsbyvej), the source areas are situated in a clayey till with fractures
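
    The sequential degradation scheme can be sketched as a small ODE system with Monod kinetics and competitive inhibition; the rate constants and half-saturation values below are illustrative placeholders, not the calibrated parameters for the Danish sites:

```python
import numpy as np

# Illustrative parameters (concentrations in umol/L, rates in 1/day).
species = ["PCE", "TCE", "cis-DCE", "VC", "ethene"]
mu_max = np.array([1.0, 0.8, 0.6, 0.4])   # max degradation rates, 4 dechlorination steps
K      = np.array([5.0, 4.0, 3.0, 2.0])   # half-saturation constants

def rates(c):
    """Monod rates with competitive inhibition among the chlorinated ethenes."""
    r = np.zeros(4)
    for i in range(4):
        inhib = sum(c[j] / K[j] for j in range(4) if j != i)
        r[i] = mu_max[i] * c[i] / (K[i] * (1.0 + inhib) + c[i])
    return r

def simulate(c0, t_end=50.0, dt=0.01):
    """Explicit-Euler integration of the PCE -> ... -> ethene chain."""
    c = np.array(c0, float)
    for _ in range(int(t_end / dt)):
        r = rates(c)
        dc = np.zeros(5)
        dc[0] = -r[0]              # PCE only consumed
        dc[1:4] = r[0:3] - r[1:4]  # intermediates produced and consumed
        dc[4] = r[3]               # ethene only produced
        c = np.maximum(c + dt * dc, 0.0)
    return c

conc = simulate([10.0, 0.0, 0.0, 0.0, 0.0])
```

Because every consumption term of one species appears as the production term of the next, total molar mass is conserved, which is a useful sanity check on any implementation of the scheme.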

  9. Herschel-ATLAS: Dust Temperature and Redshift Distribution of SPIRE and PACS Detected Sources Using Submillimetre Colours

    NASA Technical Reports Server (NTRS)

    Amblard, A.; Cooray, Asantha; Serra, P.; Temi, P.; Barton, E.; Negrello, M.; Auld, R.; Baes, M.; Baldry, I. K.; Bamford, S.; et al.

    2010-01-01

    We present colour-colour diagrams of detected sources in the Herschel-ATLAS Science Demonstration Field from 100 to 500 microns using both PACS and SPIRE. We fit isothermal modified-blackbody spectral energy distribution (SED) models in order to extract the dust temperature of sources with counterparts in GAMA or SDSS with either a spectroscopic or a photometric redshift. For a subsample of 331 sources detected in at least three FIR bands with significance greater than 3σ, we find an average dust temperature of (28 ± 8) K. For sources with no known redshifts, we populate the colour-colour diagram with a large number of SEDs generated with a broad range of dust temperatures and emissivity parameters and compare to colours of observed sources to establish the redshift distribution of those samples. For another subsample of 1686 sources with fluxes above 35 mJy at 350 microns and detected at 250 and 500 microns with a significance greater than 3σ, we find an average redshift of 2.2 ± 0.6.
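
    An isothermal modified-blackbody fit of the kind described can be sketched as a grid search on dust temperature; the emissivity index β = 1.5, band set and normalization are assumptions for illustration, not the paper's exact fitting setup:

```python
import numpy as np

h, kB, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def greybody(wavelengths_um, T, beta=1.5):
    """Optically thin modified blackbody, S_nu proportional to nu^beta * B_nu(T).
    Returned normalized to its maximum, i.e. only the SED shape (the colours)."""
    nu = c / (np.asarray(wavelengths_um) * 1e-6)
    bnu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (kB * T))
    s = nu**beta * bnu
    return s / s.max()

bands = [100, 160, 250, 350, 500]      # PACS + SPIRE bands, microns
obs = greybody(bands, 28.0)            # synthetic "observed" colours, T = 28 K

# Grid search on temperature by least squares on the normalized fluxes.
temps = np.arange(10.0, 60.0, 0.1)
chi2 = [np.sum((greybody(bands, T) - obs) ** 2) for T in temps]
T_best = float(temps[int(np.argmin(chi2))])
```

Because both model and data are normalized, the fit constrains temperature from the SED shape alone, which is the same information carried by the colour-colour diagrams.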

  10. Ragweed (Ambrosia) pollen source inventory for Austria.

    PubMed

    Karrer, G; Skjøth, C A; Šikoparija, B; Smith, M; Berger, U; Essl, F

    2015-08-01

    This study improves the spatial coverage of top-down Ambrosia pollen source inventories for Europe by expanding the methodology to Austria, a country that is challenging in terms of topography and the distribution of ragweed plants. The inventory combines annual ragweed pollen counts from 19 pollen-monitoring stations in Austria (2004-2013), 657 geographical observations of Ambrosia plants, a Digital Elevation Model (DEM), local knowledge of ragweed ecology and CORINE land cover information from the source area. The highest mean annual ragweed pollen concentrations were generally recorded in the East of Austria, where the highest densities of possible growth habitats for Ambrosia were situated. Approximately 99% of all observations of Ambrosia populations were below 745 m. The European infection level varies from 0.1% at Freistadt in Northern Austria to 12.8% at Rosalia in Eastern Austria. More top-down Ambrosia pollen source inventories are required for other parts of Europe. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  11. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity L_ν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.
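
    The statistics behind the multiplet expectation can be illustrated with a toy Poisson Monte Carlo; the per-source mean count below is an invented stand-in for the effective exposure, not the IceCube detector model:

```python
import numpy as np

rng = np.random.default_rng(0)

def multiplet_fraction(mu, n_sources=100_000):
    """Fraction of sources yielding >= 2 detected neutrinos ("multiplets")
    when the per-source detected count is Poisson with mean mu."""
    counts = rng.poisson(mu, n_sources)
    return float(np.mean(counts >= 2))

mu = 1.0
emp = multiplet_fraction(mu)
ana = 1.0 - np.exp(-mu) * (1.0 + mu)   # analytic P(N >= 2) for a Poisson count
```

The empirical multiplet fraction converges to the closed-form Poisson tail; in the real analysis this expectation is folded with the source density, luminosity and declination-dependent effective area.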

  12. Efficient measurement of large light source near-field color and luminance distributions for optical design and simulation

    NASA Astrophysics Data System (ADS)

    Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald

    2009-08-01

    The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.

  13. Poster - Thur Eve - 06: Comparison of an open source genetic algorithm to the commercially used IPSA for generation of seed distributions in LDR prostate brachytherapy.

    PubMed

    McGeachy, P; Khan, R

    2012-07-01

    In early stage prostate cancer, low dose rate (LDR) prostate brachytherapy is a favorable treatment modality, where small radioactive seeds are permanently implanted throughout the prostate. Treatment centres currently rely on a commercial optimization algorithm, IPSA, to generate seed distributions for treatment plans. However, commercial software does not allow the user access to the source code, thus reducing the flexibility for treatment planning and impeding any implementation of new and, perhaps, improved clinical techniques. An open source genetic algorithm (GA) has been encoded in MATLAB to generate seed distributions for a simplified prostate and urethra model. To assess the quality of the seed distributions created by the GA, both the GA and IPSA were used to generate seed distributions for two clinically relevant scenarios, and the quality of the GA distributions relative to IPSA distributions and clinically accepted standards for seed distributions was investigated. The first clinically relevant scenario involved generating seed distributions for three different prostate volumes (19.2 cc, 32.4 cc, and 54.7 cc). The second scenario involved generating distributions for three separate seed activities (0.397 mCi, 0.455 mCi, and 0.5 mCi). Both the GA and IPSA met the clinically accepted criteria for the two scenarios, where distributions produced by the GA were comparable to IPSA in terms of full coverage of the prostate by the prescribed dose, and minimized dose to the urethra, which passed straight through the prostate. Further, the GA offered improved reduction of high dose regions (i.e., hot spots) within the planned target volume. © 2012 American Association of Physicists in Medicine.
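
    A mutation-only genetic algorithm of the general kind described can be sketched on a one-dimensional toy geometry; the dose kernel, prescription level and urethral penalty below are hypothetical, and this is neither IPSA nor the authors' MATLAB code:

```python
import numpy as np

rng = np.random.default_rng(1)

n_sites, n_pop, n_gen = 30, 40, 60
target = np.linspace(0.0, 1.0, 50)        # 1-D "prostate" sample points
sites = np.linspace(0.0, 1.0, n_sites)    # candidate seed positions
urethra = 0.5                             # "urethra" location to spare

def dose(chrom, x):
    """Superpose a simple 1/(1 + (d/0.1)^2) kernel from each active seed."""
    d = np.abs(x[:, None] - sites[None, chrom.astype(bool)])
    return np.sum(1.0 / (1.0 + (d / 0.1) ** 2), axis=1)

def fitness(chrom):
    dt = dose(chrom, target)
    du = dose(chrom, np.array([urethra]))[0]
    coverage = np.mean(dt >= 2.0)                  # fraction meeting "prescription"
    return coverage - 0.05 * max(du - 4.0, 0.0)    # penalize a urethral hot spot

pop = rng.integers(0, 2, (n_pop, n_sites))         # binary chromosomes: seed on/off
best_hist = []
for _ in range(n_gen):
    fit = np.array([fitness(c) for c in pop])
    pop = pop[np.argsort(fit)[::-1]]               # sort by fitness, best first
    best_hist.append(float(fitness(pop[0])))
    elite = pop[: n_pop // 2]                      # truncation selection + elitism
    kids = elite[rng.integers(0, len(elite), n_pop - len(elite))].copy()
    mut = rng.random(kids.shape) < 0.05            # bit-flip mutation
    kids[mut] ^= 1
    pop = np.vstack([elite, kids])
```

Because the elite half of the population is carried over unchanged, the best fitness is non-decreasing across generations, a property worth asserting in any in-house optimizer before clinical comparison work.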

  14. Black Carbon and Sulfate Aerosols in the Arctic: Long-term Trends, Radiative Impacts, and Source Attributions

    NASA Astrophysics Data System (ADS)

    Wang, H.; Zhang, R.; Yang, Y.; Smith, S.; Rasch, P. J.

    2017-12-01

    North America contributed significantly to the overall decreasing trend in Arctic BC and sulfate, especially, in the lower troposphere. The long-term changes in the spatial distributions of aerosols, their radiative impacts and source attributions, along with implications for the Arctic warming trend, will be discussed.

  15. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    PubMed

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The localization method is verified in experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, also over long sensing ranges.
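
    The dual-line idea can be illustrated on synthetic data: the along-fiber coordinate follows from the peak position, and the transverse coordinate from the ratio of peak amplitudes on the two fibers, here assuming a Gaussian hot-air footprint of known width; the geometry and values are hypothetical, not the paper's experiment:

```python
import numpy as np

# Two parallel fibers along x at y = 0 and y = 1 (metres) on the ceiling plane.
x = np.linspace(0.0, 20.0, 400)        # sensing positions along each fiber
fire = (12.0, 0.3)                     # true (x, y) of the hot-air plume centre

def profile(y_line, amp=30.0, width=1.5):
    """Synthetic temperature rise along one fiber from a 2-D Gaussian plume."""
    d2 = (x - fire[0]) ** 2 + (y_line - fire[1]) ** 2
    return amp * np.exp(-d2 / (2 * width**2))

t0, t1 = profile(0.0), profile(1.0)

x_est = float(x[int(np.argmax(t0 + t1))])   # along-fiber coordinate from the peak
# Transverse coordinate from the peak-amplitude ratio (Gaussian model):
# ln(A0/A1) = ((1 - y)^2 - y^2) / (2 w^2) = (1 - 2y) / (2 w^2)  ->  y = 0.5 - w^2 ln(A0/A1)
w = 1.5                                     # assumed known plume width
y_est = 0.5 - w**2 * float(np.log(t0.max() / t1.max()))
```

The shared x-dependence cancels in the amplitude ratio, so the transverse estimate is insensitive to where exactly the grid samples the peak; only the known plume width enters.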

  16. A methodology for efficiency optimization of betavoltaic cell design using an isotropic planar source having an energy dependent beta particle distribution.

    PubMed

    Theirrattanakul, Sirichai; Prelas, Mark

    2017-09-01

    Nuclear batteries based on silicon carbide betavoltaic cells have been studied extensively in the literature. This paper describes an analysis of design parameters, which can be applied to a variety of materials, but is specific to silicon carbide. In order to optimize the interface between a beta source and a silicon carbide p-n junction, it is important to account for the specific isotope, the angular distribution of the beta particles from the source, the energy distribution of the source, as well as the geometrical aspects of the interface between the source and the transducer. In this work, both the angular distribution and energy distribution of the beta particles are modeled using a thin planar beta source (e.g., H-3, Ni-63, S-35, Pm-147, Sr-90, and Y-90) with GEANT4. Previous studies of betavoltaics with various source isotopes have shown that Monte Carlo-based codes such as MCNPX, GEANT4 and Penelope generate similar results. GEANT4 is chosen because it has important strengths for the treatment of electron energies below one keV and it is widely available. The model demonstrates the effects of the angular distribution, the maximum energy of the beta particle and the energy distribution of the beta source on the betavoltaic, and it is useful in determining the spatial profile of the power deposition in the cell. Copyright © 2017. Published by Elsevier Ltd.

  17. Management of Ultimate Risk of Nuclear Power Plants by Source Terms - Lessons Learned from the Chernobyl Accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genn Saji

    2006-07-01

    The term 'ultimate risk' is used here to describe the probabilities and radiological consequences that should be incorporated in siting, containment design and accident management of nuclear power plants for hypothetical accidents. It is closely related to the source terms specified in siting criteria, which assure an adequate separation of the radioactive inventories of the plants from the public in the event of a hypothetical and severe accident situation. The author would like to point out that current source terms, which are based on information from the Windscale accident (1957) through TID-14844, are very outdated and do not incorporate lessons learned from either the Three Mile Island (TMI, 1979) or the Chernobyl (1986) accident, two of the most severe accidents ever experienced. As a result of the observations of benign radionuclide releases at TMI, the technical community in the US felt that a more realistic evaluation of severe reactor accident source terms was necessary. Against this background, the 'source term research project' was organized in 1984 to respond to these challenges. Unfortunately, soon after the final report from this project was released, the Chernobyl accident occurred. Due to the enormous consequences induced by that accident, the once-optimistic prospects for establishing a more realistic source term were completely shattered. The Chernobyl accident, with its human death toll and the dispersion of a large part of the fission-fragment inventories into the environment, created a significant degradation in the public's acceptance of nuclear energy throughout the world. In spite of this, nuclear communities have been prudent in responding to the public's anxiety about the ultimate safety of nuclear plants, since there still remained many unknown points revolving around the mechanism of the Chernobyl accident. In order to resolve some of these mysteries, the author has performed a scoping study of the dispersion and

  18. Geochemistry of dissolved trace elements and heavy metals in the Dan River Drainage (China): distribution, sources, and water quality assessment.

    PubMed

    Meng, Qingpeng; Zhang, Jing; Zhang, Zhaoyu; Wu, Tairan

    2016-04-01

    Dissolved trace elements and heavy metals in the Dan River drainage basin, which is the drinking water source area of the South-to-North Water Transfer Project (China), affect large numbers of people and should therefore be carefully monitored. To investigate the distribution, sources, and quality of river water, this study, integrating catchment geology and multivariate statistical techniques, was carried out in the Dan River drainage from 99 river water samples collected in 2013. The distribution of trace metal concentrations in the Dan River drainage was similar to that in the Danjiangkou Reservoir, indicating that the reservoir was significantly affected by the Dan River drainage. Moreover, our results suggested that As, Sb, Cd, Mn, and Ni were the major pollutants. We revealed extremely high concentrations of As and Sb in the Laoguan River, Cd in the Qingyou River, Mn, Ni, and Cd in the Yinhua River, As and Sb in the Laojun River, and Sb in the Dan River. According to the water quality index, water in the Dan River drainage was suitable for drinking; however, an exposure risk assessment model suggests that As and Sb in the Laojun and Laoguan rivers could pose a high risk to humans in terms of adverse health and potential non-carcinogenic effects.

  19. Spatial distribution of the RF power absorbed in a helicon plasma source

    NASA Astrophysics Data System (ADS)

    Aleksenko, O. V.; Miroshnichenko, V. I.; Mordik, S. N.

    2014-08-01

    The spatial distributions of the RF power absorbed by plasma electrons in an ion source operating in the helicon mode (ω_ci < ω < ω_ce < ω_pe) are studied numerically by using a simplified model of an RF plasma source in an external uniform magnetic field. The parameters of the source used in numerical simulations are determined by the necessity of the simultaneous excitation of two types of waves, helicons and Trivelpiece-Gould modes, for which the corresponding transparency diagrams are used. The numerical simulations are carried out for two values of the working gas (helium) pressure and two values of the discharge chamber length under the assumption that symmetric modes are excited. The parameters of the source correspond to those of the injector of the nuclear scanning microprobe operating at the Institute of Applied Physics, National Academy of Sciences of Ukraine. It is assumed that the mechanism of RF power absorption is based on the acceleration of plasma electrons in the field of a Trivelpiece-Gould mode, which is interrupted by pair collisions of plasma electrons with neutral atoms and ions of the working gas. The simulation results show that the total absorbed RF power at a fixed plasma density depends in a resonant manner on the magnetic field. The resonance is found to become smoother with increasing working gas pressure. The distributions of the absorbed RF power in the discharge chamber are presented. The achievable density of the extracted current is estimated using the Bohm criterion.

  20. Bacterial Composition in a Metropolitan Drinking Water Distribution System Utilizing Different Source Waters

    EPA Science Inventory

    The microbial community structure was investigated from bulk phase water samples of multiple collection sites from two service areas within the Cincinnati drinking water distribution system (DWDS). Each area is associated with a different primary source of water (i.e., groundwat...

  1. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.
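
    The variance-partitioning logic can be sketched for a complete two-way design (algorithm × climate dataset) with synthetic effect sizes; for a full table, the decomposition into main-effect and interaction components is exact:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic predictions for one grid cell from a crossed design:
# 5 SDM algorithms x 4 climate datasets (illustrative effect sizes only).
alg_eff  = np.array([-1.2, -0.4, 0.0, 0.5, 1.1])[:, None]   # algorithm main effects
clim_eff = np.array([-0.2, -0.05, 0.1, 0.15])[None, :]      # climate main effects
pred = 0.5 + alg_eff + clim_eff + rng.normal(0.0, 0.05, (5, 4))

grand = pred.mean()
v_alg  = np.var(pred.mean(axis=1))   # variance among algorithm means
v_clim = np.var(pred.mean(axis=0))   # variance among climate-dataset means
v_int  = np.var(pred - pred.mean(axis=1, keepdims=True)
                     - pred.mean(axis=0, keepdims=True) + grand)
# For a complete two-way table: np.var(pred) == v_alg + v_clim + v_int
```

When the algorithm component dominates, as in the study's finding, v_alg exceeds the other components; the same averaging scheme extends to more factors, at the cost of a fuller factorial design.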

  2. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

    Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios and in particular interoperability are severely limited. We describe a distributed and collaborative software analysis platform that allows for a seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible in our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-)automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.

  3. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things

    PubMed Central

    Akan, Ozgur B.

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST). PMID:29538405

  4. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    PubMed

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).
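
    The core DSC idea, compressing with respect to side information available only at the decoder, can be shown with a standard toy example (not the D-DSC protocol itself): if the sink's own reading x is known to satisfy |x - y| <= 1, a node needs to transmit only y mod 3 rather than the full value:

```python
def dsc_encode(y):
    """Compress a sensor reading using the known correlation |x - y| <= 1:
    the coset index y mod 3 (about 1.6 bits) replaces the full value."""
    return y % 3

def dsc_decode(code, x):
    """Recover y from the coset index plus the side information x at the sink."""
    for cand in (x - 1, x, x + 1):   # the three values consistent with |x - y| <= 1
        if cand % 3 == code:
            return cand
    raise ValueError("correlation model violated")

# Exhaustive check over a range of correlated readings: three consecutive
# integers always have distinct residues mod 3, so decoding is unambiguous.
for x in range(100):
    for y in (x - 1, x, x + 1):
        assert dsc_decode(dsc_encode(y), x) == y
```

The same coset principle underlies practical Slepian-Wolf-style schemes, where syndromes of an error-correcting code play the role of the modulo operation.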

  5. Multi-source analysis reveals latitudinal and altitudinal shifts in range of Ixodes ricinus at its northern distribution limit.

    PubMed

    Jore, Solveig; Viljugrein, Hildegunn; Hofshagen, Merete; Brun-Hansen, Hege; Kristoffersen, Anja B; Nygård, Karin; Brun, Edgar; Ottesen, Preben; Sævik, Bente K; Ytrehus, Bjørnar

    2011-05-19

    There is increasing evidence for a latitudinal and altitudinal shift in the distribution range of Ixodes ricinus. The reported incidence of tick-borne disease in humans is on the rise in many European countries and has raised political concern and attracted media attention. It is disputed which factors are responsible for these trends, though many ascribe shifts in distribution range to climate changes. Any possible climate effect would be most easily noticeable close to the tick's geographical distribution limits. In Norway, the northern limit of this species in Europe, no documentation of changes in range has been published. The objectives of this study were to describe the distribution of I. ricinus in Norway and to evaluate if any range shifts have occurred relative to historical descriptions. Multiple data sources - such as tick-sighting reports from veterinarians, hunters, and the general public - and surveillance of human and animal tick-borne diseases were compared to describe the present distribution of I. ricinus in Norway. Correlation between data sources and visual comparison of maps revealed spatial consistency. In order to identify the main spatial pattern of tick abundance, a principal component analysis (PCA) was used to obtain a weighted mean of four data sources. The weighted mean explained 67% of the variation of the data sources covering Norway's 430 municipalities and was used to depict the present distribution of I. ricinus. To evaluate if any geographical range shift has occurred in recent decades, the present distribution was compared to historical data from 1943 and 1983. Tick-borne disease and/or observations of I. ricinus were reported in municipalities up to an altitude of 583 metres above sea level (MASL), and I. ricinus is now present in coastal municipalities north to approximately 69°N. I. ricinus is currently found further north and at higher altitudes than described in historical records. The approach used in this study, a multi-source
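
    A PCA-based weighted mean of standardized data sources, of the kind used here to depict the distribution, can be sketched with synthetic municipality-level sources; the signal and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "tick abundance" signal observed through 4 noisy data sources
# over 430 municipalities (noise levels chosen arbitrarily).
n_munic = 430
signal = rng.gamma(2.0, 1.0, n_munic)
sources = np.column_stack([signal + rng.normal(0.0, s, n_munic)
                           for s in (0.3, 0.5, 0.8, 1.0)])

z = (sources - sources.mean(0)) / sources.std(0)   # standardize each source
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained = float(s[0] ** 2 / np.sum(s ** 2))      # variance explained by PC1
weights = vt[0] * np.sign(vt[0].sum())             # PC1 loadings, sign-fixed
score = z @ weights                                # PCA-weighted mean per municipality
```

The first principal component captures the variation shared by all sources, so its score acts as a consensus abundance index, analogous to the weighted mean that explained 67% of the variation in the study.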

  6. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a 'first guess' source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source parameters for location by several hundred percent (normalized by the
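
    The full first-guess-plus-refinement pipeline is more elaborate than can be shown here, but one property it exploits, that concentration is linear in release rate, so source strength becomes a least-squares problem once a candidate location is fixed, can be sketched with a toy ground-level Gaussian plume (the dispersion coefficients and geometry are hypothetical, not VIRSA):

```python
import numpy as np

def plume(q, src, sensors, u=3.0):
    """Ground-level Gaussian plume concentration for release rate q.
    Power-law sigma_y, sigma_z are illustrative placeholders."""
    out = np.zeros(len(sensors))
    for k, (sx, sy_pos) in enumerate(sensors):
        dx, dy = sx - src[0], sy_pos - src[1]
        if dx <= 0:                      # sensor upwind of the source: no impact
            continue
        sig_y = 0.08 * dx ** 0.9
        sig_z = 0.06 * dx ** 0.8
        out[k] = q / (np.pi * u * sig_y * sig_z) * np.exp(-dy**2 / (2 * sig_y**2))
    return out

sensors = [(200.0, -30.0), (400.0, 10.0), (600.0, 60.0), (800.0, -80.0)]
true_src, true_q = (0.0, 0.0), 2.5
obs = plume(true_q, true_src, sensors)   # synthetic noiseless observations

def fit_q(src):
    """Best release rate for a candidate location: linear least squares."""
    f = plume(1.0, src, sensors)         # unit-rate footprint at the sensors
    return float(f @ obs / (f @ f))

# Scan candidate source locations; keep the one with the smallest residual.
cands = [(x, y) for x in (-50.0, 0.0, 50.0) for y in (-20.0, 0.0, 20.0)]
best = min(cands, key=lambda s: np.sum((plume(fit_q(s), s, sensors) - obs) ** 2))
```

In the noiseless case the true location gives a zero residual and the exact rate; real STE systems replace the brute-force scan with back-trajectory first guesses and adjoint-based variational refinement.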

  7. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms.
The parameterization method combined with the analytical solutions for long-term mean
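
    For the simpler textbook crosswind case (the paper's full solutions involve hypergeometric functions), the superposition of ground-level point-source Gaussians over a finite line source reduces to error functions, which can be checked against direct numerical integration; the wind speed, dispersion parameters and line length below are illustrative:

```python
import numpy as np
from math import erf, sqrt, pi

# Ground-level point-source kernel at crosswind offset y:
# c = q / (pi * u * sig_y * sig_z) * exp(-y^2 / (2 sig_y^2))
u, sy, sz = 3.0, 25.0, 12.0   # wind speed and dispersion at the receptor distance
L, q = 100.0, 1.0             # line source from y = -L to +L, unit strength per metre

def numeric(y, n=20001):
    """Trapezoidal superposition of point-source kernels along the line."""
    ys = np.linspace(-L, L, n)
    k = 1.0 / (pi * u * sy * sz) * np.exp(-(y - ys) ** 2 / (2 * sy**2))
    h = ys[1] - ys[0]
    return q * (k.sum() - 0.5 * (k[0] + k[-1])) * h

def analytic(y):
    """Closed form: the crosswind Gaussian integral reduces to erf terms."""
    a = (y + L) / (sqrt(2.0) * sy)
    b = (y - L) / (sqrt(2.0) * sy)
    return q / (pi * u * sy * sz) * sqrt(pi / 2.0) * sy * (erf(a) - erf(b))
```

Replacing the inner quadrature with such closed forms is what removes the dominant cost from long-term mean calculations: the kernel is integrated once analytically instead of numerically for every hour of meteorology.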

  8. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    PubMed

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in the epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on the simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy decreased drastically, and reconstruction volumes shifted toward the center of the head for iEEG while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support the view that reconstruction results for scalp EEG are often more accurate than those for iEEG, owing to the superior 3D coverage of the head. In particular, iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  9. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    NASA Astrophysics Data System (ADS)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in the epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on the simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy decreased drastically, and reconstruction volumes shifted toward the center of the head for iEEG while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support the view that reconstruction results for scalp EEG are often more accurate than those for iEEG, owing to the superior 3D coverage of the head. In particular, iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  10. Energy & mass-charge distribution peculiarities of ions emitted from a Penning source

    NASA Astrophysics Data System (ADS)

    Mamedov, N. V.; Kolodko, D. V.; Sorokin, I. A.; Kanshin, I. A.; Sinelnikov, D. N.

    2017-05-01

    The optimization of hydrogen Penning sources used, in particular, in plasma chemical processing of materials and DLC deposition is still very important. Investigations of the mass-charge composition of the beams emitted by these ion sources are particularly relevant today for miniature linear accelerators (neutron flux generators). The energy and mass-charge ion distributions of a Penning ion source are presented. The relation between abrupt jumps in the discharge current, an increasing plasma density in the discharge center, and an increasing potential whipping (up to 50% of the anode voltage) is shown. Energy spectra in the different discharge modes are also presented as functions of pressure and anode potential. It has been revealed that the atomic hydrogen ion concentration is about 5-10%, that it depends only weakly on the pressure and the discharge current (over the investigated ranges of 1 to 10 mTorr and 50 to 1000 μA), and that it increases with the anode voltage (from 1 to 3.5 kV).

  11. Trace elements in particulate matter from metropolitan regions of Northern China: Sources, concentrations and size distributions.

    PubMed

    Pan, Yuepeng; Tian, Shili; Li, Xingru; Sun, Ying; Li, Yi; Wentworth, Gregory R; Wang, Yuesi

    2015-12-15

    Public concerns over airborne trace elements (TEs) in metropolitan areas are increasing, but long-term and multi-site observations of size-resolved aerosol TEs in China are still lacking. Here, we identify highly elevated levels of atmospheric TEs in megacities and industrial sites in a Beijing-Tianjin-Hebei urban agglomeration relative to background areas, with the annual mean values of As, Pb, Ni, Cd and Mn exceeding the acceptable limits of the World Health Organization. Despite the spatial variability in concentrations, the size distribution pattern of each trace element was quite similar across the region. Crustal elements of Al and Fe were mainly found in coarse particles (2.1-9 μm), whereas the main fraction of toxic metals, such as Cu, Zn, As, Se, Cd and Pb, was found in submicron particles (<1.1 μm). These toxic metals were enriched by over 100-fold relative to the Earth's crust. The size distributions of Na, Mg, K, Ca, V, Cr, Mn, Ni, Mo and Ba were bimodal, with two peaks at 0.43-0.65 μm and 4.7-5.8 μm. The combination of the size distribution information, principal component analysis and an air mass back trajectory model offered a robust technique for distinguishing the main sources of airborne TEs, e.g., soil dust, fossil fuel combustion and industrial emissions, at different sites. In addition, higher elemental concentrations coincided with westerly flow, indicating that polluted soil and fugitive dust were major sources of TEs on the regional scale. However, the contribution of coal burning, iron industry/oil combustion and non-ferrous smelters to atmospheric metal pollution in Northern China should be given more attention. Considering that the concentrations of heavy metals associated with fine particles in the target region were significantly higher than those at other Asian sites, the implementation of strict environmental standards in China is required to reduce the amounts of these hazardous pollutants released into the atmosphere.

  12. Testing contamination source identification methods for water distribution networks

    DOE PAGES

    Seth, Arpan; Klise, Katherine A.; Siirola, John D.; ...

    2016-04-01

    In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. The various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. This manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors, including the size of the WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods, together with a testing framework that includes the test cases and analysis tools presented in this paper, have been integrated into EPA's Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan response strategies in case of a contamination incident. Lastly, a set of recommendations is made for users to consider when working with different categories of SI methods.
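
    As a toy illustration of the SI problem setting (not one of the WST algorithms evaluated in the paper), a least-squares search over candidate injection nodes in a hypothetical linear transport model might look like:

```python
# Hypothetical sketch: response[j] holds the sensor readings produced by a
# unit injection at candidate node j (a linearized transport model). The
# candidate whose optimally scaled response best fits the observations is
# reported as the source. Network and numbers are made up.

def identify_source(response, observed):
    """Return (best_node, residual) via 1-D least squares per candidate."""
    best, best_res = None, float("inf")
    for node, resp in response.items():
        denom = sum(r * r for r in resp)
        if denom == 0:
            continue
        # optimal injection mass for this candidate
        mass = sum(r * o for r, o in zip(resp, observed)) / denom
        res = sum((mass * r - o) ** 2 for r, o in zip(resp, observed))
        if res < best_res:
            best, best_res = node, res
    return best, best_res

response = {                      # toy 3-sensor network
    "N1": [0.9, 0.1, 0.0],
    "N2": [0.2, 0.7, 0.3],
    "N3": [0.0, 0.2, 0.8],
}
observed = [0.4, 1.4, 0.6]        # synthetic readings: 2.0 units injected at N2
node, residual = identify_source(response, observed)
```

    Measurement and modeling error, injection timing, and sensor count (the factors varied in the paper's test cases) would all perturb `observed` and `response` and degrade this identification.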

  13. Distributions of clay minerals in surface sediments of the middle Bay of Bengal: Source and transport pattern

    NASA Astrophysics Data System (ADS)

    Li, Jingrui; Liu, Shengfa; Shi, Xuefa; Feng, Xiuli; Fang, Xisheng; Cao, Peng; Sun, Xingquan; Ye, Wenxing; Khokiattiwong, Somkiat; Kornkanitnan, Narumol

    2017-08-01

    The clay mineral contents of 110 surface sediment samples collected from the middle of the Bay of Bengal were analyzed by X-ray diffraction (XRD) to investigate provenance and transport patterns. Illite content was highest, followed by chlorite, kaolinite and then smectite, with average weight percentages of 52%, 22%, 14% and 12%, respectively. Illite and chlorite had similar distribution patterns, with higher contents in the northern and central areas and lower contents in the southern area, whereas smectite showed the opposite pattern. Kaolinite showed no obvious high- or low-content areas, and the southern "belt" was one of the highest-content areas. Based on the spatial distribution characteristics and cluster analysis results, the study area can be classified into two provinces. Province I covers the southwestern area and contains sediments with high concentrations of illite and smectite. Province II covers most sites and is also characterized by high concentrations of illite, but its smectite weight percentage is only half that of Province I. According to a quantitative estimate using end-member clay mineral contents, the relative contributions from the Himalayan source and the Indian source are 63% and 37% on average, respectively. Integrative analysis indicates that the hydrodynamic environment in the study area, especially turbidity currents and the surface monsoonal circulation, plays an important role in the spatial distribution and dispersal of the clay fraction in the sediments. The sediments in Province I are mainly from the Indian source, transported by the East Indian Coastal Current (EICC) and the surface monsoon circulation, with minor contributions from the Himalayan source, while the sediments in Province II are mainly from the Himalayan source, transported by turbidity currents and the surface monsoonal circulation, with little contribution from Indian river materials.

  14. Distribution and sources of carbon, nitrogen, phosphorus and biogenic silica in the sediments of Chilika lagoon

    NASA Astrophysics Data System (ADS)

    Nazneen, Sadaf; Raju, N. Janardhana

    2017-02-01

    The present study investigated the spatial and vertical distribution of organic carbon (OC), total nitrogen (TN), total phosphorus (TP) and biogenic silica (BSi) in the sedimentary environments of Asia's largest brackish-water lagoon. Surface and core sediments were collected from various locations in the Chilika lagoon and analysed for grain-size distribution and major elements in order to understand their distribution and sources. Sand is the dominant fraction, followed by silt + clay. Primary production within the lagoon, terrestrial input from river discharge and anthropogenic activities in the vicinity of the lagoon control the distribution of OC, TN, TP and BSi in the surface as well as the core sediments. Low C/N ratios in the surface sediments (3.41-3.49) and cores (4-11.86) suggest that phytoplankton and macroalgae may be major contributors of organic matter (OM) in the lagoon. BSi is mainly associated with the mud fraction. Core C5 from the Balugaon region shows the highest concentration of OC, ranging from 0.58 to 2.34%, especially in the upper 30 cm, due to direct discharge of large amounts of untreated sewage into the lagoon. The study highlights that Chilika is a dynamic ecosystem with a large contribution of OM from autochthonous sources, with some input from anthropogenic sources as well.

  15. Integrating multiple data sources in species distribution modeling: A framework for data fusion

    USGS Publications Warehouse

    Pacifici, Krishna; Reich, Brian J.; Miller, David A.W.; Gardner, Beth; Stauffer, Glenn E.; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A.

    2017-01-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species' occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently, several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and we develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches (“Shared,” “Correlation,” “Covariates”) for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three approaches that used the second data source improved out-of-sample predictions relative to a single data source (“Single”). When the information in the second data source is of high quality, the Shared model performs best, but the Correlation and Covariates models also perform well. When the information in the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow for both data types to be used will maximize the useful information available for estimating species distributions.

  16. Accident Source Terms for Pressurized Water Reactors with High-Burnup Cores Calculated using MELCOR 1.8.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.

    2016-12-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best-estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup, in contrast with low burnup, on fission product releases to the containment. Supporting this emphasis, recent data on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ an approximate order-statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU

  17. Basic repository source term and data sheet report: Lavender Canyon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-01-01

    This report is one of a series describing studies undertaken in support of the US Department of Energy Civilian Radioactive Waste Management (CRWM) Program. This study contains the derivation of values for environmental source terms and resources consumed for a CRWM repository. Estimates include heavy construction equipment; support equipment; shaft-sinking equipment; transportation equipment; and consumption of fuel, water, electricity, and natural gas. Data are presented for construction and operation at an assumed site in Lavender Canyon, Utah. 3 refs; 6 tabs.

  18. A review on the sources and spatial-temporal distributions of Pb in Jiaozhou Bay

    NASA Astrophysics Data System (ADS)

    Yang, Dongfang; Zhang, Jie; Wang, Ming; Zhu, Sixi; Wu, Yunjie

    2017-12-01

    This paper reviews the sources, spatial distribution and temporal variation of Pb in Jiaozhou Bay, based on investigations of Pb in surface and bottom waters in different seasons during 1979-1983. The strengths of the Pb sources in Jiaozhou Bay showed increasing trends, and the pollution level of Pb in the bay was slight to moderate in the early stage of reform and opening-up. Pb contents in the bay were mainly determined by the strength and frequency of Pb inputs from human activities, and Pb could move from high-content areas to low-content areas in the ocean interior. Surface waters were polluted directly by human activities, while bottom waters were polluted through vertical transport in the water column. The spatial distribution of Pb in the waters developed in three steps: (1) Pb was transferred to surface waters of the bay, (2) Pb was transported within the surface waters, and (3) Pb was transferred to, and accumulated in, the bottom waters.

  19. Sources of Uncertainty and the Interpretation of Short-Term Fluctuations

    NASA Astrophysics Data System (ADS)

    Lewandowsky, S.; Risbey, J.; Cowtan, K.; Rahmstorf, S.

    2016-12-01

    The alleged significant slowdown in global warming during the first decade of the 21st century, and the appearance of a discrepancy between models and observations, has attracted considerable research attention. We trace the history of this research and show how its conclusions were shaped by several sources of uncertainty and ambiguity about models and observations. We show that as those sources of uncertainty were gradually eliminated by further research, insufficient evidence remained to infer any discrepancy between models and observations or a significant slowing of warming. Specifically, we show that early research had to contend with uncertainties about coverage biases in the global temperature record and biases in the sea surface temperature observations which turned out to have exaggerated the extent of slowing. In addition, uncertainties in the observed forcings were found to have exaggerated the mismatch between models and observations. Further sources of uncertainty that were ultimately eliminated involved the use of incommensurate sea surface temperature data between models and observations and a tacit interpretation of model projections as predictions or forecasts. After all those sources of uncertainty were eliminated, the most recent research finds little evidence for an unusual slowdown or a discrepancy between models and observations. We discuss whether these different kinds of uncertainty could have been anticipated or managed differently, and how one can apply those lessons to future short-term fluctuations in warming.

  20. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    PubMed

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
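
    A minimal single-swarm PSO sketch of the localization half of this idea follows. The paper's Dual-PSO additionally retrieves water-quality parameters from UV-visible spectra on each node; here the concentration decay model, sensor layout and all constants are illustrative assumptions.

```python
import random

# Toy PSO source localization: stationary sensors record concentrations from
# an unknown source under an assumed isotropic inverse-square decay; the
# swarm searches for the position that best reproduces the readings.

random.seed(0)
TRUE_SRC = (60.0, 40.0)
SENSORS = [(10, 10), (90, 15), (50, 90), (20, 70), (80, 80)]

def reading(sensor, src=TRUE_SRC, strength=100.0):
    d2 = (sensor[0] - src[0]) ** 2 + (sensor[1] - src[1]) ** 2
    return strength / (1.0 + d2)          # assumed decay law

OBS = [reading(s) for s in SENSORS]       # noise-free synthetic observations

def misfit(pos):
    return sum((reading(s, src=pos) - o) ** 2 for s, o in zip(SENSORS, OBS))

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=100.0):
    pts = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pts]
    gbest = min(pbest, key=misfit)[:]
    for _ in range(iters):
        for i, p in enumerate(pts):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] = min(hi, max(lo, p[d] + vel[i][d]))
            if misfit(p) < misfit(pbest[i]):
                pbest[i] = p[:]
                if misfit(p) < misfit(gbest):
                    gbest = p[:]
    return gbest

est = pso()   # estimated source position
```

    In the paper's setting the mobile nodes themselves act as particles, so each misfit evaluation costs a physical vehicle movement, which is why path efficiency matters.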

  1. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    PubMed Central

    Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-01-01

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths. PMID:29462929

  2. Entanglement distribution in multi-particle systems in terms of unified entropy.

    PubMed

    Luo, Yu; Zhang, Fu-Gang; Li, Yongming

    2017-04-25

    We investigate the entanglement distribution in multi-particle systems in terms of unified (q, s)-entropy. We find that for any tripartite mixed state, the unified (q, s)-entropy entanglement of assistance follows a polygamy relation. This polygamy relation also holds in multi-particle systems. Furthermore, a generalized monogamy relation is provided for unified (q, s)-entropy entanglement in the multi-qubit system.
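
    The quantities involved can be written out explicitly. The following is a notational sketch using the standard definition of unified entropy (an assumption here, not reproduced from the paper):

```latex
% Unified (q,s)-entropy of a state \rho, for q > 0, q \neq 1, s \neq 0:
E_{q,s}(\rho) \;=\; \frac{\bigl(\operatorname{Tr}\rho^{q}\bigr)^{s} - 1}{(1-q)\,s}
% Limits: s \to 0 recovers the R\'enyi entropy, s = 1 the Tsallis entropy,
% and q \to 1 the von Neumann entropy.

% Polygamy relation of the form reported for the unified-entropy
% entanglement of assistance E^{a}_{q,s} on a tripartite state \rho_{ABC}:
E^{a}_{q,s}\!\left(\rho_{A|BC}\right) \;\le\;
E^{a}_{q,s}\!\left(\rho_{AB}\right) + E^{a}_{q,s}\!\left(\rho_{AC}\right)
```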

  3. Free-Space Quantum Key Distribution with a High Generation Rate KTP Waveguide Photon-Pair Source

    NASA Technical Reports Server (NTRS)

    Wilson, J.; Chaffee, D.; Wilson, N.; Lekki, J.; Tokars, R.; Pouch, J.; Lind, A.; Cavin, J.; Helmick, S.; Roberts, T.

    2016-01-01

    NASA awarded Small Business Innovative Research (SBIR) contracts to AdvR, Inc. to develop a high generation rate source of entangled photons that could be used to explore quantum key distribution (QKD) protocols. The final product, a photon pair source using a dual-element periodically-poled potassium titanyl phosphate (KTP) waveguide, was delivered to NASA Glenn Research Center in June of 2015. This paper describes the source, its characterization, and its performance in a B92 (Bennett, 1992) protocol QKD experiment.

  4. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog: γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo than in the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution: the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone
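
    The clustering diagnostic (a gamma shape parameter below 1 signaling an over-abundance of short waiting times relative to a Poisson process) can be illustrated with a moment-based fit on synthetic data. This is a sketch under assumed mixture parameters, not the authors' estimator or catalog:

```python
import random
import statistics

# Method-of-moments gamma shape estimate for waiting times: for a gamma
# distribution, shape = mean^2 / variance, and shape = 1 is the exponential
# (Poisson-process) case. Clustered catalogs yield shape < 1.

def gamma_shape(waits):
    m = statistics.fmean(waits)
    v = statistics.pvariance(waits)
    return m * m / v

random.seed(1)
# Poisson-like process: exponential waiting times, shape should be ~1.
poisson_like = [random.expovariate(1.0) for _ in range(5000)]
# Clustered process: many short waits (triggered events) plus long gaps.
clustered = [random.expovariate(10.0) if random.random() < 0.7
             else random.expovariate(0.5) for _ in range(5000)]

g_poisson = gamma_shape(poisson_like)
g_clustered = gamma_shape(clustered)
```

    The mixture pushes the shape estimate well below 1, mimicking the deviation from exponential behavior reported for the tsunami catalog.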

  5. Preliminary investigation of processes that affect source term identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wickliff, D.S.; Solomon, D.K.; Farrow, N.D.

    Solid Waste Storage Area (SWSA) 5 is known to be a significant source of contaminants, especially tritium (³H), to the White Oak Creek (WOC) watershed. For example, Solomon et al. (1991) estimated the total ³H discharge in Melton Branch (most of which originates in SWSA 5) for the 1988 water year to be 1210 Ci. A critical issue for making decisions concerning remedial actions at SWSA 5 is knowing whether the annual contaminant discharge is increasing or decreasing. Because (1) the magnitude of the annual contaminant discharge is highly correlated with the amount of annual precipitation (Solomon et al., 1991) and (2) a significant lag may exist between the time of peak contaminant release from primary sources (i.e., waste trenches) and the time of peak discharge into streams, short-term stream monitoring by itself is not sufficient for predicting future contaminant discharges. In this study we use ³H to examine the link between contaminant release from primary waste sources and contaminant discharge into streams. By understanding and quantifying subsurface transport processes, realistic predictions of future contaminant discharge, along with an evaluation of the effectiveness of remedial action alternatives, will be possible. The objectives of this study are (1) to characterize the subsurface movement of contaminants (primarily ³H) with an emphasis on the effects of matrix diffusion; (2) to determine the relative strength of primary vs. secondary sources; and (3) to establish a methodology capable of determining whether the ³H discharge from SWSA 5 to streams is increasing or decreasing.

  6. Quantifying the uncertainty of nonpoint source attribution in distributed water quality models: A Bayesian assessment of SWAT's sediment export predictions

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Long, Tanya; Boyd, Duncan

    2014-11-01

    Spatially distributed nonpoint source watershed models are essential tools to estimate the magnitude and sources of diffuse pollution. However, little work has been undertaken to understand the sources and ramifications of the uncertainty involved in their use. In this study we conduct the first Bayesian uncertainty analysis of the water quality components of the SWAT model, one of the most commonly used distributed nonpoint source models. Working in Southern Ontario, we apply three Bayesian configurations for calibrating SWAT to Redhill Creek, an urban catchment, and Grindstone Creek, an agricultural one. We answer four interrelated questions: can SWAT determine suspended sediment sources with confidence when end of basin data is used for calibration? How does uncertainty propagate from the discharge submodel to the suspended sediment submodels? Do the estimated sediment sources vary when different calibration approaches are used? Can we combine the knowledge gained from different calibration approaches? We show that: (i) despite reasonable fit at the basin outlet, the simulated sediment sources are subject to uncertainty sufficient to undermine the typical approach of reliance on a single, best fit simulation; (ii) more than a third of the uncertainty of sediment load predictions may stem from the discharge submodel; (iii) estimated sediment sources do vary significantly across the three statistical configurations of model calibration despite end-of-basin predictions being virtually identical; and (iv) Bayesian model averaging is an approach that can synthesize predictions when a number of adequate distributed models make divergent source apportionments. We conclude with recommendations for future research to reduce the uncertainty encountered when using distributed nonpoint source models for source apportionment.
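
    The Bayesian model averaging step recommended above (synthesizing divergent source apportionments from several adequate calibrations) can be sketched as follows. The BIC values, the weight proxy exp(-ΔBIC/2), and the source splits are all made up for illustration; they are not from the SWAT study:

```python
import math

# Hypothetical BMA over three calibration configurations, each reporting a
# sediment-source apportionment (fractions summing to 1). Each configuration
# is weighted by a marginal-likelihood proxy, here taken as exp(-BIC/2).

calibrations = {
    #            BIC    (urban, agricultural, bank-erosion fractions)
    "config_A": (210.0, (0.50, 0.30, 0.20)),
    "config_B": (212.0, (0.35, 0.45, 0.20)),
    "config_C": (218.0, (0.20, 0.40, 0.40)),
}

def bma_apportionment(cals):
    bmin = min(b for b, _ in cals.values())   # subtract min BIC for stability
    w = {k: math.exp(-(b - bmin) / 2) for k, (b, _) in cals.items()}
    z = sum(w.values())
    w = {k: wi / z for k, wi in w.items()}    # normalized model weights
    n = len(next(iter(cals.values()))[1])
    avg = tuple(sum(w[k] * cals[k][1][i] for k in cals) for i in range(n))
    return w, avg

weights, sources = bma_apportionment(calibrations)
```

    Better-fitting configurations dominate the average, but no single "best fit" simulation is relied on exclusively, which is the point of the recommendation.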

  7. Detection and Estimation of 2-D Distributions of Greenhouse Gas Source Concentrations and Emissions over Complex Urban Environments and Industrial Sites

    NASA Astrophysics Data System (ADS)

    Zaccheo, T. S.; Pernini, T.; Dobler, J. T.; Blume, N.; Braun, M.

    2017-12-01

    This work highlights the use of the greenhouse-gas laser imaging tomography experiment (GreenLITE™) data in conjunction with a sparse tomography approach to identify and quantify both urban and industrial sources of CO2 and CH4. The GreenLITE™ system provides a user-defined set of time-sequenced intersecting chords, or integrated column measurements, at a fixed height through a quasi-horizontal plane of interest. This plane, with unobstructed views along the lines of sight, may range from complex industrial facilities to a small city scale or urban sector. The continuous, time-phased absorption measurements are converted to column concentrations and combined with a plume-based model to estimate the 2-D distribution of gas concentration over extended areas ranging from 0.04-25 km². Finally, these 2-D maps of concentration are combined with ancillary meteorological and atmospheric data to identify potential emission sources and provide first-order estimates of their associated fluxes. In this presentation, we will provide a brief overview of the systems and results from both controlled release experiments and a long-term system deployment in Paris, FR. These results provide a quantitative assessment of the system's ability to detect and estimate CO2 and CH4 sources, and demonstrate its ability to perform long-term autonomous monitoring and quantification of either persistent or sporadic emissions that may have both health and safety as well as environmental impacts.
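The chord-based retrieval can be illustrated with a toy least-squares inversion: discretize the horizontal plane into cells, express each integrated-column chord as a path-length-weighted sum of cell concentrations, and solve for the cell values. The grid, chord geometry, and concentrations below are invented for illustration; the actual GreenLITE™ retrieval uses a sparse-tomography approach coupled to a plume-based model:

```python
import numpy as np

# 2x2 grid of cells with unknown concentrations x; each row of A holds
# the path length of one chord through each cell (illustrative geometry:
# two horizontal, two vertical, and one diagonal chord).
A = np.array([
    [1.0,   1.0,   0.0,   0.0],    # chord across the top row
    [0.0,   0.0,   1.0,   1.0],    # chord across the bottom row
    [1.0,   0.0,   1.0,   0.0],    # chord down the left column
    [0.0,   1.0,   0.0,   1.0],    # chord down the right column
    [1.414, 0.0,   0.0,   1.414],  # diagonal chord through two cells
])
x_true = np.array([400.0, 410.0, 405.0, 430.0])  # ppm-like cell values
y = A @ x_true                                   # integrated-column data

# Least-squares retrieval of the 2-D distribution from chord integrals
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The diagonal chord makes the system full rank here; in practice the chord pattern is designed so the field of interest is observable.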

  8. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  9. Advanced Beamforming Concepts: Source Localization Using the Bispectrum, Gabor Transform, Wigner-Ville Distribution, and Nonstationary Signal Representation

    DTIC Science & Technology

    1991-12-01

    Allen, J. C. The bispectrum yields a bispectral direction finder; estimates of time-frequency distributions produce Wigner-Ville and Gabor direction finders.

  10. Elemental composition and size distribution of particulates in Cleveland, Ohio

    NASA Technical Reports Server (NTRS)

    King, R. B.; Fordyce, J. S.; Neustadter, H. E.; Leibecki, H. F.

    1975-01-01

    Measurements were made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) was used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.

  11. Elemental composition and size distribution of particulates in Cleveland, Ohio

    NASA Technical Reports Server (NTRS)

    Leibecki, H. F.; King, R. B.; Fordyce, J. S.; Neustadter, H. E.

    1975-01-01

    Measurements have been made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) was used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.

  12. Distributed watershed modeling of design storms to identify nonpoint source loading areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endreny, T.A.; Wood, E.F.

    1999-03-01

    Watershed areas that generate nonpoint source (NPS) polluted runoff need to be identified prior to the design of basin-wide water quality projects. Current watershed-scale NPS models lack a variable source area (VSA) hydrology routine, and are therefore unable to identify spatially dynamic runoff zones. The TOPLATS model used a watertable-driven VSA hydrology routine to identify runoff zones in a 17.5 km² agricultural watershed in central Oklahoma. Runoff areas were identified in a static modeling framework as a function of prestorm watertable depth and also in a dynamic modeling framework by simulating basin response to 2, 10, and 25 yr return period 6 h design storms. Variable source area expansion occurred throughout the duration of each 6 h storm and total runoff area increased with design storm intensity. Basin-average runoff rates of 1 mm h⁻¹ provided little insight into runoff extremes while the spatially distributed analysis identified saturation excess zones with runoff rates equaling effective precipitation. The intersection of agricultural landcover areas with these saturation excess runoff zones targeted the priority potential NPS runoff zones that should be validated with field visits. These intersected areas, labeled as potential NPS runoff zones, were mapped within the watershed to demonstrate spatial analysis options available in TOPLATS for managing complex distributions of watershed runoff. TOPLATS concepts in spatial saturation excess runoff modeling should be incorporated into NPS management models.

  13. On the application of subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1989-01-01

    LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling reacting flow problems and discovered that for the very stiff case most of the current finite difference methods developed for non-reacting flows would produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used and time evolutions are done by advancing along the characteristics. Numerical experiments using this scheme show excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
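The time-splitting idea can be sketched on the LeVeque-Yee model problem u_t + u_x = -mu*u*(u-1)*(u-1/2): advance a half step of the stiff source ODE, a full advection step, then another source half step. This toy version uses first-order upwind and a sub-cycled explicit source solve instead of the ENO/SRCD machinery, and all parameters are illustrative:

```python
import numpy as np

# Strang splitting for u_t + u_x = s(u), s(u) = -mu*u*(u-1)*(u-0.5)
# (the LeVeque-Yee model source). Moderately stiff illustrative setup.
mu, L, N, T = 100.0, 1.0, 200, 0.3
dx = L / N
dt = 0.5 * dx                       # CFL 0.5 for unit advection speed
x = np.arange(N) * dx
u = np.where(x < 0.3, 1.0, 0.0)     # step initial data

def source_half_step(u, dt):
    # sub-cycle the stiff ODE u' = s(u) with small explicit steps
    nsub = 50
    h = 0.5 * dt / nsub
    for _ in range(nsub):
        u = u + h * (-mu * u * (u - 1.0) * (u - 0.5))
    return u

t = 0.0
while t < T - 1e-12:
    u = source_half_step(u, dt)                 # source, half step
    u = u - dt / dx * (u - np.roll(u, 1))       # upwind advection, periodic
    u = source_half_step(u, dt)                 # source, half step
    t += dt
```

For much stiffer mu, naive schemes like this exhibit the wrong-propagation-speed pathology that LeVeque and Yee identified and that ENO/SRCD is designed to avoid.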

  14. A functional magnetic resonance imaging investigation of short-term source and item memory for negative pictures.

    PubMed

    Mitchell, Karen J; Mather, Mara; Johnson, Marcia K; Raye, Carol L; Greene, Erich J

    2006-10-02

    We investigated the hypothesis that arousal recruits attention to item information, thereby disrupting working memory processes that help bind items to context. Using functional magnetic resonance imaging, we compared brain activity when participants remembered negative or neutral picture-location conjunctions (source memory) versus pictures only. Behaviorally, negative trials showed disruption of short-term source, but not picture, memory; long-term picture recognition memory was better for negative than for neutral pictures. Activity in areas involved in working memory and feature integration (precentral gyrus and its intersect with superior temporal gyrus) was attenuated on negative compared with neutral source trials relative to picture-only trials. Visual processing areas (middle occipital and lingual gyri) showed greater activity for negative than for neutral trials, especially on picture-only trials.

  15. MOVES-Matrix and distributed computing for microscale line source dispersion analysis.

    PubMed

    Liu, Haobing; Xu, Xiaodan; Rodgers, Michael O; Xu, Yanzhi Ann; Guensler, Randall L

    2017-07-01

    MOVES and AERMOD are the U.S. Environmental Protection Agency's recommended models for use in project-level transportation conformity and hot-spot analysis. However, the structure and algorithms involved in running MOVES make analyses cumbersome and time-consuming. Likewise, AERMOD's modeling setup process, with its extensive data requirements and rigid input formats, leads to a high potential for analysis error in dispersion modeling. This study presents a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix, a high-performance emission modeling tool, with the microscale dispersion models CALINE4 and AERMOD. MOVES-Matrix was prepared by iteratively running MOVES across all possible combinations of vehicle source type, fuel, operating conditions, and environmental parameters to create a large multi-dimensional emission-rate lookup matrix. AERMOD and CALINE4 are connected with MOVES-Matrix in a distributed computing cluster using a series of Python scripts. This streamlined system built on MOVES-Matrix generates exactly the same emission rates and concentration results as using MOVES with AERMOD and CALINE4, but the approach is more than 200 times faster than using the MOVES graphical user interface. Because AERMOD requires detailed meteorological input, which is difficult to obtain, this study also recommends using CALINE4 as a screening tool for identifying areas that may exceed air quality standards, and areas exceedingly unlikely to exceed them, before applying AERMOD. The CALINE4 worst-case method yields consistently higher concentration results than AERMOD for all comparisons in this paper, as expected given the nature of the meteorological data employed.
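The core idea of MOVES-Matrix, as described, is to replace repeated model runs with an O(1) lookup into a precomputed multi-dimensional emission-rate array. A toy sketch with invented dimensions, bin edges, and rates (the real matrix spans many more dimensions and bins):

```python
import numpy as np

# Precomputed emission-rate grid: 2 source types x 3 speed bins x
# 2 temperature bins. Values are random placeholders standing in for
# rates that would be generated by iterating the emission model once.
rng = np.random.default_rng(0)
rates = rng.uniform(0.1, 2.0, size=(2, 3, 2))   # g/mile, computed once

speed_bins = np.array([0.0, 25.0, 50.0])        # bin lower edges, mph
temp_bins = np.array([-10.0, 20.0])             # bin lower edges, deg C

def lookup(source_type, speed, temp):
    """O(1) retrieval instead of re-running the emission model."""
    i = int(np.searchsorted(speed_bins, speed, side="right") - 1)
    j = int(np.searchsorted(temp_bins, temp, side="right") - 1)
    return rates[source_type, i, j]

r = lookup(1, 37.0, 25.0)   # falls in speed bin 1, temperature bin 1
```

Dispersion models in the cluster then consume these rates directly, which is where the reported >200x speedup over the graphical interface comes from.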

  16. Connecting source aggregating areas with distributive regions via Optimal Transportation theory.

    NASA Astrophysics Data System (ADS)

    Lanzoni, S.; Putti, M.

    2016-12-01

    We study the application of Optimal Transport (OT) theory to the transfer of water and sediments from a distributed aggregating source to a distributing area connected by an erodible hillslope. Starting from the Monge-Kantorovich equations, we derive a global energy functional that nonlinearly combines the cost of constructing the drainage network over the entire domain and the cost of water and sediment transportation through the network. It can be shown that the minimization of this functional is equivalent to the infinite-time solution of a system of diffusion partial differential equations coupled with transient ordinary differential equations that closely resemble the classical conservation laws for water and sediment mass and momentum. We present several numerical simulations applied to realistic test cases. For example, the solution of the proposed model forms network configurations that share strong similarities with rill channels formed on a hillslope. At a larger scale, we obtain promising results in simulating the network patterns that ensure a progressive and continuous transition from a drainage area to a distributive receiving region.

  17. Disentangling the major source areas for an intense aerosol advection in the Central Mediterranean on the basis of Potential Source Contribution Function modeling of chemical and size distribution measurements

    NASA Astrophysics Data System (ADS)

    Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David

    2018-05-01

    In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions, with the aim of disentangling the major source areas during a complex and rapidly modulating advection event impacting Central Italy in 2013. Chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), main soluble ions from ion chromatography, and elemental and organic carbon (EC, OC) from thermo-optical measurements. Size distributions were recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by the positive identification of two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Saharan sources were discriminated based on both chemistry and size distribution time evolution. Hourly back-trajectory (BT) calculations provided the best results in comparison to 6 h or 24 h based calculations.
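A PSCF analysis like the one described assigns each grid cell the conditional probability m_ij/n_ij, where n_ij counts back-trajectory endpoints falling in the cell and m_ij the subset associated with high-concentration samples at the receptor. A minimal sketch with invented endpoint counts:

```python
import numpy as np

# Toy 2x2 grid of trajectory-endpoint counts (values are illustrative):
n = np.array([[20.0, 5.0], [8.0, 40.0]])   # all endpoints per cell
m = np.array([[15.0, 1.0], [2.0, 10.0]])   # endpoints from "polluted" days

# PSCF value per cell: fraction of endpoints tied to high concentrations
pscf = np.where(n > 0, m / n, 0.0)

# Cells visited by few trajectories are usually down-weighted; one common
# choice is a weight that ramps up with the endpoint count:
weight = np.clip(n / n.mean(), 0.0, 1.0)
pscf_weighted = pscf * weight
```

High weighted values flag likely source regions, such as the biomass-burning and Saharan-dust areas identified in the study.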

  18. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    PubMed

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
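Testing Zipf's law in data like these typically reduces to fitting a power law to rank-ordered sizes; the exponent is the negative slope of the log-log rank-size plot. A sketch on synthetic data constructed with a pure exponent of 1:

```python
import numpy as np

# Synthetic rank-size data following size ~ C / rank (Zipf exponent 1),
# standing in for, e.g., package sizes in a Linux distribution.
ranks = np.arange(1, 1001)
sizes = 1e4 / ranks

# Ordinary least squares on the log-log rank-size relation
slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
zipf_exponent = -slope    # ~1 for a pure Zipf law
```

Real data require care at the tails (small ranks dominate the fit), which is why empirical studies often use maximum-likelihood estimators instead of a plain regression.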

  19. PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources.

    PubMed

    Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa

    2015-01-01

    The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotype abnormalities associated with human diseases. At present, only a small fraction of human protein-coding genes have HPO annotations. However, researchers believe that a large portion of currently unannotated genes are related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work we demonstrate the performance advantage of the structured SVM approach, which was shown to be highly effective for Gene Ontology term prediction, in comparison to several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large-scale literature mining data.

  20. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-07-04

    This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and fuses sensor readings taken at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors were introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  1. Neuroimaging Evidence for Agenda-Dependent Monitoring of Different Features during Short-Term Source Memory Tests

    ERIC Educational Resources Information Center

    Mitchell, Karen J.; Raye, Carol L.; McGuire, Joseph T.; Frankel, Hillary; Greene, Erich J.; Johnson, Marcia K.

    2008-01-01

    A short-term source monitoring procedure with functional magnetic resonance imaging assessed neural activity when participants made judgments about the format of 1 of 4 studied items (picture, word), the encoding task performed (cost, place), or whether an item was old or new. The results support findings from long-term memory studies showing that…

  2. Using natural archives to track sources and long-term trends of pollution: an introduction

    USGS Publications Warehouse

    Jules Blais,; Rosen, Michael R.; John Smol,

    2015-01-01

    This book explores the myriad ways that environmental archives can be used to study the distribution and long-term trajectories of contaminants. The volume first focuses on reviews that examine the integrity of the historic record, including factors related to hydrology, post-depositional diffusion, and mixing processes. This is followed by a series of chapters dealing with the diverse archives available for long-term studies of environmental pollution.

  3. 9C spectral-index distributions and source-count estimates from 15 to 93 GHz - a re-assessment

    NASA Astrophysics Data System (ADS)

    Waldram, E. M.; Bolton, R. C.; Riley, J. M.; Pooley, G. G.

    2018-01-01

    In an earlier paper (2007), we used follow-up observations of a sample of sources from the 9C survey at 15.2 GHz to derive a set of spectral-index distributions up to a frequency of 90 GHz. These were based on simultaneous measurements made at 15.2 GHz with the Ryle telescope and at 22 and 43 GHz with the Karl G. Jansky Very Large Array (VLA). We used these distributions to make empirical estimates of source counts at 22, 30, 43, 70 and 90 GHz. In a later paper (2013), we took data at 15.7 GHz from the Arcminute Microkelvin Imager (AMI) and data at 93.2 GHz from the Combined Array for Research in Millimetre-wave Astronomy (CARMA) and estimated the source count at 93.2 GHz. In this paper, we re-examine the data used in both papers and now believe that the VLA flux densities we measured at 43 GHz were significantly in error, being on average only about 70 per cent of their correct values. Here, we present strong evidence for this conclusion and discuss the effect on the source-count estimates made in the 2007 paper. The source-count prediction in the 2013 paper is also revised. We make comparisons with spectral-index distributions and source counts from other telescopes, in particular with a recent deep 95 GHz source count measured by the South Pole Telescope. We investigate reasons for the problem of the low VLA 43-GHz values and find a number of possible contributory factors, but none is sufficient on its own to account for such a large deficit.
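The effect of the 43-GHz flux-density error on spectral indices can be quantified directly: with the convention S ∝ ν^α, a multiplicative error f in one flux density shifts a two-point index by ln(f)/ln(ν2/ν1). A sketch with hypothetical flux densities (only the ~0.7 factor and the two frequencies come from the text):

```python
import math

# Two-point spectral index, S proportional to nu**alpha.
nu1, nu2 = 15.2, 43.0          # GHz
S1, S2_true = 1.00, 0.60       # Jy; hypothetical source
S2_meas = 0.7 * S2_true        # 43-GHz scale low by a factor ~0.7

def spectral_index(S_a, S_b, nu_a, nu_b):
    return math.log(S_b / S_a) / math.log(nu_b / nu_a)

alpha_true = spectral_index(S1, S2_true, nu1, nu2)
alpha_meas = spectral_index(S1, S2_meas, nu1, nu2)
bias = alpha_meas - alpha_true   # ln(0.7)/ln(43/15.2), about -0.34
```

A systematic steepening of this size propagates directly into the extrapolated source counts, which is why the 2007 and 2013 estimates required revision.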

  4. Theoretical and measured electric field distributions within an annular phased array: consideration of source antennas.

    PubMed

    Zhang, Y; Joines, W T; Jirtle, R L; Samulski, T V

    1993-08-01

    The magnitude of E-field patterns generated by an annular array prototype device has been calculated and measured. Two models were used to describe the radiating sources: a simple linear dipole and a stripline antenna model. The stripline model includes detailed geometry of the actual antennas used in the prototype and an estimate of the antenna current based on microstrip transmission line theory. This more detailed model yields better agreement with the measured field patterns, reducing the rms discrepancy by a factor of about 6 (from approximately 23% to 4%) in the central region of interest where the SEM is within 25% of the maximum. We conclude that accurate modeling of source current distributions is important for determining SEM distributions associated with such heating devices.

  5. Clinical data integration of distributed data sources using Health Level Seven (HL7) v3-RIM mapping

    PubMed Central

    2011-01-01

    Background Health information exchange and health information integration have become top priorities for healthcare systems across institutions and hospitals. Most organizations implement health information exchange and integration in order to support meaningful information retrieval among their disparate healthcare systems. The challenges that prevent efficient health information integration for heterogeneous data sources are the lack of a common standard to support mapping across distributed data sources and the numerous and diverse healthcare domains. Health Level Seven (HL7) is a standards development organization whose technical committees develop the Reference Information Model (RIM), a standardized abstract representation of HL7 data across all the domains of health care. In this article, we aim to present a design and a prototype implementation of HL7 v3-RIM mapping for information integration of distributed clinical data sources. The implementation enables the user to retrieve and search information that has been integrated using HL7 v3-RIM technology from disparate health care systems. Method and results We designed and developed a prototype implementation of an HL7 v3-RIM mapping function to integrate distributed clinical data sources, using R-MIM classes from HL7 v3-RIM as a global view along with a collaborative, centralized web-based mapping tool to tackle the evolution of both global and local schemas. Our prototype was implemented and integrated with a clinical data management system (CDMS) as a plug-in module. We tested the prototype system with use case scenarios for distributed clinical data sources across several legacy CDMSs. The results have been effective in improving information delivery, completing tasks that would have been otherwise difficult to accomplish, and reducing the time required to finish tasks which are used in

  6. Inverse modeling of the Chernobyl source term using atmospheric concentration and deposition measurements

    NASA Astrophysics Data System (ADS)

    Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas

    2017-07-01

    This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the former Soviet Union and released about 10¹⁹ Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts were made to assess the magnitude of the emissions, based on knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations stem from a data rescue effort that started more than 10 years ago, with the final goal of providing the available measurements to anyone interested. Regarding our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50 % higher than previously published. Of the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both
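The Bayesian inversion described here can be sketched in its simplest linear-Gaussian form: observations y (concentrations and deposition) relate to the unknown emissions x through a source-receptor matrix M from the transport model, and the posterior mean pulls a prior estimate toward the data. Everything below (matrix, prior, covariances) is a toy illustration, not the study's actual setup:

```python
import numpy as np

# Linear observation model y = M x + error, with Gaussian prior x_a
# (covariance B) and observation covariance R. Toy dimensions: 2 unknown
# emission totals observed through 3 measurements.
M = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
x_true = np.array([80.0, 86.0])      # e.g. PBq of two radionuclides
y = M @ x_true                       # noise-free synthetic observations
x_a = np.array([60.0, 70.0])         # prior (bottom-up) emission estimate
B = np.eye(2) * 400.0                # prior covariance
R = np.eye(3) * 1.0                  # observation covariance

# Standard linear-Gaussian posterior mean (Kalman-type update)
K = B @ M.T @ np.linalg.inv(M @ B @ M.T + R)
x_post = x_a + K @ (y - M @ x_a)
```

With informative observations the posterior moves from the prior toward the true emissions; real source-term inversions differ mainly in scale and in enforcing non-negativity and temporal/vertical emission profiles.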

  7. Long distance measurement-device-independent quantum key distribution with entangled photon sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Feihu; Qi, Bing; Liao, Zhongfa

    2013-08-05

    We present a feasible method that can make quantum key distribution (QKD) both ultra-long-distance and immune to all attacks in the detection system. This method is called measurement-device-independent QKD (MDI-QKD) with entangled photon sources in the middle. By proposing a model and simulating a QKD experiment, we find that MDI-QKD with one entangled photon source can tolerate 77 dB loss (367 km of standard fiber) in the asymptotic limit and 60 dB loss (286 km of standard fiber) in the finite-key case with state-of-the-art detectors. Our general model can also be applied to other non-QKD experiments involving entanglement and Bell state measurements.
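The quoted loss budgets map to fiber lengths through the attenuation of standard single-mode fiber; the paper's own figures imply roughly 0.21 dB/km. A one-line conversion (the attenuation constant is an assumption inferred from the quoted numbers, not stated independently here):

```python
# Convert a tolerable channel loss budget (dB) into fiber length (km),
# assuming standard single-mode fiber attenuation of ~0.21 dB/km.
ATTEN_DB_PER_KM = 0.21

def max_fiber_km(loss_db):
    return loss_db / ATTEN_DB_PER_KM

asymptotic_km = max_fiber_km(77.0)   # asymptotic-limit budget
finite_key_km = max_fiber_km(60.0)   # finite-key budget
```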

  8. A well-balanced scheme for Ten-Moment Gaussian closure equations with source term

    NASA Astrophysics Data System (ADS)

    Meena, Asha Kumari; Kumar, Harish

    2018-02-01

    In this article, we consider the Ten-Moment equations with source term, which occurs in many applications related to plasma flows. We present a well-balanced second-order finite volume scheme. The scheme is well-balanced for general equation of state, provided we can write the hydrostatic solution as a function of the space variables. This is achieved by combining hydrostatic reconstruction with contact preserving, consistent numerical flux, and appropriate source discretization. Several numerical experiments are presented to demonstrate the well-balanced property and resulting accuracy of the proposed scheme.

  9. Wall-loss distribution of charge breeding ions in an electron cyclotron resonance ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, S. C.; Oyaizu, M.; Imai, N.

    2011-03-15

    The ion loss distribution in an electron cyclotron resonance ion source (ECRIS) was investigated to understand the element dependence of the charge breeding efficiency in an electron cyclotron resonance (ECR) charge breeder. The radioactive ¹¹¹In¹⁺ and ¹⁴⁰Xe¹⁺ ions (typical nonvolatile and volatile elements, respectively) were injected into the ECR charge breeder at the Tokai Radioactive Ion Accelerator Complex to breed their charge states. Their respective residual activities on the sidewall of the cylindrical plasma chamber of the source were measured after charge breeding as functions of the azimuthal angle and longitudinal position, and two-dimensional distributions of ions lost during charge breeding in the ECRIS were obtained. These distributions had different azimuthal symmetries. The origins of these different azimuthal symmetries are qualitatively discussed by analyzing the differences and similarities in the observed wall-loss patterns. The implications for improving the charge breeding efficiencies of nonvolatile elements in ECR charge breeders are described. The similarities represent universal ion loss characteristics in an ECR charge breeder, which are different from the loss patterns of electrons on the ECRIS wall.

  10. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are as low as 22 km in the south-west, Deccan-trap-covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
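Centroid-style DBMS estimates rest on a simple relation: the depth to the bottom of the sources follows from the centroid depth Z0 and the depth to the top Zt, both obtained from slopes of the radially averaged power spectrum (in the modified method, after first correcting the spectrum for the fractal scaling of sources). A sketch with purely illustrative depths:

```python
# Centroid relation for the depth to the bottom of magnetic sources:
#   Zb = 2*Z0 - Zt
# Z0 (centroid depth) comes from the low-wavenumber spectral slope and
# Zt (depth to top) from the higher-wavenumber slope; in the modified
# method the spectrum is first divided by k**(-beta) to remove the
# fractal source-distribution effect. Depths below are illustrative.
def dbms(z_centroid, z_top):
    return 2.0 * z_centroid - z_top

zb = dbms(z_centroid=14.0, z_top=5.0)   # 23 km, cf. the 22-43 km range
```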

  11. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
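The convex-combination idea behind DLSE can be illustrated with a plain consensus iteration: each node repeatedly replaces its estimate of the source location by a convex combination (weights summing to 1) of its own and its neighbours' estimates. The real DLSE method combines local least-squares fits of the plume model; the ring topology, equal weights, and initial guesses below are invented:

```python
import numpy as np

# Four nodes, each holding a local 2-D estimate of the source position.
estimates = np.array([[3.0, 4.0], [5.0, 2.0], [4.0, 6.0], [6.0, 4.0]])

# Ring topology: each node averages itself with its two neighbours,
# i.e. a convex combination with weights (1/3, 1/3, 1/3).
n = len(estimates)
for _ in range(100):
    new = np.empty_like(estimates)
    for i in range(n):
        new[i] = (estimates[i] + estimates[(i - 1) % n]
                  + estimates[(i + 1) % n]) / 3.0
    estimates = new
```

Because the averaging weights form a doubly stochastic matrix, all nodes converge to the mean of the initial estimates; in DLSE this consensus mechanism is what lets nodes escape local minima that would trap a purely local search.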

  12. Isotopes, Inventories and Seasonality: Unraveling Methane Source Distribution in the Complex Landscapes of the United Kingdom.

    NASA Astrophysics Data System (ADS)

    Lowry, D.; Fisher, R. E.; Zazzeri, G.; Lanoisellé, M.; France, J.; Allen, G.; Nisbet, E. G.

    2017-12-01

    Unlike the big open landscapes of many continents, with large area sources dominated by one particular methane emission type that can be isotopically characterized by flight measurements and sampling, the complex patchwork of urban, fossil and agricultural methane sources across NW Europe requires detailed ground surveys for characterization (Zazzeri et al., 2017). Here we outline the findings from multiple seasonal urban and rural measurement campaigns in the United Kingdom. These surveys aim to: 1) assess source distribution and baseline in regions of planned fracking, and relate these to on-site continuous baseline climatology; 2) characterize spatial and seasonal differences in the isotopic signatures of the UNFCCC source categories; and 3) assess the spatial validity of the 1 km × 1 km UK inventory for large continuous emitters, proposed point sources, and seasonal/ephemeral emissions. The UK inventory suggests that 90% of methane emissions come from 3 source categories: ruminants, landfill and gas distribution. Bag sampling and GC-IRMS δ13C analysis shows that landfill gives a constant signature of -57 ± 3 ‰ throughout the year. Fugitive gas emissions are consistent regionally, depending on the North Sea supply regions feeding the network (-41 ± 2 ‰ in N England, -37 ± 2 ‰ in SE England). Ruminant, mostly cattle, emissions are far more complex, as the animals spend winters in barns and summers in fields, but are essentially a mix of 2 end members, breath at -68 ± 3 ‰ and manure at -51 ± 3 ‰, resulting in broad summer field emission plumes of -64 ‰ and point winter barn emission plumes of -58 ‰. The inventory correctly locates emission hotspots from landfill, larger sewage treatment plants and gas compressor stations, giving a broad overview of emission distribution for regional model validation. Mobile surveys are adding an extra layer of detail to this which, combined with isotopic characterization, has identified spatial distribution of gas pipe leaks
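
    The ruminant plume signatures quoted above reduce to a one-line two-end-member mass balance. A sketch using the survey's quoted end-member values, treating the ±3 ‰ uncertainties as negligible for illustration:

```python
def breath_fraction(delta_mix, delta_breath=-68.0, delta_manure=-51.0):
    """Fraction of ruminant CH4 attributable to breath, from a simple
    two-end-member delta13C mixing balance:
    delta_mix = f * delta_breath + (1 - f) * delta_manure."""
    return (delta_mix - delta_manure) / (delta_breath - delta_manure)

# summer field plumes (-64 permil) are breath-dominated, while winter
# barn plumes (-58 permil) carry a larger manure share
summer = breath_fraction(-64.0)   # ~0.76 breath
winter = breath_fraction(-58.0)   # ~0.41 breath
```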

  13. DISTRIBUTION, TYPE, ACCUMULATION AND SOURCE OF MARINE DEBRIS IN THE UNITED STATES, 1989-93

    EPA Science Inventory

    Distribution, type, accumulation, & source of marine debris on coastal beaches and in harbors of the United States were examined from 1989 to 1993. Information was compiled from annual beach cleanups coordinated by the Center for Marine Conservation, quarterly beach surveys at eig...

  14. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was
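
    The rescaling trick the abstract describes (divide an exponential factor out of the partial sum whenever overflow threatens, then restore it at the end) can be sketched in a few lines. This is a reconstruction of the idea, not the original C source:

```python
import math

def cumpois(n, lam):
    """P(X <= n) for X ~ Poisson(lam): sum the terms lam^i / i!
    with periodic rescaling, then restore exp(-lam) and the
    accumulated scale factor in log space."""
    term = 1.0        # lam^0 / 0!
    total = 1.0
    log_scale = 0.0   # log of the factor divided out so far
    for i in range(1, n + 1):
        term *= lam / i
        total += term
        if total > 1e300:           # overflow threatened: rescale
            term *= 1e-300
            total *= 1e-300
            log_scale += math.log(1e300)
    return math.exp(math.log(total) + log_scale - lam)
```

    A naive summation of exp(-lam) * lam^i / i! underflows at the very first term once lam exceeds about 745; the rescaled form stays finite for lambda and n far beyond that.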

  15. Distributional patterns of arsenic concentrations in contaminant plumes offer clues to the source of arsenic in groundwater at landfills

    USGS Publications Warehouse

    Harte, Philip T.

    2015-01-01

    The distributional pattern of dissolved arsenic concentrations from landfill plumes can provide clues to the source of arsenic contamination. Under simple idealized conditions, arsenic concentrations along flow paths in aquifers proximal to a landfill will decrease under anthropogenic sources but potentially increase under in situ sources. This paper presents several conceptual distributional patterns of arsenic in groundwater based on the arsenic source under idealized conditions. An example of advanced subsurface mapping of dissolved arsenic with geophysical surveys, chemical monitoring, and redox fingerprinting is presented for a landfill site in New Hampshire with a complex flow pattern. Tools to assist in the mapping of arsenic in groundwater ultimately provide information on the source of contamination. Once an understanding of the arsenic contamination is achieved, appropriate remedial strategies can then be formulated.

  16. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.

  17. Analysis of streamflow distribution of non-point source nitrogen export from long-term urban-rural catchments to guide watershed management in the Chesapeake Bay watershed

    NASA Astrophysics Data System (ADS)

    Duncan, J. M.; Band, L. E.; Groffman, P.

    2017-12-01

    Discharge, land use, and watershed management practices (stream restoration and stormwater control measures) have been found to be important determinants of nitrogen (N) export to receiving waters. We used long-term water quality stations from the Baltimore Ecosystem Study Long-Term Ecological Research (BES LTER) Site to quantify nitrogen export across streamflow conditions at the small watershed scale. We calculated nitrate and total nitrogen fluxes using a methodology that allows for changes over time: weighted regressions on time, discharge, and seasonality. Here we tested the hypotheses that a) while the largest N stream fluxes occur during storm events, there is no clear relationship between N flux and discharge, and b) N export patterns are aseasonal in developed watersheds, where sources are larger and retention capacity is lower. The goal is to scale understanding from small watersheds to larger ones. Developing a better understanding of hydrologic controls on nitrogen export is essential for successful adaptive watershed management at societally meaningful spatial scales.
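
    The cited methodology is the WRTDS family (weighted regressions on time, discharge, and season). A minimal sketch of the core locally weighted fit, assuming tricube weighting in time and log-discharge and the standard five-term model; the half-window widths ht and hq are illustrative, and the full method also weights on season:

```python
import numpy as np

def wrtds_fit(t, logq, logc, t0, logq0, ht=7.0, hq=2.0):
    """Locally weighted regression of ln(concentration) on time,
    ln(discharge), and annual seasonal terms, centered on the
    estimation point (t0, logq0). Returns the five coefficients."""
    def tricube(d, h):
        u = np.minimum(np.abs(d) / h, 1.0)
        return (1.0 - u**3) ** 3
    # down-weight observations far from the estimation point
    w = tricube(t - t0, ht) * tricube(logq - logq0, hq)
    X = np.column_stack([np.ones_like(t), t, logq,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], logc * sw, rcond=None)
    return beta
```

    Refitting at every estimation point is what lets the regression surface, and hence the inferred flux, change over time rather than assuming fixed coefficients.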

  18. SU-F-T-24: Impact of Source Position and Dose Distribution Due to Curvature of HDR Transfer Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, A; Yue, N

    2016-06-15

    Purpose: Brachytherapy is a highly targeted form of radiotherapy. While this may lead to ideal dose distributions on the treatment planning system, a small error in source location can lead to a change in the dose distribution. The purpose of this study is to quantify the source position error due to curvature of the transfer tubes and the impact this may have on the dose distribution. Methods: Since the source travels along the midline of the tube, an estimate of the positioning error for various angles of curvature was determined using geometric properties of the tube. Based on the range of values, a specific shift was chosen to alter the treatment plans for a number of cervical cancer patients who had undergone HDR brachytherapy boost using tandem and ovoids. Impact of dose to target and organs at risk was determined and checked against guidelines outlined by the radiation oncologist. Results: The estimate of the positioning error was 2 mm short of the expected position (the curved tube can only cause the source to not reach as far as with a flat tube). The quantitative impact on the dose distribution is still being analyzed. Conclusion: The accepted positioning tolerance for the source position of an HDR brachytherapy unit is plus or minus 1 mm. If there is an additional 2 mm discrepancy due to tube curvature, this can result in a source being 1 mm to 3 mm short of the expected location. While we do always attempt to keep the tubes straight, in some cases such as with tandem and ovoids, the tandem connector does not extend as far out from the patient, so the ovoid tubes always contain some degree of curvature. The dose impact of this may be significant.

  19. Long-term trends in California mobile source emissions and ambient concentrations of black carbon and organic aerosol.

    PubMed

    McDonald, Brian C; Goldstein, Allen H; Harley, Robert A

    2015-04-21

    A fuel-based approach is used to assess long-term trends (1970-2010) in mobile source emissions of black carbon (BC) and organic aerosol (OA, including both primary emissions and secondary formation). The main focus of this analysis is the Los Angeles Basin, where a long record of measurements is available to infer trends in ambient concentrations of BC and organic carbon (OC), with OC used here as a proxy for OA. Mobile source emissions and ambient concentrations have decreased similarly, reflecting the importance of on- and off-road engines as sources of BC and OA in urban areas. In 1970, the on-road sector accounted for ∼90% of total mobile source emissions of BC and OA (primary + secondary). Over time, as on-road engine emissions have been controlled, the relative importance of off-road sources has grown. By 2010, off-road engines were estimated to account for 37 ± 20% and 45 ± 16% of total mobile source contributions to BC and OA, respectively, in the Los Angeles area. This study highlights both the success of efforts to control on-road emission sources, and the importance of considering off-road engine and other VOC source contributions when assessing long-term emission and ambient air quality trends.

  20. Long-Term Stability of the NIST Standard Ultrasonic Source.

    PubMed

    Fick, Steven E

    2008-01-01

    The National Institute of Standards and Technology (NIST) Standard Ultrasonic Source (SUS) is a system comprising a transducer capable of output power levels up to 1 W at multiple frequencies between 1 MHz and 30 MHz, and an electrical impedance-matching network that allows the system to be driven by a conventional 50 Ω rf (radio-frequency) source. It is designed to allow interlaboratory replication of ultrasonic power levels with high accuracy using inexpensive readily available ancillary equipment. The SUS was offered for sale for 14 years (1985 to 1999). Each system was furnished with data for the set of calibration points (combinations of power level and frequency) specified by the customer. Of the systems that had been ordered with some calibration points in common, three were returned more than once to NIST for recalibration. Another system retained at NIST has been recalibrated periodically since 1984. The collective data for these systems comprise 9 calibration points and 102 measurements spanning a 17 year interval ending in 2001, the last year NIST ultrasonic power measurement services were available to the public. These data have been analyzed to compare variations in output power with frequency, power level, and time elapsed since the first calibration. The results verify the claim, made in the instruction sheet furnished with every SUS, that "long-term drift, if any, in the calibration of NIST Standard Sources is insignificant compared to the uncertainties associated with a single measurement of ultrasonic power by any method available at NIST."

  1. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    PubMed

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures a high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free-space quantum key distribution experiment which makes use of the decoy-state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻², while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization-encoded QKD system reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.

  2. Long-term variability in bright hard X-ray sources: 5+ years of BATSE data

    NASA Technical Reports Server (NTRS)

    Robinson, C. R.; Harmon, B. A.; McCollough, M. L.; Paciesas, W. S.; Sahi, M.; Scott, D. M.; Wilson, C. A.; Zhang, S. N.; Deal, K. J.

    1997-01-01

    The operation of the Compton Gamma Ray Observatory (CGRO) Burst and Transient Source Experiment (BATSE) continues to provide data for inclusion into a data base for the analysis of long-term variability in bright, hard X-ray sources. The all-sky capability of BATSE provides up to 30 flux measurements/day for each source. The long baseline and the various rising and setting occultation flux measurements allow searches for periodic and quasi-periodic signals with periods of between several hours and hundreds of days to be conducted. The preliminary results from an analysis of the hard X-ray variability in 24 of the brightest BATSE sources are presented. Power density spectra are computed for each source, and profiles are presented of the hard X-ray orbital modulations in some X-ray binaries, together with amplitude modulations and variations in outburst durations and intensities in recurrent X-ray transients.

  3. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of the practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. Besides, we adopt this model to carry out simulations of two widely used sources: weak coherent source (WCS) and heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. And when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffered less than that of decoy-state QKD using WCS.

  4. GEOCHEMISTRY OF PAHS IN AQUATIC ENVIRONMENTS: A SYNTHESIS OF DISTRIBUTION, SOURCE, PERSISTENCE, PARTITIONING AND BIOAVAILABILITY

    EPA Science Inventory

    On the basis of their distributions, sources, persistence, partitioning and bioavailability, polycyclic aromatic hydrocarbons (PAHs) are a unique class of persistent organic pollutants (POPs) contaminating the aquatic environment. They are of particular interest to geochemists an...

  5. Surface ozone in China: present-day distribution and long-term changes

    NASA Astrophysics Data System (ADS)

    Xu, X.; Lin, W.; Xu, W.

    2017-12-01

    Reliable knowledge of spatio-temporal variations of surface ozone is highly needed to assess the impacts of ozone on human health, ecosystem and climate. Although regional distributions and trends of surface ozone in European and North American countries have been well characterized, little is known about the variability of surface ozone in many other countries, including China, where emissions of ozone precursors have been changing rapidly in recent decades. Here we present the first comprehensive description of present-day (2013-2017) distribution and long-term changes of surface ozone in mainland China. Recent ozone measurements from China's air quality monitoring network (AQMN) are analyzed to show present-day distributions of a few ozone exposure metrics for urban environment. Long-term measurements of ozone at six background sites, a rural site and an urban site are used to study the trends of ozone in background, rural and urban air, respectively. The average levels of ozone at the AQMN sites (mainly urban) are close to those found at many European and North American sites. However, ozone at most of the sites shows very large diurnal and seasonal variations, so that ozone nonattainment can occur in many cities, particularly those in the North China Plain (NCP), the south of Northeast China (NEC), the Yangtze River Delta (YRD), the Pearl River Delta (PRD), and the Sichuan Basin-Chongqing region (SCB). In all these regions, particularly in the NCP, the maximum daily 8-h average (MDA8) ozone concentration can significantly exceed the national limit (75 ppb). High annual sums of ozone means over 35 ppb (SOMO35) exist mainly in the NCP, NEC and YRD, with regional averages over 4000 ppb·d. Surface ozone has significantly increased at Waliguan (a baseline site in western China) and Shangdianzi (a background site in the NCP), and decreased in winter and spring at Longfengshan (a background site in Northeast China). No clear trend can be derived from long-term measurements

  6. Distribution of Practice and Metacognition in Learning and Long-Term Retention of a Discrete Motor Task

    ERIC Educational Resources Information Center

    Dail, Teresa K.; Christina, Robert W.

    2004-01-01

    This study examined judgments of learning and the long-term retention of a discrete motor task (golf putting) as a function of practice distribution. The results indicated that participants in the distributed practice group performed more proficiently than those in the massed practice group during both acquisition and retention phases. No…

  7. Simultaneous reconstruction of emission activity and attenuation coefficient distribution from TOF data, acquired with external transmission source

    NASA Astrophysics Data System (ADS)

    Panin, V. Y.; Aykac, M.; Casey, M. E.

    2013-06-01

    The simultaneous PET data reconstruction of emission activity and attenuation coefficient distribution is presented, where the attenuation image is constrained by exploiting an external transmission source. Data are acquired in time-of-flight (TOF) mode, allowing in principle for separation of emission and transmission data. Nevertheless, here all data are reconstructed at once, eliminating the need to trace the position of the transmission source in sinogram space. Contamination of emission data by the transmission source and vice versa is naturally modeled. Attenuated emission activity data also provide additional information about object attenuation coefficient values. The algorithm alternates between attenuation and emission activity image updates. We also proposed a method of estimation of spatial scatter distribution from the transmission source by incorporating knowledge about the expected range of attenuation map values. The reconstruction of experimental data from the Siemens mCT scanner suggests that simultaneous reconstruction improves attenuation map image quality, as compared to when data are separated. In the presented example, the attenuation map image noise was reduced and non-uniformity artifacts that occurred due to scatter estimation were suppressed. On the other hand, the use of transmission data stabilizes attenuation coefficient distribution reconstruction from TOF emission data alone. The example of improving emission images by refining a CT-based patient attenuation map is presented, revealing potential benefits of simultaneous CT and PET data reconstruction.

  8. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    NASA Astrophysics Data System (ADS)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy to characterize greenhouse gas distribution and sources in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in local- to regional-scale air quality intensive campaigns, we built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) were installed securely onboard a 2006 Toyota Prius Hybrid vehicle, with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple-species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another installed on the PFP kept track of our location, allowing us to map measured concentrations on the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering areas from Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2 and CO, and destruction of ozone, mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor ...) or with larger-scale air masses advected from NE Colorado, where oil and gas drilling operations are widespread. The multiple-species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way brings insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives

  9. Identifying (subsurface) anthropogenic heat sources that influence temperature in the drinking water distribution system

    NASA Astrophysics Data System (ADS)

    Agudelo-Vera, Claudia M.; Blokker, Mirjam; de Kater, Henk; Lafort, Rob

    2017-09-01

    The water temperature in the drinking water distribution system and at customers' taps approaches the surrounding soil temperature at a depth of 1 m. Water temperature is an important determinant of water quality. In the Netherlands drinking water is distributed without additional residual disinfectant and the temperature of drinking water at customers' taps is not allowed to exceed 25 °C. In recent decades, the urban (sub)surface has been getting more occupied by various types of infrastructures, and some of these can be heat sources. Only recently have the anthropogenic sources and their influence on the underground been studied on coarse spatial scales. Little is known about the urban shallow underground heat profile on small spatial scales, of the order of 10 m × 10 m. Routine water quality samples at the tap in urban areas have shown up locations - so-called hotspots - in the city, with relatively high soil temperatures - up to 7 °C warmer - compared to the soil temperatures in the surrounding rural areas. Yet the sources and the locations of these hotspots have not been identified. It is expected that with climate change during a warm summer the soil temperature in the hotspots can be above 25 °C. The objective of this paper is to find a method to identify heat sources and urban characteristics that locally influence the soil temperature. The proposed method combines mapping of urban anthropogenic heat sources, retrospective modelling of the soil temperature, analysis of water temperature measurements at the tap, and extensive soil temperature measurements. This approach provided insight into the typical range of the variation of the urban soil temperature, and it is a first step to identifying areas with potential underground heat stress towards thermal underground management in cities.

  10. Chemotaxis Increases the Residence Time Distribution of Bacteria in Granular Media Containing Distributed Contaminant Sources

    NASA Astrophysics Data System (ADS)

    Adadevoh, J.; Triolo, S.; Ramsburg, C. A.; Ford, R.

    2015-12-01

    The use of chemotactic bacteria in bioremediation has the potential to increase access to, and biotransformation of, contaminant mass within the subsurface environment. This laboratory-scale study aimed to understand and quantify the influence of chemotaxis on residence times of pollutant-degrading bacteria within homogeneous treatment zones. Focus was placed on a continuous flow sand-packed column system in which a uniform distribution of naphthalene crystals created distributed sources of dissolved phase contaminant. A 10 mL pulse of Pseudomonas putida G7, which is chemotactic to naphthalene, and Pseudomonas putida G7 Y1, a non-chemotactic mutant strain, were simultaneously introduced into the sand-packed column at equal concentrations. Breakthrough curves obtained for the bacteria from column experiments conducted with and without naphthalene were used to quantify the effect of chemotaxis on transport parameters. In the presence of the chemoattractant, longitudinal dispersivity of PpG7 increased by a factor of 3 and percent recovery decreased from 21% to 12%. The results imply that pore-scale chemotaxis responses are evident at an interstitial fluid velocity of 1.7 m/d, which is within the range of typical groundwater flow. Within the context of bioremediation, chemotaxis may work to enhance bacterial residence times in zones of contamination thereby improving treatment.

  11. Finite-key security analysis of quantum key distribution with imperfect light sources

    DOE PAGES

    Mizutani, Akihiro; Curty, Marcos; Lim, Charles Ci Wen; ...

    2015-09-09

    In recent years, the gap between theory and practice in quantum key distribution (QKD) has been significantly narrowed, particularly for QKD systems with arbitrarily flawed optical receivers. The status for QKD systems with imperfect light sources is however less satisfactory, in the sense that the resulting secure key rates are often overly dependent on the quality of state preparation. This is especially the case when the channel loss is high. Very recently, to overcome this limitation, Tamaki et al proposed a QKD protocol based on the so-called 'rejected data analysis', and showed that its security in the limit of infinitely long keys is almost independent of any encoding flaw in the qubit space, with this protocol being compatible with the decoy-state method. Here, as a step towards practical QKD, we show that a similar conclusion is reached in the finite-key regime, even when the intensity of the light source is unstable. More concretely, we derive security bounds for a wide class of realistic light sources and show that the bounds are also efficient in the presence of high channel loss. Our results strongly suggest the feasibility of long-distance provably secure communication with imperfect light sources.

  12. Lead concentration distribution and source tracing of urban/suburban aquatic sediments in two typical famous tourist cities: Haikou and Sanya, China.

    PubMed

    Dong, Zhicheng; Bao, Zhengyu; Wu, Guoai; Fu, Yangrong; Yang, Yi

    2010-11-01

    The content and spatial distribution of lead in the aquatic systems of two Chinese tropical cities in Hainan province (Haikou and Sanya) show an unequal distribution of lead between the urban and the suburban areas. The lead content is significantly higher in the urban area than in the suburbs of Haikou (72.3 versus 15.0 mg/kg), but roughly equal in Sanya (41.6 and 43.9 mg/kg). The frequency distribution histograms suggest that the lead in Haikou and in Sanya derives from different natural and/or anthropogenic sources. The isotopic compositions indicate that urban sediment lead in Haikou originates mainly from anthropogenic sources (automobile exhaust, atmospheric deposition, etc.), which contribute much more than the natural sources, while natural lead (basalt and sea sands) is still dominant in the suburban areas of Haikou. In Sanya, the primary source is natural (soils and sea sands).

  13. Long-Term Temporal Trends of Polychlorinated Biphenyls and Their Controlling Sources in China.

    PubMed

    Zhao, Shizhen; Breivik, Knut; Liu, Guorui; Zheng, Minghui; Jones, Kevin C; Sweetman, Andrew J

    2017-03-07

    Polychlorinated biphenyls (PCBs) are industrial organic contaminants identified as persistent, bioaccumulative, toxic (PBT), and subject to long-range transport (LRT) with global scale significance. This study focuses on a reconstruction and prediction for China of long-term emission trends of intentionally produced (IP) and unintentionally produced (UP) ∑ 7 PCBs (UP-PCBs arising from the manufacture of steel, cement and sinter iron) and their re-emissions from secondary sources (e.g., soils and vegetation) using a dynamic fate model (BETR-Global). Contemporary emission estimates combined with predictions from the multimedia fate model suggest that primary sources still dominate, although unintentional sources are predicted to become a main contributor for PCB-28 from 2035. Imported e-waste is predicted to play an increasing role until 2020-2030 on a national scale due to the decline of IP emissions. Hypothetical emission scenarios suggest that China could become a potential source to neighboring regions with a net output of ∼0.4 t year -1 by around 2050. However, future emission scenarios and hence model results will be dictated by the efficiency of control measures.

  14. Distribution, sources, and potential toxicological significance of PAHs in drinking water sources within the Pearl River Delta.

    PubMed

    An, Taicheng; Qiao, Meng; Li, Guiying; Sun, Hongwei; Zeng, Xiangying; Fu, Jiamo

    2011-05-01

    The Pearl River Delta (PRD) region is one of the most densely populated areas in China. The safety of its drinking source water is essential to human health. Polycyclic aromatic hydrocarbons (PAHs) have attracted attention from the scientific community and the general public due to their toxicity and wide distribution in the global environment. In this work, PAH pollution levels in the drinking source water of nine main cities within the PRD were investigated. ∑15 PAHs concentrations during the wet season varied from 32.0 to 754.8 ng L(-1) in the dissolved phase, and from 13.4 to 3017.8 ng L(-1) in the particulate phase. During the dry season, dissolved PAHs ranged from 48.1 to 113.6 ng L(-1), and particulate PAHs from 8.6 to 69.6 ng L(-1). Overall, ∑15 PAHs concentrations were extremely high at the XC and ZHQ stations during the wet season in 2008 and 2009. In most sites, PAHs originated from mixed sources. Hazard ratios based on non-cancerous and cancerous risks were markedly higher in XC than at the other sites during the wet season, though they remained well below 1. Nevertheless, risks caused by the combined toxicity of ∑15 PAHs and other organics should be seriously considered. PAHs toxic equivalent quantities ranged from 0.508 to 177.077 ng L(-1).

  15. Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments from the Neuquen River, Argentine Patagonia.

    PubMed

    Monza, Liliana B; Loewy, Ruth M; Savini, Mónica C; Pechen de d'Angelo, Ana M

    2013-01-01

    The spatial distribution and probable sources of aliphatic and polyaromatic hydrocarbons (AHs, PAHs) were investigated in surface sediments collected along the bank of the Neuquen River, Argentina. Total concentrations of aliphatic hydrocarbons ranged between 0.41 and 125 μg/g dw. Six stations presented low values of resolved aliphatic hydrocarbons, and the n-alkane distribution indexes applied suggested a clear biogenic source. These values can be considered the baseline levels of aliphatic hydrocarbons for the river sediments. This constitutes important information for the assessment of future impacts, since the exploitation of shale gas and shale oil in these zones is currently undergoing a strong expansion. For the other 11 stations, a mixture of aliphatic hydrocarbons of petrogenic and biogenic origin was observed. The spatial distribution reflects local inputs of these pollutants, with a significant increase in concentrations in the lower course, where two major cities are located. The highest values of total aliphatic hydrocarbons were found in this sector which, in turn, was the only one where individual PAHs were detected.

  16. Organophosphate ester flame retardants in Nepalese soil: Spatial distribution, source apportionment and air-soil exchange assessment.

    PubMed

    Yadav, Ishwar Chandra; Devi, Ningombam Linthoingambi; Li, Jun; Zhang, Gan

    2018-01-01

    Despite soil being the major terrestrial environmental reservoir and one of the significant sinks for many hydrophobic organic compounds, including organophosphate ester flame retardants (OPFRs), limited information is available about the concentration and fate of OPFR contamination in urban soil in general, and especially in the case of Nepal. This study investigates the environmental concentration, spatial distribution and source apportionment of eight OPFRs in surface soil (n = 28) from four major cities of Nepal, with special interest in air-soil exchange. Overall, significantly high concentrations of ∑ 8 OPFRs were measured in soil, ranging from 25 to 27,900 ng/g dw (median 248 ng/g dw). In terms of compositional pattern, tris(methyl phenyl) phosphate (TMPP) was the most abundant phosphorus chemical in soil, followed by tris(2-chloroisopropyl) phosphate (TCIPP); they accounted for 35-49% and 8-25% of ∑ 8 OPFRs, respectively. The high level of these OPFRs was attributed to local sources as opposed to transboundary influence from remote areas. A Spearman's rank correlation analysis showed weak correlations of ∑ 8 OPFRs with TOC (Rho = 0.117, p < 0.05) and BC (Rho = 0.007, p < 0.05), suggesting little or no influence of TOC and BC on the concentration of ∑ 8 OPFRs. The fugacity fraction (ff) results indicated a strong influence of soil contamination on the atmospheric level of OPFRs via volatilization. Copyright © 2017 Elsevier Ltd. All rights reserved.
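    The fugacity-fraction criterion used above can be sketched in a few lines. The fugacity values below are hypothetical, chosen only to illustrate the decision rule, not data from the study:

```python
# Fugacity fraction ff = f_soil / (f_soil + f_air): ff near 1 indicates net
# volatilization from soil to air, ff near 0 net deposition, and ff ~ 0.5
# approximate air-soil equilibrium (fugacities below are hypothetical, in Pa).
def fugacity_fraction(f_soil, f_air):
    return f_soil / (f_soil + f_air)

ff = fugacity_fraction(2.4e-9, 0.6e-9)
net_volatilization = ff > 0.5
```

    Here ff = 0.8, so the soil would act as a secondary source to the atmosphere, consistent with the volatilization behaviour reported above.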

  17. Neutron Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The combined simulation and experiment serve to build local experience with, and confidence in, the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated with MCNP-4B. The fast and thermal neutron fluence rates obtained by the NAA method and by the MCNP code are then compared.
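    The NAA step can be sketched as follows: the thermal fluence rate follows from the measured 198Au activity, the number of 197Au atoms in the foil, and a saturation correction for the irradiation time. The activity, foil mass, and irradiation time below are invented; the cross-section and half-life are standard nuclear data:

```python
import math

# Infer thermal neutron fluence rate from gold-foil activation:
# A = phi * N * sigma * (1 - exp(-lambda * t_irr)), solved for phi.
SIGMA_CM2 = 98.65e-24          # 197Au thermal (n,gamma) cross-section [cm^2]
HALF_LIFE_S = 2.695 * 86400    # 198Au half-life [s]
DECAY = math.log(2) / HALF_LIFE_S

def thermal_flux(activity_bq, n_atoms, t_irr_s):
    """Thermal fluence rate [n/cm^2/s] from end-of-irradiation activity."""
    saturation = 1.0 - math.exp(-DECAY * t_irr_s)
    return activity_bq / (n_atoms * SIGMA_CM2 * saturation)

# Hypothetical example: 10 mg gold foil, 24 h irradiation, 50 Bq measured.
n_au = 0.010 * 6.022e23 / 196.97   # atoms = mass * N_A / molar mass
flux = thermal_flux(50.0, n_au, 24 * 3600.0)
```

    A real analysis would also correct for decay between irradiation and counting, foil self-shielding, and the epithermal contribution; this sketch keeps only the saturation term.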

  18. Feature-Based Visual Short-Term Memory Is Widely Distributed and Hierarchically Organized.

    PubMed

    Dotson, Nicholas M; Hoffman, Steven J; Goodell, Baldwin; Gray, Charles M

    2018-06-15

    Feature-based visual short-term memory is known to engage both sensory and association cortices. However, the extent of the participating circuit and the neural mechanisms underlying memory maintenance are still a matter of vigorous debate. To address these questions, we recorded neuronal activity from 42 cortical areas in monkeys performing a feature-based visual short-term memory task and an interleaved fixation task. We find that task-dependent differences in firing rates are widely distributed throughout the cortex, while stimulus-specific changes in firing rates are more restricted and hierarchically organized. We also show that microsaccades during the memory delay encode the stimuli held in memory and that units modulated by microsaccades are more likely to exhibit stimulus specificity, suggesting that eye movements contribute to visual short-term memory processes. These results support a framework in which most cortical areas, within a modality, contribute to mnemonic representations at timescales that increase along the cortical hierarchy. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. The effect of the charge exchange source on the velocity and 'temperature' distributions and their anisotropies in the earth's exosphere

    NASA Technical Reports Server (NTRS)

    Hodges, R. R., Jr.; Rohrbaugh, R. P.; Tinsley, B. A.

    1981-01-01

    The velocity distribution of atomic hydrogen in the earth's exosphere is calculated as a function of altitude and direction, taking into account both the classical exobase source and the higher-altitude plasmaspheric charge exchange source. Calculations are performed with a Monte Carlo technique in which random ballistic trajectories of individual atoms are traced through a three-dimensional grid of audit zones, at which relative concentrations and momentum or energy fluxes are obtained. In the case of the classical exobase source alone, the slope of the velocity distribution is constant only for the upward radial velocity component and increases dramatically with altitude for the incoming radial and transverse velocity components, resulting in a temperature decrease. The charge exchange source, which produces the satellite hydrogen component and the hot ballistic and escape components of the exosphere, is found to enhance the wings of the velocity distributions; however, this effect is not sufficient to overcome the temperature decreases at altitudes above one earth radius. The resulting global model of the hydrogen exosphere may be used as a realistic basis for radiative transfer calculations.

  20. Qualitative analysis of precipitation distribution in Poland using different data sources

    NASA Astrophysics Data System (ADS)

    Walawender, J.; Dyras, I.; Łapeta, B.; Serafin-Rek, D.; Twardowski, A.

    2008-04-01

    Geographical Information Systems (GIS) can be used to integrate data from different sources and in different formats to perform innovative spatial and temporal analyses. GIS can also be applied in climatic research to manage, investigate and display all kinds of weather data. The main objective of this study is to demonstrate that GIS is a useful tool for examining and visualising the precipitation distribution obtained from different data sources: ground measurements, satellite and radar data. Three selected days (30 cases) with convective rainfall situations were analysed. First, a scalable GRID-based approach was applied to store the data from the three sources in a comparable layout. Then, a geoprocessing algorithm was created within the ArcGIS 9.2 environment. The algorithm included GRID definition, reclassification and raster algebra. All of the calculations and procedures were performed automatically. Finally, contingency tables and pie charts were created to show the relationship between ground measurements and both satellite- and radar-derived data. The results were visualised on maps.
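    The reclassification and contingency-table step can be sketched without GIS software. The grids, class breaks, and cell values below are invented stand-ins for the co-registered gauge and radar rasters:

```python
from collections import Counter

def reclassify(grid, breaks):
    """Map each cell value to a class index using ascending threshold breaks."""
    def klass(v):
        for i, b in enumerate(breaks):
            if v < b:
                return i
        return len(breaks)
    return [[klass(v) for v in row] for row in grid]

def contingency(grid_a, grid_b):
    """Cross-tabulate co-located cell classes from two equally sized grids."""
    pairs = Counter()
    for row_a, row_b in zip(grid_a, grid_b):
        for a, b in zip(row_a, row_b):
            pairs[(a, b)] += 1
    return pairs

gauge = [[0.0, 2.5], [7.0, 12.0]]   # hypothetical rainfall grids [mm/h]
radar = [[0.5, 1.0], [8.0, 20.0]]
breaks = [1.0, 5.0, 10.0]           # illustrative class boundaries
table = contingency(reclassify(gauge, breaks), reclassify(radar, breaks))
```

    Diagonal entries of `table` count cells where the two data sources agree on the rainfall class; off-diagonal entries expose systematic over- or under-estimation by one source.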

  1. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process between the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained from the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. It is therefore important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed from the time-averaged flow field simulated by the LES approach under the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
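    The Bayesian estimation step can be sketched with a hypothetical source-receptor matrix standing in for the adjoint-equation solution (all numbers below are invented): each candidate source location is scored by a Gaussian likelihood of the sensor readings, and Bayes' rule with a uniform prior yields a posterior over locations.

```python
import math

# 'srm[i][j]' plays the role of the source-receptor relationship: predicted
# concentration at sensor j for a unit release at candidate location i.
srm = [
    [0.9, 0.1, 0.0],   # candidate location 0
    [0.3, 0.6, 0.2],   # candidate location 1
    [0.0, 0.2, 0.8],   # candidate location 2
]
observed = [0.7, 1.2, 0.4]   # sensor readings
SIGMA = 0.3                  # assumed Gaussian measurement noise (std dev)
Q = 2.0                      # candidate release strength

def log_likelihood(predicted, obs):
    return sum(-(p - o) ** 2 / (2 * SIGMA ** 2) for p, o in zip(predicted, obs))

logp = [log_likelihood([Q * c for c in row], observed) for row in srm]
peak = max(logp)                               # subtract max for stability
weights = [math.exp(l - peak) for l in logp]   # uniform prior over locations
posterior = [w / sum(weights) for w in weights]
best = posterior.index(max(posterior))         # most probable source location
```

    In practice the posterior is explored jointly over location and release strength (e.g. by MCMC); this sketch fixes the strength and scans locations only.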

  2. The non-radiating component of the field generated by a finite monochromatic scalar source distribution

    NASA Astrophysics Data System (ADS)

    Hoenders, Bernhard J.; Ferwerda, Hedzer A.

    1998-09-01

    We separate the field generated by a spherically symmetric bounded scalar monochromatic source into a radiative and non-radiative part. The non-radiative part is obtained by projecting the total field on the space spanned by the non-radiating inhomogeneous modes, i.e. the modes which satisfy the inhomogeneous wave equation. Using residue techniques, introduced by Cauchy, we obtain an explicit analytical expression for the non-radiating component. We also identify the part of the source distribution which corresponds to this non-radiating part. The analysis is based on the scalar wave equation.

  3. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model for the simulation of measurement-device independent quantum key distribution (MDI-QKD) with phase-randomized general sources. It can be used to predict experimental observations of MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and the final key rates. Our model is applicable to MDI-QKD with an arbitrary probabilistic mixture of different photon states or using any coding scheme. It is therefore useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000

  4. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    NASA Astrophysics Data System (ADS)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

    The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems require appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology problem is treated as a capacitated vehicle routing problem (CVRP) combined with local search metaheuristics. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
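    The capacitated-routing step can be illustrated with a toy nearest-neighbour construction. This is not DINGO's actual heuristic; the positions, demands, and feeder capacity are invented, and each individual demand is assumed to fit on a single feeder:

```python
import math

depot = (0.0, 0.0)                 # the HV/MV substation
loads = {                          # load area -> (position, demand in MVA)
    "a": ((1.0, 0.0), 2.0),
    "b": ((2.0, 0.5), 3.0),
    "c": ((0.0, 2.0), 4.0),
    "d": ((0.5, 3.0), 2.0),
}
CAPACITY = 6.0                     # maximum feeder loading

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_feeders(loads, capacity):
    """Nearest-neighbour construction: start a new feeder whenever the
    closest remaining load would exceed the feeder capacity."""
    todo = dict(loads)
    feeders = []
    while todo:
        route, used, pos = [], 0.0, depot
        while todo:
            name = min(todo, key=lambda n: dist(pos, todo[n][0]))
            p, demand = todo[name]
            if used + demand > capacity:
                break              # capacity exhausted: close this feeder
            route.append(name)
            used += demand
            pos = p
            del todo[name]
        feeders.append(route)
    return feeders

routes = greedy_feeders(loads, CAPACITY)
```

    A production CVRP solver would follow such a constructive start with local-search moves (relocate, swap, 2-opt) to shorten total cable length, which is the role the metaheuristics play in the paper.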

  5. Passive decoy-state quantum key distribution with practical light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curty, Marcos; Ma, Xiongfeng; Qi, Bing

    2010-02-15

    Decoy states have been proven to be a very useful method for significantly enhancing the performance of quantum key distribution systems with practical light sources. Although active modulation of the intensity of the laser pulses is an effective way of preparing decoy states in principle, in practice passive preparation might be desirable in some scenarios. Typical passive schemes involve parametric down-conversion. More recently, it has been shown that phase-randomized weak coherent pulses (WCP) can also be used for the same purpose [M. Curty et al., Opt. Lett. 34, 3238 (2009)]. This proposal requires only linear optics together with a simple threshold photon detector, which shows the practical feasibility of the method. Most importantly, the resulting secret key rate is comparable to the one delivered by an active decoy-state setup with an infinite number of decoy settings. In this article we extend these results, now showing specifically the analysis for other practical scenarios with different light sources and photodetectors. In particular, we consider sources emitting thermal states, phase-randomized WCP, and strong coherent light in combination with several types of photodetectors, like, for instance, threshold photon detectors, photon number resolving detectors, and classical photodetectors. Our analysis also includes the effect that detection inefficiencies and noise in the form of dark counts shown by current threshold detectors might have on the final secret key rate. Moreover, we provide estimations on the effects that statistical fluctuations due to a finite data size can have in practical implementations.

  6. Sources and distribution of NO(x) in the upper troposphere at northern midlatitudes

    NASA Technical Reports Server (NTRS)

    Rohrer, Franz; Ehhalt, Dieter H.; Wahner, Andreas

    1994-01-01

    A simple quasi 2-D model is used to study the zonal distribution of NO(x). The model includes vertical transport in the form of eddy diffusion and deep convection, zonal transport by a vertically uniform wind, and a simplified chemistry of NO, NO2 and HNO3. The NO(x) sources considered are surface emissions (mostly from the combustion of fossil fuel), lightning, aircraft emissions, and downward transport from the stratosphere. The model is applied to the latitude band of 40 deg N to 50 deg N during the month of June, and the contributions to the zonal NO(x) distribution from the individual sources and transport processes are investigated. The model-predicted NO(x) concentration in the upper troposphere is dominated by air lofted from the polluted planetary boundary layer over the large industrial areas of Eastern North America and Europe. Aircraft emissions are also important and contribute on average 30 percent. Stratospheric input is minor (about 10 percent), even less than that from lightning. The model provides a clear indication of intercontinental transport of NO(x) and HNO3 in the upper troposphere. Comparison of the modelled NO profiles over the Western Atlantic with those measured during STRATOZ 3 in 1984 shows good agreement at all altitudes.
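    The vertical-transport ingredient of such a model can be sketched as an explicit finite-difference eddy-diffusion column. The grid, diffusivity, source strength, and boundary treatment below are invented for illustration and are far cruder than the model in the paper:

```python
# Explicit time stepping of dC/dt = d/dz(K dC/dz) + S: a surface source in
# the lowest layer, zero-flux ground, and a zero-concentration top boundary
# acting as a crude stratospheric sink.  Stable since K*DT/DZ^2 = 0.01 << 0.5.
NZ, DZ, DT = 20, 500.0, 50.0    # 10 km column in 500 m layers, 50 s steps
K = 50.0                        # eddy diffusivity [m^2/s]
S0 = 1.0e-3                     # surface-layer source [ppb/s]
c = [0.0] * NZ                  # concentration profile, bottom to top

for _ in range(20000):
    # flux[i]: diffusive flux through the interface above layer i
    flux = [K * (c[i + 1] - c[i]) / DZ for i in range(NZ - 1)]
    new = c[:]
    new[0] += DT * (flux[0] / DZ + S0)          # ground layer: source, no flux below
    for i in range(1, NZ - 1):
        new[i] += DT * (flux[i] - flux[i - 1]) / DZ
    new[NZ - 1] = 0.0                           # fixed top boundary (sink)
    c = new
```

    The profile decreases monotonically with height toward the sink, mirroring how surface emissions dominate lower layers while upper-tropospheric concentrations depend on vertical exchange.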

  7. Using cross correlations to calibrate lensing source redshift distributions: Improving cosmological constraints from upcoming weak lensing surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Putter, Roland; Doré, Olivier; Das, Sudeep

    2014-01-10

    Cross correlations between the galaxy number density in a lensing source sample and that in an overlapping spectroscopic sample can in principle be used to calibrate the lensing source redshift distribution. In this paper, we study in detail to what extent this cross-correlation method can mitigate the loss of cosmological information in upcoming weak lensing surveys (combined with a cosmic microwave background prior) due to lack of knowledge of the source distribution. We consider a scenario where photometric redshifts are available and find that, unless the photometric redshift distribution p(z_ph|z) is calibrated very accurately a priori (bias and scatter known to ∼0.002 for, e.g., EUCLID), the additional constraint on p(z_ph|z) from the cross-correlation technique to a large extent restores the cosmological information originally lost due to the uncertainty in dn/dz(z). Considering only the gain in photo-z accuracy and not the additional cosmological information, enhancements of the dark energy figure of merit of up to a factor of four (40) can be achieved for a SuMIRe-like (EUCLID-like) combination of lensing and redshift surveys (SuMIRe stands for Subaru Measurement of Images and Redshifts). However, the success of the method is strongly sensitive to our knowledge of the galaxy bias evolution in the source sample, and we find that a percent-level bias prior is needed to optimize the gains from the cross-correlation method (i.e., to approach the cosmology constraints attainable if the bias were known exactly).

  8. Binary Source Microlensing Event OGLE-2016-BLG-0733: Interpretation of a Long-Term Asymmetric Perturbation

    NASA Technical Reports Server (NTRS)

    Jung, Y. K.; Udalski, A.; Yee, J. C.; Sumi, T.; Gould, A.; Han, C.; Albrow, M. D.; Lee, C.-U.; Bennett, D. P.; Suzuki, D.

    2017-01-01

    In the process of analyzing an observed light curve, one often confronts various scenarios that can mimic planetary signals, causing difficulties in the accurate interpretation of the lens system. In this paper, we present the analysis of the microlensing event OGLE-2016-BLG-0733. The light curve of the event shows a long-term asymmetric perturbation that would appear to be due to a planet. From detailed modeling of the lensing light curve, however, we find that the perturbation originates from the binarity of the source rather than of the lens. This result demonstrates that binary sources with roughly equal-luminosity components can mimic long-term perturbations induced by planets with projected separations near the Einstein ring. The result also demonstrates the importance of considering various interpretations of planet-like perturbations, and of high-cadence observations for ensuring the unambiguous detection of planets.

  9. The MIT/OSO 7 catalog of X-ray sources - Intensities, spectra, and long-term variability

    NASA Technical Reports Server (NTRS)

    Markert, T. H.; Laird, F. N.; Clark, G. W.; Hearn, D. R.; Sprott, G. F.; Li, F. K.; Bradt, H. V.; Lewin, W. H. G.; Schnopper, H. W.; Winkler, P. F.

    1979-01-01

    This paper is a summary of the observations of the cosmic X-ray sky performed by the MIT 1-40-keV X-ray detectors on OSO 7 between October 1971 and May 1973. Specifically, mean intensities or upper limits of all third Uhuru or OSO 7 cataloged sources (185 sources) in the 3-10-keV range are computed. For those sources for which a statistically significant (greater than 20) intensity was found in the 3-10-keV band (138 sources), further intensity determinations were made in the 1-15-keV, 1-6-keV, and 15-40-keV energy bands. Graphs and other simple techniques are provided to aid the user in converting the observed counting rates to convenient units and in determining spectral parameters. Long-term light curves (counting rates in one or more energy bands as a function of time) are plotted for 86 of the brighter sources.

  10. Dust temperature distributions in star-forming condensations

    NASA Technical Reports Server (NTRS)

    Xie, Taoling; Goldsmith, Paul F.; Snell, Ronald L.; Zhou, Weimin

    1993-01-01

    The FIR spectra of the central IR condensations in the dense cores of molecular clouds AFGL 2591, B335, L1551, Mon R2, and Sgr B2 are reanalyzed here in terms of the distribution of dust mass as a function of temperature. FIR spectra of these objects can be characterized reasonably well by a given functional form. The general shapes of the dust temperature distributions of these objects are similar and closely resemble the theoretical computations of de Muizon and Rouan (1985) for a sample of 'hot centered' clouds with active star formation. Specifically, the model yields a 'cutoff' temperature below which essentially no dust is needed to interpret the dust emission spectra, and most of the dust mass is distributed in a broad temperature range of a few tens of degrees above the cutoff temperature. Mass, luminosity, average temperature, and column density are obtained, and it is found that the physical quantities differ considerably from source to source in a meaningful way.

  11. Toward a Mechanistic Source Term in Advanced Reactors: Characterization of Radionuclide Transport and Retention in a Sodium Cooled Fast Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia J.; Bucknor, Matthew; Grabaskas, David

    A vital component of the U.S. reactor licensing process is an integrated safety analysis in which a source term representing the release of radionuclides during normal operation and accident sequences is analyzed. Historically, source term analyses have utilized bounding, deterministic assumptions regarding radionuclide release. However, advancements in technical capabilities and the knowledge state have enabled the development of more realistic and best-estimate retention and release models such that a mechanistic source term assessment can be expected to be a required component of future licensing of advanced reactors. Recently, as part of a Regulatory Technology Development Plan effort for sodium cooled fast reactors (SFRs), Argonne National Laboratory has investigated the current state of knowledge of potential source terms in an SFR via an extensive review of previous domestic experiments, accidents, and operation. As part of this work, the significant sources and transport processes of radionuclides in an SFR have been identified and characterized. This effort examines all stages of release and source term evolution, beginning with release from the fuel pin and ending with retention in containment. Radionuclide sources considered in this effort include releases originating both in-vessel (e.g. in-core fuel, primary sodium, cover gas cleanup system, etc.) and ex-vessel (e.g. spent fuel storage, handling, and movement). Releases resulting from a primary sodium fire are also considered as a potential source. For each release group, dominant transport phenomena are identified and qualitatively discussed. The key product of this effort was the development of concise, inclusive diagrams that illustrate the release and retention mechanisms at a high level, where unique schematics have been developed for in-vessel, ex-vessel and sodium fire releases. This review effort has also found that despite the substantial range of phenomena affecting radionuclide

  12. Heterogeneous mixture distributions for multi-source extreme rainfall

    NASA Astrophysics Data System (ADS)

    Ouarda, T.; Shin, J.; Lee, T. S.

    2013-12-01

    Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict the mixture to components of one identical type; it might be beneficial to characterize the statistical behavior of hydro-meteorological variables with heterogeneous mixture (HTM) distributions such as Normal-Gamma. The present work assesses the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. To estimate the parameters of HTM distributions, a meta-heuristic (genetic algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) provides the best fit and shows significant improvement in the estimation of quantiles corresponding to the 20-year return period. Extreme rainfall in the coastal region of South Korea presents strong heterogeneous mixture distributional characteristics, indicating that HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
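    The likelihood/AIC comparison can be sketched as follows. The data and parameter values below are invented for illustration; the paper tunes the parameters with a genetic algorithm rather than fixing them by hand:

```python
import math

# Compare a single EV1 (Gumbel) fit against a heterogeneous Gamma-EV1
# mixture via log-likelihood and AIC (hypothetical annual maxima, in mm).
data = [42.0, 55.0, 61.0, 70.0, 88.0, 95.0, 120.0, 160.0]

def gumbel_pdf(x, mu, beta):
    z = (x - mu) / beta
    return math.exp(-z - math.exp(-z)) / beta

def gamma_pdf(x, k, theta):
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def loglik(pdf, data):
    return sum(math.log(pdf(x)) for x in data)

def aic(ll, n_params):
    return 2 * n_params - 2 * ll   # lower AIC = better fit after penalty

ll_ev1 = loglik(lambda x: gumbel_pdf(x, 70.0, 25.0), data)
ll_mix = loglik(lambda x: 0.5 * gamma_pdf(x, 9.0, 7.0)
                        + 0.5 * gumbel_pdf(x, 100.0, 20.0), data)
aic_ev1, aic_mix = aic(ll_ev1, 2), aic(ll_mix, 5)   # mixture pays for 5 params
```

    A genetic algorithm would search the five mixture parameters (weight plus two per component) to maximize `loglik`, and AIC then penalizes the extra parameters when comparing against the two-parameter EV1 fit.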

  13. Spatial distribution and source apportionment of water pollution in different administrative zones of Wen-Rui-Tang (WRT) river watershed, China.

    PubMed

    Yang, Liping; Mei, Kun; Liu, Xingmei; Wu, Laosheng; Zhang, Minghua; Xu, Jianming; Wang, Fan

    2013-08-01

    Water quality degradation in river systems has caused great concern all over the world. Identifying the spatial distribution and sources of water pollutants is the very first step toward efficient water quality management. A set of water samples collected bimonthly at 12 monitoring sites in 2009 and 2010 were analyzed to determine the spatial distribution of critical parameters and to apportion the sources of pollutants in the Wen-Rui-Tang (WRT) river watershed, near the East China Sea. The 12 monitoring sites were divided into urban, suburban, and rural administrative zones, considering differences in land use and population density. Multivariate statistical methods [one-way analysis of variance, principal component analysis (PCA), and absolute principal component score-multiple linear regression (APCS-MLR)] were used to investigate the spatial distribution of water quality and to apportion the pollution sources. Results showed that most water quality parameters had no significant difference between the urban and suburban zones, whereas these two zones showed worse water quality than the rural zone. Based on the PCA and APCS-MLR analysis, urban domestic sewage and commercial/service pollution, suburban domestic sewage along with fluorine point-source pollution, and agricultural nonpoint-source pollution with rural domestic sewage were identified as the main pollution sources in the urban, suburban, and rural zones, respectively. Understanding the water pollution characteristics of different administrative zones can provide insights for effective water management policy-making, especially in areas spanning multiple administrative zones.
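    The APCS-MLR step amounts to an ordinary least-squares regression of total concentration on the absolute principal component scores, with each regression coefficient interpreted as that source's mean contribution. The sketch below uses invented scores and concentrations and solves the normal equations directly:

```python
def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# rows: samples; columns: [1 (intercept), APCS for source 1, APCS for source 2]
X = [[1, 0.2, 1.1], [1, 1.0, 0.3], [1, 0.8, 0.8], [1, 0.1, 0.4], [1, 0.6, 1.5]]
y = [6.0, 5.2, 6.6, 2.9, 8.8]   # total concentration per sample (invented)

# Normal equations (X^T X) beta = X^T y; beta[1], beta[2] are the per-unit
# contributions of the two sources, beta[0] the unexplained baseline.
xtx = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
beta = solve3(xtx, xty)
```

    With these synthetic data the fit recovers the generating coefficients exactly (baseline 1, source contributions 3 and 4); real data would of course leave residuals, and the APCS themselves come from a preceding PCA.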

  14. Dark Energy Survey Year 1 Results: Cross-Correlation Redshifts in the DES -- Calibration of the Weak Lensing Source Redshift Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, C.; et al.

    We present the calibration of the Dark Energy Survey Year 1 (DES Y1) weak lensing source galaxy redshift distributions from clustering measurements. By cross-correlating the positions of source galaxies with luminous red galaxies selected by the redMaGiC algorithm, we measure the redshift distributions of the source galaxies as placed into different tomographic bins. These measurements constrain shifts in the source redshift distributions to an accuracy of ∼0.02 and can be computed even when the clustering measurements do not span the full redshift range. The highest-redshift source bin is not constrained by the clustering measurements because of the minimal redshift overlap with the redMaGiC galaxies. We compare our constraints with those obtained from COSMOS 30-band photometry and find that our two very different methods produce consistent constraints.

  15. The occurrence and distribution of a group of organic micropollutants in Mexico City's water sources.

    PubMed

    Félix-Cañedo, Thania E; Durán-Álvarez, Juan C; Jiménez-Cisneros, Blanca

    2013-06-01

    The occurrence and distribution of a group of 17 organic micropollutants in surface and groundwater sources of Mexico City was determined. Water samples were taken from 7 wells, 4 dams and 15 tanks where surface water and groundwater are mixed and stored before distribution. Results evidenced the occurrence of seven of the target compounds in groundwater: salicylic acid, diclofenac, di-2-ethylhexylphthalate (DEHP), butylbenzylphthalate (BBP), triclosan, bisphenol A (BPA) and 4-nonylphenol (4-NP). In surface water, 11 target pollutants were detected: the same compounds found in groundwater, as well as naproxen, ibuprofen, ketoprofen and gemfibrozil. In groundwater, concentration ranges of salicylic acid, 4-NP and DEHP, the most frequently found compounds, were 1-464, 1-47 and 19-232 ng/L, respectively; in surface water, these ranges were 29-309, 89-655 and 75-2,282 ng/L, respectively. Eleven target compounds were detected in mixed water. Concentrations in mixed water were higher than those determined in groundwater but lower than those detected in surface water. Unlike in ground and surface water, the pesticide 2,4-D was found in mixed water, indicating that some pollutants can reach areas where they are not originally present in the local water sources. Concentrations of the organic micropollutants found in this study were similar to or lower than those reported in water sources of developed countries. This study provides information that enriches the state of the art on the occurrence of organic micropollutants in water sources worldwide, notably in megacities of developing countries. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    NASA Astrophysics Data System (ADS)

    Karamehmedović, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-06-01

    We consider the multi-frequency inverse source problem for the scalar Helmholtz equation in the plane. The goal is to reconstruct the source term in the equation from measurements of the solution on a surface outside the support of the source. We study the problem in a certain finite dimensional setting: from measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier–Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method is implemented numerically and our theoretical findings are supported by numerical experiments.
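The stability analysis above rests on a singular value decomposition of the discretized source-to-measurement operator. As a generic, hedged illustration (a random stand-in matrix with decaying singular values, not the authors' Helmholtz forward map), a truncated-SVD reconstruction from noisy measurements looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discretized forward operator: maps 8 source basis
# coefficients to 40 boundary measurements. The decaying diagonal mimics
# the smoothing character of a source-to-measurement map.
A = rng.normal(size=(40, 8)) @ np.diag(1.0 / (1.0 + np.arange(8)) ** 2)
x_true = rng.normal(size=8)
b = A @ x_true + 1e-6 * rng.normal(size=40)   # measurements with small noise

# Truncated-SVD reconstruction: discard singular values below a relative
# threshold; with larger noise, the threshold would drop the unstable
# components and stabilize the inversion.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-8 * s[0]
x_rec = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```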

  17. Short-Term State Forecasting-Based Optimal Voltage Regulation in Distribution Systems: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui; Jiang, Huaiguang; Zhang, Yingchen

    2017-05-17

    A novel short-term state forecasting-based optimal power flow (OPF) approach for distribution system voltage regulation is proposed in this paper. An extreme learning machine (ELM) based state forecaster is developed to accurately predict system states (voltage magnitudes and angles) in the near future. Based on the forecast system states, a dynamically weighted three-phase AC OPF problem is formulated to minimize the voltage violations with higher penalization on buses which are forecast to have higher voltage violations in the near future. By solving the proposed OPF problem, the controllable resources in the system are optimally coordinated to alleviate the potential severe voltage violations and improve the overall voltage profile. The proposed approach has been tested in a 12-bus distribution system and simulation results are presented to demonstrate the performance of the proposed approach.

  18. A Comprehensive Probabilistic Tsunami Hazard Assessment: Multiple Sources and Short-Term Interactions

    NASA Astrophysics Data System (ADS)

    Anita, G.; Selva, J.; Laura, S.

    2011-12-01

    We develop a comprehensive and total probabilistic tsunami hazard assessment (TotPTHA), in which many different possible source types concur to define the total tsunami hazard at given target sites. In a multi-hazard and multi-risk perspective, such an innovative approach makes it possible, in principle, to consider all possible tsunamigenic sources, from seismic events to slides, asteroids, volcanic eruptions, etc. In this respect, we also formally introduce and discuss the treatment of interaction/cascade effects in the TotPTHA analysis. We demonstrate how external triggering events may induce significant temporary variations in the tsunami hazard. Because of this, such effects should always be considered, at least in short-term applications, to obtain unbiased analyses. Finally, we prove the feasibility of the TotPTHA and of the treatment of interaction/cascade effects by applying this methodology to an ideal region with realistic characteristics (Neverland).

  19. Assessment and application of clustering techniques to atmospheric particle number size distribution for the purpose of source apportionment

    NASA Astrophysics Data System (ADS)

    Salimi, F.; Ristovski, Z.; Mazaheri, M.; Laiman, R.; Crilley, L. R.; He, C.; Clifford, S.; Morawska, L.

    2014-06-01

    Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have been recently employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of four clustering techniques were compared using the Dunn index and silhouette width validation values and the K-means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were found within the data using the K-means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source and the GAM was suitable to parameterise the PNSD data. These two techniques can help

  20. Assessment and application of clustering techniques to atmospheric particle number size distribution for the purpose of source apportionment

    NASA Astrophysics Data System (ADS)

    Salimi, F.; Ristovski, Z.; Mazaheri, M.; Laiman, R.; Crilley, L. R.; He, C.; Clifford, S.; Morawska, L.

    2014-11-01

    Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods that have been recently employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of four clustering techniques were compared using the Dunn index and Silhouette width validation values and the K means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were found within the data using the K means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source and the GAM was suitable to parameterise the PNSD data. These two techniques can help
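The cluster-count selection described in the two records above (run a partitioning method for several k and keep the k with the best silhouette width) can be sketched with plain numpy. The data are synthetic stand-ins for size spectra, and the K-means and silhouette implementations here are minimal textbook versions, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "size spectra": 3 groups of 50 samples, 16 size bins each
# (a stand-in for measured PNSD spectra).
centers = rng.uniform(0, 1, size=(3, 16))
X = np.vstack([c + 0.05 * rng.normal(size=(50, 16)) for c in centers])

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm; empty clusters keep their old centroid."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(1)
        centroids = np.array([
            X[labels == j].mean(0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels

def mean_silhouette(X, labels):
    """Mean silhouette width; singleton clusters get s_i = 0 by convention."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    n, s = len(X), []
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        others = [D[i, labels == c].mean() for c in set(labels) if c != labels[i]]
        if not same.any() or not others:
            s.append(0.0)
            continue
        a, b = D[i, same].mean(), min(others)
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Pick the cluster count with the highest mean silhouette width.
scores = {k: mean_silhouette(X, kmeans(X, k)) for k in (2, 3, 4, 5)}
best_k = max(scores, key=scores.get)
print("silhouette scores:", scores, "best k:", best_k)
```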

  1. Round-robin differential-phase-shift quantum key distribution with heralded pair-coherent sources

    NASA Astrophysics Data System (ADS)

    Wang, Le; Zhao, Shengmei

    2017-04-01

    The round-robin differential-phase-shift (RRDPS) quantum key distribution (QKD) scheme provides an effective way to overcome signal disturbance from the transmission process. However, most RRDPS-QKD schemes use weak coherent pulses (WCPs) as the replacement of the perfect single-photon source. Considering that the heralded pair-coherent source (HPCS) can efficiently remove the shortcomings of WCPs, we propose a RRDPS-QKD scheme with HPCS in this paper. Both the infinite-intensity decoy-state method and a practical three-intensity decoy-state method are adopted to discuss the tight bound on the key rate of the proposed scheme. The results show that HPCS is a better candidate for the replacement of the perfect single-photon source, and both the key rate and the transmission distance are greatly increased in comparison with the results with WCPs when the length of the pulse trains is small. Simultaneously, the performance of the proposed scheme using three-intensity decoy states is close to that using infinite-intensity decoy states when the length of the pulse trains is small.

  2. On the Vertical Distribution of Local and Remote Sources of Water for Precipitation

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.

    2001-01-01

    The vertical distribution of local and remote sources of water for precipitation and total column water over the United States are evaluated in a general circulation model simulation. The Goddard Earth Observing System (GEOS) general circulation model (GCM) includes passive constituent tracers to determine the geographical sources of the water in the column. Results show that the local percentage of precipitable water and local percentage of precipitation can be very different. The transport of water vapor from remote oceanic sources at mid and upper levels is important to the total water in the column over the central United States, while the access of locally evaporated water in convective precipitation processes is important to the local precipitation ratio. This result resembles the conceptual formulation of the convective parameterization. However, the formulations of simple models of precipitation recycling include the assumption that the ratio of the local water in the column is equal to the ratio of the local precipitation. The present results demonstrate the uncertainty in that assumption, as locally evaporated water is more concentrated near the surface.

  3. Source term evaluation model for high-level radioactive waste repository with decay chain build-up.

    PubMed

    Chopra, Manish; Sunny, Faby; Oza, R B

    2016-09-18

    A source term model based on a two-component leach flux concept is developed for a high-level radioactive waste repository. The long-lived radionuclides associated with high-level waste may give rise to a build-up of activity because of radioactive decay chains. The ingrowth of progeny is incorporated in the model using Bateman decay chain build-up equations. The model is applied to different radionuclides present in the high-level radioactive waste, which form part of decay chains (4n to 4n + 3 series), and the activity of the parent and daughter radionuclides leaching out of the waste matrix is estimated. Two cases are considered: one where only the parent is initially present in the waste and another where daughters are also initially present in the waste matrix. Incorporating in situ production of daughter radionuclides in the source term is important for realistic estimates. It is shown that the inclusion of decay chain build-up is essential to avoid underestimating the radiological impact of the repository. The model can be a useful tool for evaluating the source term of the radionuclide transport models used for the radiological impact assessment of high-level radioactive waste repositories.
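The Bateman build-up term mentioned above has a classic closed form for a chain started from the parent alone (assuming distinct decay constants). The sketch below implements it; the two-member chain and half-lives are illustrative, not the repository inventory.

```python
import math

def bateman(n0, lambdas, t):
    """Atoms of each chain member at time t, starting from n0 atoms of the
    parent only (classic Bateman solution; decay constants must be distinct)."""
    out = []
    for n in range(1, len(lambdas) + 1):
        lam = lambdas[:n]
        coeff = math.prod(lam[:-1])            # product of lambda_1..lambda_{n-1}
        total = 0.0
        for i in range(n):
            denom = math.prod(lam[j] - lam[i] for j in range(n) if j != i)
            total += math.exp(-lam[i] * t) / denom
        out.append(n0 * coeff * total)
    return out

# Illustrative two-member chain: parent half-life 10 y, daughter 1 y.
lam = [math.log(2) / 10.0, math.log(2) / 1.0]
print(bateman(1000.0, lam, t=5.0))
```

For the second case in the abstract (daughters also initially present), the same formula is applied to each initially present nuclide treated as the head of its own sub-chain, and the results are summed.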

  4. Developing an Open Source, Reusable Platform for Distributed Collaborative Information Management in the Early Detection Research Network

    NASA Technical Reports Server (NTRS)

    Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen

    2012-01-01

    For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.

  5. Long-term monitoring of molecular markers can distinguish different seasonal patterns of fecal indicating bacteria sources.

    PubMed

    Riedel, Timothy E; Thulsiraj, Vanessa; Zimmer-Faust, Amity G; Dagit, Rosi; Krug, Jenna; Hanley, Kaitlyn T; Adamek, Krista; Ebentier, Darcy L; Torres, Robert; Cobian, Uriel; Peterson, Sophie; Jay, Jennifer A

    2015-03-15

    Elevated levels of fecal indicator bacteria (FIB) have been observed at Topanga Beach, CA, USA. To identify the FIB sources, a microbial source tracking study using a dog-, a gull- and two human-associated molecular markers was conducted at 10 sites over 21 months. Historical data suggest that episodic discharge from the lagoon at the mouth of Topanga Creek is the main source of bacteria to the beach. A decline in creek FIB/markers downstream from upper watershed development and a sharp increase in FIB/markers at the lagoon sites suggest sources are local to the lagoon. At the lagoon and beach, human markers are detected sporadically, dog marker peaks in abundance mid-winter, and gull marker is chronically elevated. Varied seasonal patterns of FIB and source markers were identified showing the importance of applying a suite of markers over long-term spatial and temporal sampling to identify a complex combination of sources of contamination. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    PubMed

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI) are desirable for source imaging of IEDs because this might influence surgical decision. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  7. Distribution and geological sources of selenium in environmental materials in Taoyuan County, Hunan Province, China.

    PubMed

    Ni, Runxiang; Luo, Kunli; Tian, Xinglei; Yan, Songgui; Zhong, Jitai; Liu, Maoqiu

    2016-06-01

    The selenium (Se) distribution and geological sources in Taoyuan County, China, were determined by using hydride generation atomic fluorescence spectrometry on rock, soil, and food crop samples collected from various geological regions within the county. The results show Se contents of 0.02-223.85, 0.18-7.05, and 0.006-5.374 mg/kg in the rock, soil, and food crops of Taoyuan County, respectively. The region with the highest Se content is western Taoyuan County, amid the Lower Cambrian and Ediacaran black rock series outcrop, which is distributed in bands running west to east. A relatively high-Se environment is found in the central and southern areas of Taoyuan County, where Quaternary limnetic sedimentary facies and Neoproterozoic metamorphic volcanic rocks outcrop, respectively. A relatively low-Se environment includes the central and northern areas of Taoyuan County, where Middle and Upper Cambrian and Ordovician carbonate rocks and Cretaceous sandstones and conglomerates outcrop. These results indicate that the Se distribution in Taoyuan County varies markedly and is controlled by the Se content of the bedrock. The Se-enriched Lower Cambrian and Ediacaran black rock series is the primary source of the seleniferous environment observed in Taoyuan County. Potential seleniferous environments are likely to be found near outcrops of the Lower Cambrian and Ediacaran black rock series in southern China.

  8. Femtosecond timing distribution and control for next generation accelerators and light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li -Jin

    Femtosecond Timing Distribution at LCLS: Free-electron lasers (FELs) have the capability of producing high photon flux from the IR to the hard x-ray wavelength range and of emitting femtosecond and eventually even attosecond pulses. This makes them an ideal tool for fundamental as well as applied research. Timing precision at the Linac Coherent Light Source (LCLS) between the x-ray FEL (XFEL) and ultrafast optical lasers is currently no better than 100 fs RMS. Ideally this precision should be much better, limited only by the x-ray pulse duration, which can be as short as a few femtoseconds. An increasing variety of science problems involving electron and nuclear dynamics in chemical and material systems will become accessible as the timing improves to a few femtoseconds. Advanced methods of electron beam conditioning or pulse injection could allow the FEL to achieve pulse durations of less than one femtosecond. The objective of the work described in this proposal is to set up an optical timing distribution system based on mode-locked erbium-doped fiber lasers at the LCLS facility to improve the timing precision in the facility and allow time stamping with 10 fs precision. The primary commercial applications for optical timing distribution systems are in the worldwide accelerator facilities and next-generation light sources community. It is reasonable to expect that at least three major XFELs will be built in the next decade. In addition there will be up to 10 smaller machines, such as FERMI in Italy and Maxlab in Sweden, plus the market for upgrading already existing facilities like Jefferson Lab. The total market is estimated to be on the order of 100 million US dollars. The company owns the exclusive rights to the IP covering the technology enabling sub-10 fs synchronization systems. Testing this technology, which has set records in a lab environment, at LCLS, hence in a real-world scenario, is an important cornerstone of bringing the

  9. Quantum key distribution with passive decoy state selection

    NASA Astrophysics Data System (ADS)

    Mauerer, Wolfgang; Silberhorn, Christine

    2007-05-01

    We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.

  10. Discrimination of particulate matter emission sources using stochastic methods

    NASA Astrophysics Data System (ADS)

    Szczurek, Andrzej; Maciejewska, Monika; Wyłomańska, Agnieszka; Sikora, Grzegorz; Balcerek, Michał; Teuerle, Marek

    2016-12-01

    Particulate matter (PM) is one of the criteria pollutants determined to be harmful to public health and the environment. For this reason the ability to recognize its emission sources is very important. A number of measurement methods can characterize PM in terms of concentration, particle size distribution, and chemical composition. All this information is useful for establishing a link between the dust found in the air, its emission sources, and its influence on humans as well as the environment. However, the methods are typically quite sophisticated and not applicable outside laboratories. In this work, we considered a PM emission source discrimination method based on continuous measurements of PM concentration with a relatively cheap instrument and stochastic analysis of the obtained data. The stochastic analysis focuses on the temporal variation of PM concentration and involves two steps: (1) recognition of the category of distribution for the data, i.e. stable or in the domain of attraction of a stable distribution, and (2) finding the best-matching distribution among the Gaussian, stable and normal-inverse Gaussian (NIG) distributions. We examined six PM emission sources, associated with material processing in an industrial environment, namely machining and welding of aluminum, forged carbon steel and plastic with various tools. As the obtained results show, PM emission sources may be distinguished based on the statistical distribution of PM concentration variations. The major factor responsible for the differences detectable with our method was the type of material processing and the tool applied. When different materials were processed with the same tool, distinguishing the emission sources was difficult. For successful discrimination it was crucial to consider size-segregated mass fraction concentrations. In our opinion the presented approach is very promising and deserves further study and development.
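The core idea above is that the distribution of concentration increments differs between sources. The paper's two-step procedure requires stable and NIG fitting; as a much cruder, hedged stand-in, the sketch below separates a Gaussian-increment series from a heavy-tailed one using only the excess kurtosis of the increments. Both series and the implied threshold are synthetic assumptions, not the paper's data or classification rule.

```python
import numpy as np

rng = np.random.default_rng(4)

def excess_kurtosis(x):
    """Sample excess kurtosis: near 0 for Gaussian data, large for heavy tails."""
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

# Two synthetic "PM concentration" series standing in for two emission
# sources: one with Gaussian increments, one with heavy-tailed
# (Student-t) increments.
source_a = np.cumsum(rng.normal(size=20000))
source_b = np.cumsum(rng.standard_t(df=3, size=20000))

for name, series in [("A", source_a), ("B", source_b)]:
    incr = np.diff(series)   # temporal variations of the concentration
    print(name, "excess kurtosis of increments:", round(excess_kurtosis(incr), 2))
```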

  11. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    NASA Astrophysics Data System (ADS)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to handle the size of the repository in the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromised solution among the non-dominated optimal solutions of multiobjective optimization problem. In order to see the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
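The Pareto repository described above stores the non-dominated solutions found so far. A minimal sketch of the dominance filter for minimization objectives follows; the objective values are made-up numbers, and the full MHBMO update and fuzzy decision-making steps are omitted.

```python
import numpy as np

def non_dominated(F):
    """Indices of Pareto-optimal rows of objective matrix F (all objectives
    minimized): a row is kept unless some other row is <= in every
    objective and strictly < in at least one."""
    keep = []
    for i, fi in enumerate(F):
        dominated = any(
            np.all(fj <= fi) and np.any(fj < fi)
            for j, fj in enumerate(F) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Toy objective values (power losses, voltage deviation) for five
# candidate feeder configurations -- illustrative numbers only.
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [2.5, 2.9]])
print("repository (non-dominated) indices:", non_dominated(F))
```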

  12. Species selection under long-term experimental warming and drought explained by climatic distributions.

    PubMed

    Liu, Daijun; Peñuelas, Josep; Ogaya, Romà; Estiarte, Marc; Tielbörger, Katja; Slowik, Fabian; Yang, Xiaohong; Bilton, Mark C

    2018-03-01

    Global warming and reduced precipitation may trigger large-scale species losses and vegetation shifts in ecosystems around the world. However, currently lacking are practical ways to quantify the sensitivity of species and community composition to these often-confounded climatic forces. Here we conducted long-term (16 yr) nocturnal-warming (+0.6°C) and reduced precipitation (-20% soil moisture) experiments in a Mediterranean shrubland. Climatic niche groups (CNGs) - species ranked or classified by similar temperature or precipitation distributions - informatively described community responses under experimental manipulations. Under warming, CNGs revealed that only those species distributed in cooler regions decreased. Correspondingly, under reduced precipitation, a U-shaped treatment effect observed in the total community was the result of an abrupt decrease in wet-distributed species, followed by a delayed increase in dry-distributed species. Notably, while partially correlated, CNG explanations of community response were stronger for their respective climate parameter, suggesting some species possess specific adaptations to either warming or drought that may lead to independent selection to the two climatic variables. Our findings indicate that when climatic distributions are combined with experiments, the resulting incorporation of local plant evolutionary strategies and their changing dynamics over time leads to predictable and informative shifts in community structure under independent climate change scenarios. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  13. Polycyclic aromatic hydrocarbons in soils from urban to rural areas in Nanjing: Concentration, source, spatial distribution, and potential human health risk.

    PubMed

    Wang, Chunhui; Wu, Shaohua; Zhou, Sheng Lu; Wang, Hui; Li, Baojie; Chen, Hao; Yu, Yanna; Shi, Yaxing

    2015-09-15

    Polycyclic aromatic hydrocarbons (PAHs) have become a major type of pollutant in urban areas, and their degree of pollution and spatial distribution characteristics differ between regions. We conducted a comprehensive study of the concentration, sources, spatial distribution, and health risk of 16 PAHs in urban to rural soils in Nanjing. The mean total concentrations of the 16 PAHs (∑16PAHs) were 3330 ng g(-1) for urban soils, 1680 ng g(-1) for suburban soils, and 1060 ng g(-1) for rural soils. Five sources in the urban, suburban, and rural areas of Nanjing were identified by positive matrix factorization. In descending order, the relative contributions to the total soil PAH burden in urban areas were coal combustion, vehicle emissions, biomass burning, coke tar, and oil; in suburban areas the main sources of soil PAHs were gasoline and diesel engines, whereas in rural areas the main sources were creosote and biomass burning. The spatial distribution of soil PAH concentrations shows that old urban districts and commercial centers were the most contaminated areas in Nanjing. The distribution pattern of heavier PAHs was in accordance with ∑16PAHs, whereas lighter PAHs showed some special characteristics. Health risk assessment based on toxic equivalency factors of benzo[a]pyrene indicated a low concentration of PAHs in most areas of Nanjing, but some sensitive sites deserve considerable attention. We conclude that urbanization has accelerated the accumulation of soil PAHs and increased the environmental risk for urban residents. Copyright © 2015. Published by Elsevier B.V.
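Positive matrix factorization, as used in receptor modelling, decomposes a samples-by-species matrix into nonnegative source contributions and profiles, with each data point weighted by its uncertainty. The sketch below is a simplified, unweighted stand-in: a multiplicative-update nonnegative matrix factorization (Lee-Seung) on synthetic data, sharing only the nonnegativity constraint with full PMF.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: 100 samples x 16 PAH-like species generated from 3
# nonnegative source profiles (illustrative; real PMF also weights each
# entry by its measurement uncertainty).
G_true = rng.gamma(2.0, 1.0, size=(100, 3))
F_true = rng.uniform(0.0, 1.0, size=(3, 16))
X = G_true @ F_true

# Multiplicative updates keep contributions G and profiles F nonnegative.
k = 3
G = rng.uniform(0.1, 1.0, size=(100, k))
F = rng.uniform(0.1, 1.0, size=(k, 16))
for _ in range(2000):
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)

rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print("relative reconstruction error:", round(float(rel_err), 4))
```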

  14. Distribution, Source and Fate of Dissolved Organic Matter in Shelf Seas

    NASA Astrophysics Data System (ADS)

    Carr, N.; Mahaffey, C.; Hopkins, J.; Sharples, J.; Williams, R. G.; Davis, C. E.

    2016-02-01

    Dissolved organic matter (DOM) is a complex array of molecules containing carbon (DOC), nitrogen (DON) and phosphorous (DOP), and represents the largest pool of organic matter in the marine environment. DOM in the sea originates from a variety of sources, including allochthonous inputs of terrestrial DOM from land via rivers, and autochthonous inputs through in-situ biotic processes that include phytoplankton exudation, grazing and cell lysis. Marine DOM is a substrate for bacterial growth and can act as a source of nutrients for autotrophs. However, a large component of DOM is biologically refractory. This pool is carbon-rich and nutrient-poor, and can transport and store its compositional elements over large areas and on long time scales. The role of DOM in the shelf seas is currently unclear, despite these regions acting as conduits between the land and open ocean, and also being highly productive ecosystems. Using samples collected across the Northwest European Shelf Sea, we studied the distribution, source, seasonality and potential fate of DOM using a combination of analytical tools, including analysis of amino acids, DOM absorbance spectra and excitation emission matrices, in conjunction with parallel factor analysis (PARAFAC). Strong cross-shelf and seasonal gradients in DOM source and lability were found. We observed a strong seasonally dependent significant correlation between salinity and terrestrial DOM in the bottom mixed layer, an enrichment of DOM at the shelf edge in winter and a three-fold increase in fresh marine DOM coinciding with the timing of a spring bloom. Together, our findings illustrate the dynamic nature of DOM in shelf seas over a seasonal cycle and highlight the potential for DOM to play a key role in the carbon cycle in these regions.

  15. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J. C.; Baillet, S.; Jerbi, K.

    2001-01-01

We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.

  16. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
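    The core numerical idea in this record, recovering a discrete distribution by evaluating its probability generating function at roots of unity and applying a discrete Fourier transform, can be sketched as follows. This is a minimal illustration, not Cowell's actual amplification model: a binomial PGF stands in for the amplicon-number generating function.

    ```python
    import numpy as np

    def pmf_from_pgf(pgf, M):
        """Recover p_0..p_{M-1} for a random variable supported on {0, ..., M-1}
        by evaluating its probability generating function G(s) = E[s^N] at the
        M-th roots of unity and inverting with a discrete Fourier transform."""
        roots = np.exp(2j * np.pi * np.arange(M) / M)   # M-th roots of unity
        g = np.array([pgf(r) for r in roots])           # G evaluated on the unit circle
        # G(omega^k) = sum_n p_n e^{+2*pi*i*k*n/M}, so numpy's forward FFT
        # (which uses the e^{-2*pi*i*k*n/M} kernel) divided by M inverts it.
        return np.fft.fft(g).real / M

    # Illustration: Binomial(4, 0.3) counts, with PGF G(s) = (0.7 + 0.3 s)^4
    probs = pmf_from_pgf(lambda s: (0.7 + 0.3 * s) ** 4, 5)
    ```

    The same transform applies to any PGF that can be evaluated numerically, which is what makes the approach efficient for compound collection-and-amplification models.
    
    
    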

  17. A post-implementation evaluation of ceramic water filters distributed to tsunami-affected communities in Sri Lanka.

    PubMed

    Casanova, Lisa M; Walters, Adam; Naghawatte, Ajith; Sobsey, Mark D

    2012-06-01

Sri Lanka was devastated by the 2004 Indian Ocean tsunami. During recovery, the Red Cross distributed approximately 12,000 free ceramic water filters. This cross-sectional study was an independent post-implementation assessment of 452 households that received filters, to determine the proportion still using filters, household characteristics associated with use, and quality of household drinking water. The proportion of continued users was high (76%). The most common household water sources were taps or shallow wells. The majority (82%) of users used filtered water for drinking only. Mean filter flow rate was 1.12 L/hr (0.80 L/hr for households with taps and 0.71 L/hr for those with wells). Water quality varied by source; households using tap water had source water of high microbial quality. Filters improved water quality, reducing Escherichia coli for households (largely well users) with high levels in their source water. Households were satisfied with filters and are potentially long-term users. To promote sustained use, recovery filter distribution efforts should try to identify households at greatest long-term risk, particularly those who have not moved to safer water sources during recovery. They should be joined with long-term commitment to building supply chains and local production capacity to ensure safe water access.

  18. Urban dust in the Guanzhong Basin of China, part I: A regional distribution of dust sources retrieved using satellite data.

    PubMed

    Long, Xin; Li, Nan; Tie, Xuexi; Cao, Junji; Zhao, Shuyu; Huang, Rujin; Zhao, Mudan; Li, Guohui; Feng, Tian

    2016-01-15

Urban dust pollution has become a serious environmental problem due to rapid urbanization in China. However, it is very difficult to construct an urban dust inventory, owing to its small horizontal scale and strong temporal/spatial variability. Using visual interpretation, maximum likelihood classification, extrapolation and spatial overlaying, we quantified the dust source distributions of urban constructions, barrens and croplands in the Guanzhong Basin from various satellite data, including VHR (0.5 m), Landsat-8 OLI (30 m) and MCD12Q1 (500 m) imagery. Croplands were the dominant dust sources, accounting for 40% (17,913 km²) of the study area in summer and 36% (17,913 km²) in winter, followed by barrens, accounting for 5% in summer and 10% in winter. Moreover, the total construction area was 126 km², of which 84% was active and 16% inactive. In addition, 59% of the constructions were aggregated in the only megacity of the study area, Xi'an. With an accuracy exceeding 88%, the proposed satellite-data-based method is feasible and valuable for quantifying distributions of dust sources. This study provides a new perspective for evaluating regional urban dust, which is seldom quantified and reported. In a companion paper (Part 2 of the study), the detailed distribution of the urban dust sources is applied in a dynamical/aerosol model (WRF-Dust) to assess the effect of dust sources on aerosol pollution. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy

    ERIC Educational Resources Information Center

    Hall, Matthew L.; Bavelier, Daphne

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory--perception, encoding, and recall--in this effect. The present study…

  20. A critical assessment of flux and source term closures in shallow water models with porosity for urban flood simulations

    NASA Astrophysics Data System (ADS)

    Guinot, Vincent

    2017-11-01

The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), Integral Porosity (IP) and Dual Integral Porosity (DIP) models. Nine different geometries are considered. 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined. This results in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions, (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal, (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them, (iv) a momentum balance confirms the existence of the transient momentum dissipation model presented in the DIP model, (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for, and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on the high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.

  1. Probing the Spatial Distribution of the Interstellar Dust Medium by High Angular Resolution X-ray Halos of Point Sources

    NASA Astrophysics Data System (ADS)

    Xiang, Jingen

X-rays are absorbed and scattered by dust grains when they travel through the interstellar medium. The scattering within small angles results in an X-ray "halo". The halo properties are significantly affected by the energy of radiation, the optical depth of the scattering, the grain size distributions and compositions, and the spatial distribution of dust along the line of sight (LOS). Therefore analyzing the X-ray halo properties is an important tool to study the size distribution and spatial distribution of interstellar grains, which plays a central role in the astrophysical study of the interstellar medium, such as the thermodynamics and chemistry of the gas and the dynamics of star formation. With excellent angular resolution, good energy resolution and broad energy band, the Chandra ACIS is so far the best instrument for studying the X-ray halos. But the direct images of bright sources obtained with ACIS usually suffer from severe pileup, which prevents us from obtaining the halos at small angles. We first improve the method proposed by Yao et al. to resolve the X-ray dust scattering halos of point sources from the zeroth order data in CC-mode or the first order data in TE-mode with Chandra HETG/ACIS. Using this method we re-analyze the Cygnus X-1 data observed with Chandra. We then study the X-ray dust scattering halos around 17 bright X-ray point sources using Chandra data. All sources were observed with the HETG/ACIS in CC-mode or TE-mode. Using the interstellar grain models WD01 and MRN to fit the halo profiles, we obtain the hydrogen column densities and the spatial distributions of the scattering dust grains along the lines of sight (LOS) to these sources. We find a good linear correlation not only between the scattering hydrogen column density from the WD01 model and that from the MRN model, but also between N_{H} derived from spectral fits and that derived from the grain models WD01 and MRN (except for GX 301-2 and Vela X-1): N

  2. The phonological-distributional coherence hypothesis: cross-linguistic evidence in language acquisition.

    PubMed

    Monaghan, Padraic; Christiansen, Morten H; Chater, Nick

    2007-12-01

    Several phonological and prosodic properties of words have been shown to relate to differences between grammatical categories. Distributional information about grammatical categories is also a rich source in the child's language environment. In this paper we hypothesise that such cues operate in tandem for developing the child's knowledge about grammatical categories. We term this the Phonological-Distributional Coherence Hypothesis (PDCH). We tested the PDCH by analysing phonological and distributional information in distinguishing open from closed class words and nouns from verbs in four languages: English, Dutch, French, and Japanese. We found an interaction between phonological and distributional cues for all four languages indicating that when distributional cues were less reliable, phonological cues were stronger. This provides converging evidence that language is structured such that language learning benefits from the integration of information about category from contextual and sound-based sources, and that the child's language environment is less impoverished than we might suspect.

  3. Sky distribution of artificial sources in the galactic belt of advanced cosmic life.

    PubMed

    Heidmann, J

    1994-12-01

    In line with the concept of the galactic belt of advanced life, we evaluate the sky distribution of detectable artificial sources, using a simple astrophysical model. The best region to search is the median band of the Milky Way in the Vulpecula-Cygnus region, together with a narrower one in Carina. Although this work was done in view of a proposal to send a SETI probe at a gravitational focus of the Sun, we recommend these sky regions particularly for the searches of the sky survey type.

  4. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively.
Vendor-supported software may

  5. Soil Aggregates and Organic Carbon Distribution in Red Soils after Long-term Fertilization with Different Fertilizer Treatments

    NASA Astrophysics Data System (ADS)

    Tang, J.; Wang, Y.

    2013-12-01

Red soils, typical Udic Ferrosols, are widespread throughout the subtropical and tropical regions of southern China and support the majority of grain production in this region. Red soil is naturally low in pH, cation exchange capacity and fertility, and is prone to compaction, resulting in low organic matter contents and poor soil aggregation. Application of chemical fertilizers and a combination of organic and chemical fertilizers are two basic approaches to improve soil structure and organic matter contents. We studied soil aggregation and the distribution of aggregate-associated organic carbon in red soils in a long-term fertilization experiment conducted during 1988-2009. The treatments were 1) NPK and NK in the chemical fertilizer plots, 2) CK (control), and 3) CK + Peanut Straw (PS), CK + Rice Straw (RS), CK + Fresh Radish (FR), and CK + Pig Manure (PM) in the organic-chemical fertilizer plots. Soil samples were fractionated into 6 aggregate size classes by the dry-wet sieving method according to the hierarchical model of aggregation. Organic carbon in the aggregate size classes was analyzed. The results showed that the distribution of mechanically stable aggregates in red soils after long-term fertilization decreased with size, from > 5 mm, 5-2 mm, 2-1 mm, 1-0.25 mm, to < 0.25 mm, but the distribution of water-stable aggregates did not follow this pattern. Compared with chemical fertilizer application alone, the addition of pig manure and green manure significantly improved the distribution of aggregates in the 5-2 mm, 2-1 mm and 1-0.25 mm classes. Organic carbon (OC) contents in red soils all increased after long-term fertilization. Compared with Treatment NK, soil OC in Treatment NPK increased by 45.4%. Compared with Treatment CK (low chemical fertilizer), organic fertilizer addition increased soil OC. The OC in the different particles of water-stable aggregates all significantly increased after the long-term

  6. Analysis of jet-airfoil interaction noise sources by using a microphone array technique

    NASA Astrophysics Data System (ADS)

    Fleury, Vincent; Davy, Renaud

    2016-03-01

The paper is concerned with the characterization of jet noise sources and jet-airfoil interaction sources by using microphone array data. The measurements were carried out in the anechoic open test section wind tunnel of Onera, Cepra19. The microphone array technique relies on the convected Lighthill and Ffowcs Williams-Hawkings acoustic analogy equation. The cross-spectrum of the source term of the analogy equation is sought. It is defined as the optimal solution to a minimal error equation using the measured microphone cross-spectra as reference. This inverse problem is ill-posed, however. A penalty term based on a localization operator is therefore added to improve the recovery of jet noise sources. The analysis of isolated jet noise data in subsonic regime shows the contribution of the conventional mixing noise source in the low frequency range, as expected, and of uniformly distributed, uncorrelated noise sources in the jet flow at higher frequencies. In underexpanded supersonic regime, a shock-associated noise source is clearly identified, too. An additional source is detected in the vicinity of the nozzle exit both in supersonic and subsonic regimes. In the presence of the airfoil, the distribution of the noise sources is deeply modified. In particular, a strong noise source is localized on the flap. For high Strouhal numbers, higher than about 2 (based on the jet mixing velocity and diameter), a significant contribution from the shear layer near the flap is observed, too. Indications of acoustic reflections on the airfoil are also discerned.
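    The inverse step described here, recovering the source cross-spectrum from measured microphone cross-spectra, can be sketched in a few lines. This is a schematic under assumed names: a generic steering matrix `A` models propagation, and a plain Tikhonov term stands in for the paper's localization-operator penalty that regularises the ill-posed inversion.

    ```python
    import numpy as np

    def source_cross_spectrum(C_meas, A, lam=1e-9):
        """Estimate the source cross-spectral matrix S from the measured
        microphone cross-spectral matrix C_meas, assuming the propagation
        model C_meas ~= A @ S @ A^H with steering matrix A (mics x sources).
        The small Tikhonov term lam stabilises the ill-posed inversion."""
        n_src = A.shape[1]
        # Regularised left-inverse of A, applied on both sides of C_meas
        B = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_src), A.conj().T)
        return B @ C_meas @ B.conj().T

    # Synthetic check: 8 microphones, 3 mutually uncorrelated sources
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
    S_true = np.diag([3.0, 1.0, 0.5]).astype(complex)
    S_est = source_cross_spectrum(A @ S_true @ A.conj().T, A)
    ```

    With noisy measured cross-spectra the choice of penalty matters much more than in this noise-free check, which is the point the abstract makes about the localization operator.
    
    
    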

  7. Sources and distribution of aromatic hydrocarbons in a tropical marine protected area estuary under influence of sugarcane cultivation.

    PubMed

    Arruda-Santos, Roxanny Helen de; Schettini, Carlos Augusto França; Yogui, Gilvan Takeshi; Maciel, Daniele Claudino; Zanardi-Lamardo, Eliete

    2018-05-15

The Goiana estuary is a well-preserved marine protected area (MPA) located on the northeastern coast of Brazil. Despite its current state, human activities in the watershed represent a potential threat to long-term local preservation. Dissolved/dispersed aromatic hydrocarbons and polycyclic aromatic hydrocarbons (PAHs) were investigated in water and sediments across the estuarine salt gradient. Concentrations of aromatic hydrocarbons were low in all samples. According to the results, aromatic hydrocarbons are associated with suspended particulate matter (SPM) carried to the estuary by river waters. An estuarine turbidity maximum (ETM) was identified in the upper estuary, indicating that both sediments and contaminants are trapped prior to occasional export to the adjacent sea. The PAH distribution in sediments was associated with organic matter and mud content. Diagnostic ratios indicated pyrolytic processes as the main local source of PAHs, probably associated with sugarcane burning and combustion engines. The low PAH concentrations probably do not cause adverse biological effects to the local biota, although their presence indicates anthropogenic contamination and pressure on the Goiana estuary MPA. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Distributed chemical computing using ChemStar: an open source java remote method invocation architecture applied to large scale molecular data from PubChem.

    PubMed

    Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander

    2008-04-01

We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually every computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit and the open source JOELib package. As an application, for this set of compounds, the agreement of log P and TPSA between the packages was compared. Outliers were found to be mostly non-druglike compounds, and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, which is also easily adaptable to user demands due to its "plug-in architecture". The complete source codes as well as calculated properties along with links to PubChem resources are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.

  9. Open Data, Open Specifications and Free and Open Source Software: A powerful mix to create distributed Web-based water information systems

    NASA Astrophysics Data System (ADS)

    Arias, Carolina; Brovelli, Maria Antonia; Moreno, Rafael

    2015-04-01

We are in an age when water resources are increasingly scarce and the impacts of human activities on them are ubiquitous. These problems don't respect administrative or political boundaries and they must be addressed integrating information from multiple sources at multiple spatial and temporal scales. Communication, coordination and data sharing are critical for addressing the water conservation and management issues of the 21st century. However, different countries, provinces, local authorities and agencies dealing with water resources have diverse organizational, socio-cultural, economic, environmental and information technology (IT) contexts that raise challenges to the creation of information systems capable of integrating and distributing information across their areas of responsibility in an efficient and timely manner. Tight and disparate financial resources, and dissimilar IT infrastructures (data, hardware, software and personnel expertise) further complicate the creation of these systems. There is a pressing need for distributed interoperable water information systems that are user friendly, easily accessible and capable of managing and sharing large volumes of spatial and non-spatial data. In a distributed system, data and processes are created and maintained in different locations each with competitive advantages to carry out specific activities. Open Data (data that can be freely distributed) is available in the water domain, and it should be further promoted across countries and organizations. Compliance with Open Specifications for data collection, storage and distribution is the first step toward the creation of systems that are capable of interacting and exchanging data in a seamless (interoperable) way. The features of Free and Open Source Software (FOSS) offer low access costs that facilitate scalability and long-term viability of information systems.
The World Wide Web (the Web) will be the platform of choice to deploy and access these systems

  10. Spatial Distribution of Iron Within the Normal Human Liver Using Dual-Source Dual-Energy CT Imaging.

    PubMed

    Abadia, Andres F; Grant, Katharine L; Carey, Kathleen E; Bolch, Wesley E; Morin, Richard L

    2017-11-01

    Explore the potential of dual-source dual-energy (DSDE) computed tomography (CT) to retrospectively analyze the uniformity of iron distribution and establish iron concentration ranges and distribution patterns found in healthy livers. Ten mixtures consisting of an iron nitrate solution and deionized water were prepared in test tubes and scanned using a DSDE 128-slice CT system. Iron images were derived from a 3-material decomposition algorithm (optimized for the quantification of iron). A conversion factor (mg Fe/mL per Hounsfield unit) was calculated from this phantom study as the quotient of known tube concentrations and their corresponding CT values. Retrospective analysis was performed of patients who had undergone DSDE imaging for renal stones. Thirty-seven patients with normal liver function were randomly selected (mean age, 52.5 years). The examinations were processed for iron concentration. Multiple regions of interest were analyzed, and iron concentration (mg Fe/mL) and distribution was reported. The mean conversion factor obtained from the phantom study was 0.15 mg Fe/mL per Hounsfield unit. Whole-liver mean iron concentrations yielded a range of 0.0 to 2.91 mg Fe/mL, with 94.6% (35/37) of the patients exhibiting mean concentrations below 1.0 mg Fe/mL. The most important finding was that iron concentration was not uniform and patients exhibited regionally high concentrations (36/37). These regions of higher concentration were observed to be dominant in the middle-to-upper part of the liver (75%), medially (72.2%), and anteriorly (83.3%). Dual-source dual-energy CT can be used to assess the uniformity of iron distribution in healthy subjects. Applying similar techniques to unhealthy livers, future research may focus on the impact of hepatic iron content and distribution for noninvasive assessment in diseased subjects.
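    The conversion step in this record is a simple linear map from iron-image CT number to concentration. A minimal sketch using the phantom-derived factor reported above (the 6 HU region-of-interest value is invented for illustration):

    ```python
    # Conversion factor from the phantom study: 0.15 mg Fe/mL per Hounsfield unit
    FACTOR_MG_FE_PER_ML_PER_HU = 0.15

    def iron_concentration_mg_per_ml(hu):
        """Convert a mean iron-image CT number (HU) into an iron
        concentration (mg Fe/mL) using the phantom calibration."""
        return FACTOR_MG_FE_PER_ML_PER_HU * hu

    # A hypothetical region of interest averaging 6 HU on the iron image:
    conc = iron_concentration_mg_per_ml(6.0)
    # 0.9 mg Fe/mL, below the 1.0 mg Fe/mL mean seen in 94.6% of patients
    ```

    The calibration is the quotient of known tube concentrations and their CT values, so applying it in reverse is just this multiplication per region of interest.
    
    
    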

  11. An Ultradeep Chandra Catalog of X-Ray Point Sources in the Galactic Center Star Cluster

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenlin; Li, Zhiyuan; Morris, Mark R.

    2018-04-01

We present an updated catalog of X-ray point sources in the inner 500″ (∼20 pc) of the Galactic center (GC), where the nuclear star cluster (NSC) stands, based on a total of ∼4.5 Ms of Chandra observations taken from 1999 September to 2013 April. This ultradeep data set offers unprecedented sensitivity for detecting X-ray sources in the GC, down to an intrinsic 2–10 keV luminosity of 1.0 × 10³¹ erg s⁻¹. A total of 3619 sources are detected in the 2–8 keV band, among which ∼3500 are probable GC sources and ∼1300 are new identifications. The GC sources collectively account for ∼20% of the total 2–8 keV flux from the inner 250″ region where detection sensitivity is the greatest. Taking advantage of this unprecedented sample of faint X-ray sources that primarily traces the old stellar populations in the NSC, we revisit global source properties, including long-term variability, cumulative spectra, luminosity function, and spatial distribution. Based on the equivalent width and relative strength of the iron lines, we suggest that in addition to the arguably predominant population of magnetic cataclysmic variables (CVs), nonmagnetic CVs contribute substantially to the detected sources, especially in the lower-luminosity group. On the other hand, the X-ray sources have a radial distribution closely following the stellar mass distribution in the NSC, but much flatter than that of the known X-ray transients, which are presumably low-mass X-ray binaries (LMXBs) caught in outburst. This, together with the very modest long-term variability of the detected sources, strongly suggests that quiescent LMXBs are a minor (less than a few percent) population.

  12. Comparison of Particle-Associated Bacteria from a Drinking Water Treatment Plant and Distribution Reservoirs with Different Water Sources.

    PubMed

    Liu, G; Ling, F Q; van der Mark, E J; Zhang, X D; Knezev, A; Verberk, J Q J C; van der Meer, W G J; Medema, G J; Liu, W T; van Dijk, J C

    2016-02-02

This study assessed the characteristics of and changes in the suspended particles and the associated bacteria in an unchlorinated drinking water distribution system and its reservoirs with different water sources. The results show that particle-associated bacteria (PAB) were present at a level of 0.8-4.5 × 10³ cells ml⁻¹ with a biological activity of 0.01-0.04 ng l⁻¹ ATP. Different PAB communities in the waters produced from different sources were revealed by a 16S rRNA-based pyrosequencing analysis. The quantified biomass underestimation due to the multiple cells attached per particle was ≥ 85%. The distribution of the biologically stable water increased the number of cells per particle (from 48 to 90) but had minor effects on the PAB community. Significant changes were observed at the mixing reservoir. Our results show the characteristics of and changes in suspended PAB during distribution, and highlight the significance of suspended PAB in the distribution system, because suspended PAB can lead to a considerable underestimation of biomass, and because they exist as biofilm, which has a greater mobility than pipe-wall biofilm and therefore presents a greater risk, given the higher probability that it will reach the customers' taps and be ingested.

  14. X-ray emission from galaxies - The distribution of low-luminosity X-ray sources in the Galactic Centre region

    NASA Astrophysics Data System (ADS)

    Heard, Victoria; Warwick, Robert

    2012-09-01

    We report a study of the extended X-ray emission observed in the Galactic Centre (GC) region based on archival XMM-Newton data. The GC diffuse emission can be decomposed into three distinct components: the emission from low-luminosity point sources; the fluorescence of (and reflection from) dense molecular material; and soft (kT ~1 keV), diffuse thermal plasma emission most likely energised by supernova explosions. Here, we examine the emission due to unresolved point sources. We show that this source component accounts for the bulk of the 6.7-keV and 6.9-keV line emission. We fit the surface brightness distribution evident in these lines with an empirical 2-d model, which we then compare with a prediction derived from a 3-d mass model for the old stellar population in the GC region. We find that the X-ray surface brightness declines more rapidly with angular offset from Sgr A* than the mass-model prediction. One interpretation is that the X-ray luminosity per solar mass characterising the GC source population is increasing towards the GC. Alternatively, some refinement of the mass-distribution within the nuclear stellar disc may be required. The unresolved X-ray source population is most likely dominated by magnetic CVs. We use the X-ray observations to set constraints on the number density of such sources in the GC region. Our analysis does not support the premise that the GC is pervaded by very hot (~ 7.5 keV) thermal plasma, which is truly diffuse in nature.

  15. The Earth's mantle in a microwave oven: thermal convection driven by a heterogeneous distribution of heat sources

    NASA Astrophysics Data System (ADS)

    Fourel, Loïc; Limare, Angela; Jaupart, Claude; Surducan, Emanoil; Farnetani, Cinzia G.; Kaminski, Edouard C.; Neamtu, Camelia; Surducan, Vasile

    2017-08-01

    Convective motions in silicate planets are largely driven by internal heat sources and secular cooling. The exact amount and distribution of heat sources in the Earth are poorly constrained, and the latter is likely to change with time due to mixing and to the deformation of boundaries that separate different reservoirs. To improve our understanding of planetary-scale convection in these conditions, we have designed a new laboratory setup allowing a large range of heat source distributions. We illustrate the potential of our new technique with a study of an initially stratified fluid involving two layers with different physical properties and internal heat production rates. A modified microwave oven is used to generate a uniform radiation propagating through the fluids. Experimental fluids are solutions of hydroxyethyl cellulose and salt in water, such that salt increases both the density and the volumetric heating rate. We determine temperature and composition fields in 3D with non-invasive techniques. Two fluorescent dyes are used to determine temperature. A Nd:YAG planar laser beam excites fluorescence, and an optical system, involving a beam splitter and a set of colour filters, captures the fluorescence intensity distribution on two separate spectral bands. The ratio between the two intensities provides an instantaneous determination of temperature with an uncertainty of 5% (typically 1 K). We quantify mixing processes by precisely tracking the interfaces separating the two fluids. These novel techniques allow new insights into the generation, morphology and evolution of large-scale heterogeneities in the Earth's lower mantle.

  16. The Distribution of Cosmic-Ray Sources in the Galaxy, Gamma-Rays and the Gradient in the CO-to-H2 Relation

    NASA Technical Reports Server (NTRS)

    Strong, A. W.; Moskalenko, I. V.; Reimer, O.; Digel, S.; Diehl, R.

    2004-01-01

    We present a solution to the apparent discrepancy between the radial gradient in the diffuse Galactic gamma-ray emissivity and the distribution of supernova remnants, believed to be the sources of cosmic rays. Recent determinations of the pulsar distribution have made the discrepancy even more apparent. The problem is shown to be plausibly solved by a variation in the Wco-to-N(H2) scaling factor. If this factor increases by a factor of 5-10 from the inner to the outer Galaxy, as expected from the Galactic metallicity gradient and supported by other evidence, we show that the source distribution required to match the radial gradient of gamma-rays can be reconciled with the distribution of supernova remnants as traced by current studies of pulsars. The resulting model fits the EGRET gamma-ray profiles extremely well in longitude, and reproduces the mid-latitude inner Galaxy intensities better than previous models.

  17. Release of accumulated arsenic from distribution pipes into tap water after arsenic treatment of source water- presentation

    EPA Science Inventory

    Toxic arsenic (As) is known to incorporate from source well water onto the scales of distribution system pipes such as iron, copper, galvanized steel and even plastic containing internal buildup of iron coatings (Lytle et al., 2010, 2004; Schock, 2015; Reiber and Dostal, 2000). W...

  18. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident at the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, since the retrieved source term is very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide: activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition for reliably estimating the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6–19.3 PBq, with an estimated standard deviation range of 15–20% depending on the method and the data sets. The "blind" time intervals of the source term have also been strongly mitigated compared to the first estimations based only on activity concentration data.
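    The joint use of several observation data sets, each with its own error level, can be illustrated very schematically as a weighted least-squares inversion for a shared release-rate vector. The sketch below is not the authors' operational system: the source-receptor matrices, the noise levels, and the simple Gaussian background with positivity enforced by clipping are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_src = 6                                       # release rates over 6 time intervals
    x_true = np.array([5., 3., 0., 0., 2., 1.])     # synthetic "true" source

    H_air = rng.random((12, n_src))                 # source-receptor matrix, air concentrations
    H_dep = rng.random((8, n_src))                  # source-receptor matrix, deposition

    y_air = H_air @ x_true + rng.normal(0, 0.05, 12)
    y_dep = H_dep @ x_true + rng.normal(0, 0.10, 8)

    def joint_inversion(blocks, sigma_b=10.0):
        """Weighted least squares over several (H, y, sigma) data sets,
        with a weak Gaussian background term and positivity by clipping."""
        A = np.eye(n_src) / sigma_b**2              # background (prior) precision
        b = np.zeros(n_src)
        for H, y, sigma in blocks:
            A += H.T @ H / sigma**2                 # accumulate normal equations
            b += H.T @ y / sigma**2
        return np.clip(np.linalg.solve(A, b), 0.0, None)

    x_hat = joint_inversion([(H_air, y_air, 0.05), (H_dep, y_dep, 0.10)])
    total = x_hat.sum()                             # total released quantity estimate
    ```

    In the actual study, the per-data-set error variances are themselves estimated rather than assumed known; that simultaneous estimation is the central point of the paper.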

  19. Characterization of trapped charges distribution in terms of mirror plot curve.

    PubMed

    Al-Obaidi, Hassan N; Mahdi, Ali S; Khaleel, Imad H

    2018-01-01

    The accumulation of charges (electrons) at the specimen surface in a scanning electron microscope (SEM) leads to the generation of an electrostatic potential. Using the method of image charges, this potential is defined within the chamber space of the apparatus. The deduced formula is expressed in terms of a general volumetric distribution, which is proposed to be an infinitesimal spherical extension. With the aid of the binomial theorem, the defined potential is expanded into a multipolar form. The resulting formula is then adopted to modify a novel mirror plot equation so as to detect the real distribution of trapped charges. Simulation results reveal that trapped charges may take on various arrangements, such as monopole, quadrupole and octupole; however, none of these arrangements occurs alone, and what forms instead is a mixture of them. The influence of each of these profiles depends on the distance between the incident electron and the surface of the sample. The results also show that the amount of trapped charge can indicate a threshold at which the point-charge approximation fails. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), that inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light of the standard DA is approximated by multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and is further introduced into the image reconstruction of a Laminar Optical Tomography system.
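    The virtual-source idea can be sketched with textbook diffusion-theory Green's functions: the collimated beam is replaced by isotropic point sources buried along the incidence axis, each paired with a negative image source to approximate the boundary condition. All parameter values below (optical properties, VS intensities and depths, the simplified extrapolated-boundary offset) are illustrative assumptions, not the fitted 2VS-DA coefficients from the paper.

    ```python
    import numpy as np

    mu_a, mu_s_prime = 0.1, 10.0            # absorption / reduced scattering (1/mm)
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient
    mu_eff = np.sqrt(mu_a / D)              # effective attenuation
    z_b = 2.0 * D                           # extrapolated-boundary offset (simplified)

    def fluence(rho, sources):
        """Diffuse fluence at radial surface distance rho from a list of
        (intensity, depth) virtual sources plus their negative images."""
        phi = 0.0
        for w, z0 in sources:
            r1 = np.sqrt(rho**2 + z0**2)               # real source
            r2 = np.sqrt(rho**2 + (z0 + 2*z_b)**2)     # image source
            phi += w / (4*np.pi*D) * (np.exp(-mu_eff*r1)/r1 - np.exp(-mu_eff*r2)/r2)
        return phi

    # two virtual sources along the incident direction (intensities/depths assumed)
    vs = [(0.7, 1.0/mu_s_prime), (0.3, 3.0/mu_s_prime)]
    rho = np.linspace(0.2, 5.0, 25)
    phi = fluence(rho, vs)
    ```

    In the 2VS-DA model proper, the intensities and locations of the two sources come from closed-form fits against realistic reflectance, rather than being chosen by hand as here.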

  1. Isotropic source terms of San Jacinto fault zone earthquakes based on waveform inversions with a generalized CAP method

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.; Zhu, L.

    2015-02-01

    We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ˜0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.

  2. Measuring trace gas emission from multi-distributed sources using vertical radial plume mapping (VRPM) and backward Lagrangian stochastic (bLS) techniques

    USDA-ARS?s Scientific Manuscript database

    Two micrometeorological techniques for measuring trace gas emission rates from distributed area sources were evaluated using a variety of synthetic area sources. The accuracy of the vertical radial plume mapping (VRPM) and the backward Lagrangian (bLS) techniques with an open-path optical spectrosco...

  3. Generating high-accuracy urban distribution map for short-term change monitoring based on convolutional neural network by utilizing SAR imagery

    NASA Astrophysics Data System (ADS)

    Iino, Shota; Ito, Riho; Doi, Kento; Imaizumi, Tomoyuki; Hikosaka, Shuhei

    2017-10-01

    In developing countries, urban areas are expanding rapidly, and with such rapid development, short-term monitoring of urban changes is important. Constant observation and the creation of urban distribution maps that are highly accurate and free of noise are the key issues for short-term monitoring. SAR satellites, which can observe day or night regardless of atmospheric and weather conditions, are highly suitable for this type of study. The current study presents a methodology for generating high-accuracy urban distribution maps from SAR satellite imagery based on a Convolutional Neural Network (CNN), which has shown outstanding results in image classification. Several improvements to the SAR polarization combinations and the dataset construction were made to increase the accuracy. As additional data, a Digital Surface Model (DSM), which is useful for classifying land cover, was added to further improve the accuracy. From the obtained results, a high-accuracy urban distribution map satisfying the quality requirements of short-term monitoring was generated. For the evaluation, urban changes were extracted by differencing successive urban distribution maps. The change analysis with a time series of imagery revealed the locations of short-term urban change. Comparisons with optical satellites were performed to validate the results. Finally, an analysis of urban changes combining X-band, L-band and C-band SAR satellites was attempted to increase the opportunities for acquiring satellite imagery. Further analysis will be conducted as future work of the present study.

  4. Size distributions of polycyclic aromatic hydrocarbons in urban atmosphere: sorption mechanism and source contributions to respiratory deposition

    NASA Astrophysics Data System (ADS)

    Lv, Yan; Li, Xiang; Xu, Ting Ting; Cheng, Tian Tao; Yang, Xin; Chen, Jian Min; Iinuma, Yoshiteru; Herrmann, Hartmut

    2016-03-01

    In order to better understand the particle size distribution of polycyclic aromatic hydrocarbons (PAHs) and their source contributions to the human respiratory system, size-resolved PAHs have been studied in ambient aerosols at a megacity site in Shanghai during a 1-year period (2012-2013). The results showed that the PAHs had a bimodal distribution, with one mode peak in the fine-particle size range (0.4-2.1 µm) and another in the coarse-particle size range (3.3-9.0 µm). As the ring number of the PAHs increased, the intensity of the fine-mode peak increased while that of the coarse-mode peak decreased. Plotting log(PAH / PM) against log(Dp) showed that all slope values were above -1, suggesting that multiple mechanisms (adsorption and absorption) controlled the particle size distribution of PAHs. The total deposition flux of PAHs in the respiratory tract was calculated to be 8.8 ± 2.0 ng h⁻¹. The highest lifetime cancer risk (LCR) was estimated at 1.5 × 10⁻⁶, which exceeded the unit risk of 10⁻⁶. The LCR values presented here were mainly influenced by accumulation-mode PAHs, which came from biomass burning (24 %), coal combustion (25 %), and vehicular emission (27 %). The present study provides a mechanistic understanding of the particle size distribution of PAHs and their transport in the human respiratory system, which can help develop better source control strategies.
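    The sorption diagnostic mentioned above, the slope of log(PAH/PM) versus log(Dp), amounts to a simple log-log regression: a slope near -1 is the classical adsorption signature, while shallower slopes point to mixed adsorption/absorption. A minimal sketch with made-up size-bin data (not the study's measurements):

    ```python
    import numpy as np

    dp = np.array([0.4, 0.7, 1.1, 2.1, 3.3, 5.8, 9.0])   # size-bin midpoints (μm)
    ratio = 0.02 * dp**-0.6                               # synthetic PAH/PM ratio

    # least-squares slope in log-log space
    slope, intercept = np.polyfit(np.log10(dp), np.log10(ratio), 1)
    multiple_mechanisms = slope > -1.0   # shallower than -1: not pure adsorption
    ```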

  5. Characterisation of organic matter source and sediment distribution in Ashtamudi Estuary, southern India

    NASA Astrophysics Data System (ADS)

    Kumar, Prem; Ankit, Yadav; Mishra, Praveen K.; Jha, Deepak Kumar; Anoop, Ambili

    2017-04-01

    In the present study we have focussed on the surface sediments of Ashtamudi Estuary (southern India) to understand (i) the fate and sources of organic matter, by investigating the lipid biomarker (n-alkane) distribution in modern sediments and vegetation samples, and (ii) the processes controlling sediment distribution in the basin, using an end-member modelling approach. The sediment n-alkanes from the Ashtamudi Estuary exhibit a pronounced odd-over-even predominance with maxima at the C29 and C31 chain lengths, indicative of a dominant terrestrial contribution. A number of n-alkane indices have been calculated to illustrate the spatial variability, considering separately the river-dominated northern reaches and the tidally influenced southern part of the estuary. The highest terrigenous organic contents were found in sediments from the river and upper bay sites, with smaller contributions to the lower parts of the estuary. The Paq and TAR (terrigenous/aquatic ratio) indices demonstrate maximum aquatic productivity (plankton growth and submerged macrophytes) in the tidally dominated region of the estuary. The carbon preference index (CPI) and average chain length (ACL) provide evidence for high petrogenic organic inputs in the tidal zone, whereas a dominant biogenic contribution is observed in the riverine zone. In addition, end-member modelling of the grain size distribution of the surface sediment samples enabled us to decipher significant sedimentological processes affecting sediment distribution in the estuarine setting. The end member with the highest loading on the coarser fraction peaks where the estuary debouches into the sea, whereas the samples near the mouth of the river show a finer end-member fraction.
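    The n-alkane indices named in this abstract follow standard formulas; a minimal sketch with invented abundances (arbitrary units, chain lengths C15-C34) is:

    ```python
    # synthetic n-alkane abundances keyed by carbon chain length
    abund = {15: 2.0, 17: 3.0, 19: 2.5, 23: 4.0, 24: 1.0, 25: 5.0, 26: 1.2,
             27: 8.0, 28: 1.5, 29: 14.0, 30: 1.8, 31: 12.0, 32: 1.4, 33: 6.0, 34: 1.0}

    odd_25_33 = sum(abund[n] for n in range(25, 34, 2))
    even_24_32 = sum(abund[n] for n in range(24, 33, 2))
    even_26_34 = sum(abund[n] for n in range(26, 35, 2))

    # carbon preference index: odd-over-even predominance (terrestrial plants >> 1)
    cpi = 0.5 * (odd_25_33/even_24_32 + odd_25_33/even_26_34)
    # average chain length over the odd long chains
    acl = sum(n*abund[n] for n in range(25, 34, 2)) / odd_25_33
    # Paq: submerged/floating macrophyte proxy; TAR: terrigenous/aquatic ratio
    paq = (abund[23]+abund[25]) / (abund[23]+abund[25]+abund[29]+abund[31])
    tar = (abund[27]+abund[29]+abund[31]) / (abund[15]+abund[17]+abund[19])
    ```

    The CPI here uses the common C25-C33 odd / C24-C34 even window; other window choices appear in the literature, and the abstract does not state which one was used.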

  6. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

    The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
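    The L-curve tuning step can be illustrated on a generic Tikhonov-regularized toy inversion (not the maximum-entropy-on-the-mean functional of the paper): for each trial regularization weight one records the residual norm and the solution norm, then picks the corner that balances the two. The matrices, noise level and the crude corner criterion below are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.random((30, 10))                      # toy source-receptor matrix
    x_true = np.abs(rng.normal(1.0, 0.5, 10))
    y = G @ x_true + rng.normal(0, 0.05, 30)      # noisy observations

    def tikhonov(lam):
        """Regularized least squares: minimize ||Gx - y||^2 + lam ||x||^2."""
        A = G.T @ G + lam * np.eye(10)
        return np.linalg.solve(A, G.T @ y)

    lams = np.logspace(-6, 2, 40)
    residual = np.array([np.linalg.norm(G @ tikhonov(l) - y) for l in lams])
    solnorm = np.array([np.linalg.norm(tikhonov(l)) for l in lams])

    # crude proxy for the L-curve corner: the weight minimizing the product
    # of residual norm and solution norm (maximum-curvature criteria are common)
    best = lams[np.argmin(residual * solnorm)]
    ```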

  7. Preliminary investigation of processes that affect source term identification. Environmental Restoration Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wickliff, D.S.; Solomon, D.K.; Farrow, N.D.

    Solid Waste Storage Area (SWSA) 5 is known to be a significant source of contaminants, especially tritium ({sup 3}H), to the White Oak Creek (WOC) watershed. For example, Solomon et al. (1991) estimated the total {sup 3}H discharge in Melton Branch (most of which originates in SWSA 5) for the 1988 water year to be 1210 Ci. A critical issue for making decisions concerning remedial actions at SWSA 5 is knowing whether the annual contaminant discharge is increasing or decreasing. Because (1) the magnitude of the annual contaminant discharge is highly correlated to the amount of annual precipitation (Solomon et al., 1991) and (2) a significant lag may exist between the time of peak contaminant release from primary sources (i.e., waste trenches) and the time of peak discharge into streams, short-term stream monitoring by itself is not sufficient for predicting future contaminant discharges. In this study we use {sup 3}H to examine the link between contaminant release from primary waste sources and contaminant discharge into streams. By understanding and quantifying subsurface transport processes, realistic predictions of future contaminant discharge, along with an evaluation of the effectiveness of remedial action alternatives, will be possible. The objectives of this study are (1) to characterize the subsurface movement of contaminants (primarily {sup 3}H) with an emphasis on the effects of matrix diffusion; (2) to determine the relative strength of primary vs secondary sources; and (3) to establish a methodology capable of determining whether the {sup 3}H discharge from SWSA 5 to streams is increasing or decreasing.

  8. Managing multicentre clinical trials with open source.

    PubMed

    Raptis, Dimitri Aristotle; Mettler, Tobias; Fischer, Michael Alexander; Patak, Michael; Lesurtel, Mickael; Eshmuminov, Dilmurodjon; de Rougemont, Olivier; Graf, Rolf; Clavien, Pierre-Alain; Breitenstein, Stefan

    2014-03-01

    Multicentre clinical trials are challenged by a high administrative burden, data management pitfalls and costs. This leads to reduced enthusiasm and commitment of the physicians involved and thus to a reluctance to conduct multicentre clinical trials. The purpose of this study was to develop a web-based open source platform to support multicentre clinical trials. Using Drupal, an open source software framework distributed under the terms of the General Public License, we developed a web-based multicentre clinical trial management system following the design science research approach. The system was evaluated by user testing, has successfully supported several completed and ongoing clinical trials, and is available for free download. Open source clinical trial management systems are capable of supporting multicentre clinical trials by enhancing efficiency, the quality of data management, and collaboration.

  9. Regulatory Technology Development Plan - Sodium Fast Reactor. Mechanistic Source Term - Metal Fuel Radionuclide Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-02-01

    The development of an accurate and defensible mechanistic source term will be vital for the future licensing efforts of metal fuel, pool-type sodium fast reactors. To assist in the creation of a comprehensive mechanistic source term, the current effort sought to estimate the release fraction of radionuclides from metal fuel pins to the primary sodium coolant during fuel pin failures at a variety of temperature conditions. These release estimates were based on the findings of an extensive literature search, which reviewed past experimentation and reactor fuel damage accidents. Data sources for each radionuclide of interest were reviewed to establish release fractions, along with possible release dependencies, and the corresponding uncertainty levels. Although the current knowledge base is substantial, and radionuclide release fractions were established for the elements deemed important for the determination of offsite consequences following a reactor accident, gaps were found pertaining to several radionuclides. First, there is uncertainty regarding the transport behavior of several radionuclides (iodine, barium, strontium, tellurium, and europium) during metal fuel irradiation to high burnup levels. The migration of these radionuclides within the fuel matrix and bond sodium region can greatly affect their release during pin failure incidents. Post-irradiation examination of existing high burnup metal fuel can likely resolve this knowledge gap. Second, data regarding the radionuclide release from molten high burnup metal fuel in sodium is sparse, which makes the assessment of radionuclide release from fuel melting accidents at high fuel burnup levels difficult. This gap could be addressed through fuel melting experimentation with samples from the existing high burnup metal fuel inventory.

  10. Steady-state solution of the semi-empirical diffusion equation for area sources. [air pollution studies

    NASA Technical Reports Server (NTRS)

    Lebedeff, S. A.; Hameed, S.

    1975-01-01

    The problem investigated can be solved exactly in a simple manner if the equations are written in terms of a similarity variable. The exact solution is used to explore two questions of interest in the modelling of urban air pollution, taking into account the distribution of surface concentration downwind of an area source and the distribution of concentration with height.

  11. Characteristics and source distribution of air pollution in winter in Qingdao, eastern China.

    PubMed

    Li, Lingyu; Yan, Dongyun; Xu, Shaohui; Huang, Mingli; Wang, Xiaoxia; Xie, Shaodong

    2017-05-01

    To characterize air pollution and determine its source distribution in Qingdao, Shandong Province, we analyzed hourly data on common pollutants from the national air quality monitoring network at nine sites from 1 November 2015 to 31 January 2016. The average hourly concentrations of particulate matter <2.5 μm (PM2.5) and <10 μm (PM10), SO2, NO2, 8-h O3, and CO in Qingdao were 83, 129, 39, 41, and 41 μg m⁻³, and 1.243 mg m⁻³, respectively. During the polluted periods of 19-26 December 2015, 29 December 2015 to 4 January 2016, and 14-17 January 2016, the mean 24-h PM2.5 concentration was 168 μg m⁻³, with a maximum of 311 μg m⁻³. PM2.5 was the main pollutant contributing to pollution during these periods. Heavier pollution and higher contributions of secondary formation to the PM2.5 concentration were observed in December and January. Pollution pathways and source distribution were investigated using the HYbrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model together with potential source contribution function (PSCF) and concentration weighted trajectory (CWT) analyses. A cluster from the west, originating in Shanxi, southern Hebei, and western Shandong Provinces, accounted for 44.1% of the total air masses, had a mean PM2.5 concentration of 134.9 μg m⁻³, and had 73.9% of its trajectories polluted. This area contributed the most to PM2.5 and PM10 levels, >160 and >300 μg m⁻³, respectively. In addition, primary crustal aerosols from the deserts of Inner Mongolia, and coarse and fine marine aerosols from the Yellow Sea, contributed to ambient PM. The ambient pollutant concentrations in Qingdao in winter could be attributed to local primary emissions (e.g., coal combustion, vehicular, domestic and industrial emissions), secondary formation, and long-distance transport of emissions. Copyright © 2016 Elsevier Ltd. All rights reserved.
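    The PSCF analysis mentioned above is conceptually simple: grid the back-trajectory endpoints and, for each cell, take the fraction of endpoints that belong to "polluted" trajectories (receptor concentration above a criterion value). The sketch below uses synthetic trajectories on a toy 5x5 grid and an assumed 75th-percentile criterion, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_traj = 200
    conc = rng.gamma(2.0, 40.0, n_traj)           # receptor PM2.5 per trajectory
    criterion = np.percentile(conc, 75)           # "polluted" threshold (assumed)

    # each trajectory has 10 hourly endpoints falling in a 5x5 lon-lat grid
    cells = rng.integers(0, 5, size=(n_traj, 10, 2))

    n_ij = np.zeros((5, 5))                       # all endpoints per cell
    m_ij = np.zeros((5, 5))                       # polluted-trajectory endpoints per cell
    for t in range(n_traj):
        for i, j in cells[t]:
            n_ij[i, j] += 1
            if conc[t] > criterion:
                m_ij[i, j] += 1

    # PSCF = m_ij / n_ij, guarding empty cells
    pscf = np.divide(m_ij, n_ij, out=np.zeros_like(m_ij), where=n_ij > 0)
    ```

    Operational PSCF studies usually also down-weight cells with few endpoints; that weighting is omitted here for brevity.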

  12. Sources, distribution and export coefficient of phosphorus in lowland polders of Lake Taihu Basin, China.

    PubMed

    Huang, Jiacong; Gao, Junfeng; Jiang, Yong; Yin, Hongbin; Amiri, Bahman Jabbarian

    2017-12-01

    Identifying phosphorus (P) sources, distribution and export from lowland polders is important for P pollution management but is challenging due to the high complexity of the hydrological and P transport processes in lowland areas. In this study, the spatial pattern and temporal dynamics of the P export coefficient (PEC) from all 2539 polders in Lake Taihu Basin, China were estimated using a coupled P model describing P dynamics in a polder system. The estimated amount of P exported from polders in Lake Taihu Basin during 2013 was 1916.2 t/yr, with a spatially averaged PEC of 1.8 kg/ha/yr. The PEC had peak values (more than 4.0 kg/ha/yr) in the polders near or within the large cities, and was high during the rice-cropping season. Sensitivity analysis based on the coupled P model revealed that the sensitive factors controlling the PEC varied spatially and changed through time. Precipitation and air temperature were the most sensitive factors controlling the PEC, while culvert control and fertilization were sensitive factors during some periods. This study demonstrated an estimation of the PEC from 2539 polders in Lake Taihu Basin and an identification of the sensitive environmental factors affecting it. Investigating polder P export at the watershed scale helps water managers learn the distribution of P sources, identify key P sources, and thus achieve best management practice in controlling P export from lowland areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms

    DOE PAGES

    Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary

    2013-04-01

    The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral Hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge-exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.
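    One simple way to build up source-term statistics in a time-dependent run (an illustration of the trade-off described above, not necessarily the authors' scheme) is to blend freshly sampled Monte Carlo source terms into a running estimate with an exponential weight, trading statistical noise against time lag:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    true_source = np.sin(np.linspace(0, 2*np.pi, 50))     # slowly varying "truth"

    alpha = 0.2                  # blending weight per time step (assumed)
    est = np.zeros(50)
    for step in range(200):
        sample = true_source + rng.normal(0, 0.5, 50)     # noisy kinetic sample
        est = (1 - alpha) * est + alpha * sample          # exponential accumulation
    ```

    Smaller alpha averages over more samples (less noise) but responds more slowly to a genuinely time-dependent source, which is exactly the tension between steady-state and solar-cycle-driven runs.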

  14. Searching and exploitation of distributed geospatial data sources via the Naval Research Lab's Geospatial Information Database (GIDB) Portal System

    NASA Astrophysics Data System (ADS)

    McCreedy, Frank P.; Sample, John T.; Ladd, William P.; Thomas, Michael L.; Shaw, Kevin B.

    2005-05-01

    The Naval Research Laboratory's Geospatial Information Database (GIDB) Portal System has been extended to include an extensive geospatial search functionality. The GIDB Portal System interconnects over 600 distributed geospatial data sources via the Internet with a thick client, a thin client and a PDA client. As the GIDB Portal System has grown rapidly over the last two years (adding hundreds of geospatial sources), the obvious requirement has arisen to more effectively mine the interconnected sources in near real-time. How the GIDB Search addresses this issue is the prime focus of this paper.

  15. On the numerical treatment of nonlinear source terms in reaction-convection equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    The objectives of this paper are to investigate how various numerical treatments of the nonlinear source term in a model reaction-convection equation can affect the stability of steady-state numerical solutions, and to show under what conditions the conventional linearized analysis breaks down. The underlying goal is to provide part of the basic building blocks toward the ultimate goal of constructing suitable numerical schemes for hypersonic reacting flows, combustion, and certain turbulence models in compressible Navier-Stokes computations. It can be shown that nonlinear analysis uncovers much of the nonlinear behaviour that linearized analysis is not capable of predicting in a model reaction-convection equation.
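    The basic stability issue at stake can be seen on a toy linear stiff source u' = -mu*u (a stand-in model, not the paper's reaction-convection equation): the exact solution decays, yet with dt*mu = 5 an explicit treatment of the source blows up while an implicit (here equivalently linearized) treatment stays stable.

    ```python
    mu, dt = 100.0, 0.05    # dt*mu = 5: well beyond the explicit stability limit

    def step_explicit(u):
        # u^{n+1} = (1 - dt*mu) u : amplification factor -4, unstable
        return u + dt * (-mu * u)

    def step_implicit(u):
        # u^{n+1} = u / (1 + dt*mu) : amplification factor 1/6, stable
        return u / (1.0 + dt * mu)

    u_e = u_i = 1.0
    for _ in range(20):
        u_e = step_explicit(u_e)
        u_i = step_implicit(u_i)
    ```

    For a genuinely nonlinear source the linearized-implicit amplification factor depends on the state, which is where, as the abstract notes, conventional linearized analysis can break down.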

  16. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
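    The minimum weighted norm family referred to above has a closed form: given measurements y = Hx and a positive weight matrix W, the solution minimizing x'Wx subject to Hx = y is x = W⁻¹H'(HW⁻¹H')⁻¹y. The sketch below uses an arbitrary diagonal W on a toy underdetermined problem; it is not the renormalizing weight matrix of the method, whose construction is the paper's subject.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 5, 20                               # few measurements, many unknowns
    H = rng.random((m, n))                     # toy observation (adjoint) operator
    x_true = np.zeros(n); x_true[7] = 2.0      # single point source
    y = H @ x_true

    W = np.diag(rng.uniform(0.5, 2.0, n))      # example weights (assumed, not renormalized)
    Winv = np.linalg.inv(W)

    # minimum weighted norm (generalized inverse) solution
    x_mwn = Winv @ H.T @ np.linalg.solve(H @ Winv @ H.T, y)
    ```

    Any W in this family reproduces the data exactly; the renormalization condition singles out the weights that give the solution its optimal point-source localization properties.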

  17. [Spatio-temporal characteristics and source identification of water pollutants in Wenruitang River watershed].

    PubMed

    Ma, Xiao-xue; Wang, La-chun; Liao, Ling-ling

    2015-01-01

    Identifying the spatio-temporal distribution and sources of water pollutants is of great significance for efficient water quality management and pollution control in the Wenruitang River watershed, China. A total of twelve water quality parameters, including temperature, pH, dissolved oxygen (DO), total nitrogen (TN), ammonia nitrogen (NH4+-N), electrical conductivity (EC), turbidity (Turb), nitrite-N (NO2-), nitrate-N (NO3-), phosphate-P (PO4(3-)), total organic carbon (TOC) and silicate (SiO3(2-)), were analyzed from September 2008 to October 2009. Geographic information system (GIS) and principal component analysis (PCA) were used to determine the spatial distribution and to apportion the sources of pollutants. The results demonstrated that TN, NH4+-N and PO4(3-) were the main pollutants during the flow, wet and dry periods, respectively, which was mainly caused by urban point sources and agricultural and rural non-point sources. In spatial terms, the order of pollution was tertiary river > secondary river > primary river, while the water quality was worse in city zones than in the suburb and wetland zones regardless of river classification. In temporal terms, the order of pollution was dry period > wet period > flow period. Population density, land use type and water transfer affected the water quality in the Wenruitang River.
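The PCA step used for source apportionment in studies like this one amounts to an eigendecomposition of the correlation matrix of standardized water-quality parameters; high loadings on a component suggest a common source. The data below are random stand-ins for the twelve measured parameters:

```python
import numpy as np

# PCA on standardized water-quality data (synthetic stand-in, 60 samples x 12
# parameters such as TN, NH4+-N, PO4(3-), TOC in the actual study).
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 12))
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each parameter
R = np.cov(Z, rowvar=False)                # (near-)correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()        # fraction of variance per component
loadings = eigvecs * np.sqrt(eigvals)      # parameter loadings on each PC
print("variance explained by PC1:", explained[0])
```

Components whose retained loadings cluster on, say, nutrient species would then be interpreted as an agricultural non-point signature, and so on.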

  18. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists of reducing the amount of energy advected within the propagation scheme, and it is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.

  19. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.

  20. NOTE: A Monte Carlo study of dose rate distribution around the specially asymmetric CSM3-a 137Cs source

    NASA Astrophysics Data System (ADS)

    Pérez-Calatayud, J.; Lliso, F.; Ballester, F.; Serrano, M. A.; Lluch, J. L.; Limami, Y.; Puchades, V.; Casal, E.

    2001-07-01

    The CSM3 137Cs type stainless-steel encapsulated source is widely used in manually afterloaded low-dose-rate brachytherapy. A specially asymmetric source, CSM3-a, has been designed by CIS Bio International (France) by substituting the eyelet-side seed of the CSM3 source with an inactive material. This modification allows a uniform dose level over the upper vaginal surface when this `linear' source is inserted at the top of dome vaginal applicators. In this study the Monte Carlo GEANT3 simulation code, incorporating the source geometry in detail, was used to investigate the dosimetric characteristics of this special CSM3-a 137Cs brachytherapy source. The absolute dose rate distribution in water around this source was calculated and is presented in the form of an along-away table. Comparison of Sievert-integral-type calculations with the Monte Carlo results is discussed.

  1. The source and distribution of thermogenic dissolved organic matter in the ocean

    NASA Astrophysics Data System (ADS)

    Dittmar, T.; Suryaputra, I. G. N. A.; Paeng, J.

    2009-04-01

    Thermogenic organic matter (ThOM) is abundant in the environment. ThOM is produced at elevated temperature and pressure in deep sediments and the Earth's crust, and it is also a residue of fossil fuel and biomass burning ("black carbon"). Because of its refractory character, it accumulates in soils and sediments and, therefore, may sequester carbon from active cycles. It was hypothesized that a significant component of marine dissolved organic matter (DOM) might be thermogenic. Here we present a detailed data set on the distribution of thermogenic DOM in major water masses of the deep and surface ocean. In addition, several potential sources of thermogenic DOM to the ocean were investigated: active seeps of brine fluids in the deep Gulf of Mexico, rivers, estuaries and submarine groundwaters. Studies on deep-sea hydrothermal vents and aerosol deposition are ongoing. All DOM samples were isolated from seawater via solid phase extraction (SPE-DOM). ThOM was quantified in the extracts as benzene-polycarboxylic acids (BPCAs) after nitric acid oxidation via high-performance liquid chromatography and diode array detection (HPLC-DAD). BPCAs are produced exclusively from fused ring systems and are therefore unambiguous molecular tracers for ThOM. In addition to BPCA determination, the molecular composition and structure of ThOM was characterized in detail via ultrahigh resolution mass spectrometry (FT-ICR-MS). All marine and river DOM samples yielded significant amounts of BPCAs. The cold seep system in the deep Gulf of Mexico, but also black water rivers (like the Suwannee River), were particularly rich in ThOM. Up to 10% of total dissolved organic carbon was thermogenic in both systems. The most abundant BPCA was benzene-pentacarboxylic acid (B5CA). The molecular composition of BPCAs and the FT-ICR-MS data indicate a relatively small number (5-8) of fused aromatic rings per molecule. Overall, the molecular BPCA patterns were very similar independent of the source of ThOM.

  2. POI Summarization by Aesthetics Evaluation From Crowd Source Social Media.

    PubMed

    Qian, Xueming; Li, Cheng; Lan, Ke; Hou, Xingsong; Li, Zhetao; Han, Junwei

    2018-03-01

    Place-of-Interest (POI) summarization by aesthetics evaluation can recommend a set of POI images to the user, and it is significant in image retrieval. In this paper, we propose a system that summarizes a collection of POI images regarding both aesthetics and the diversity of the distribution of cameras. First, we generate visual albums by a coarse-to-fine POI clustering approach and then generate 3D models for each album from the images collected from social media. Second, based on the 3D-to-2D projection relationship, we select candidate photos in terms of the proposed crowd-sourced saliency model. Third, in order to improve the performance of the aesthetic measurement model, we propose a crowd-sourced saliency detection approach that explores the distribution of salient regions in the 3D model. Then, we measure the composition aesthetics of each image and explore crowd-sourced salient features to yield a saliency map, based on which we propose an adaptive image-adoption approach. Finally, we combine diversity and aesthetics to recommend aesthetic pictures. Experimental results show that the proposed POI summarization approach can return images with diverse camera distributions and high aesthetics.

  3. Long-Term Changes in the Distributions of Larval and Adult Fish in the Northeast U.S. Shelf Ecosystem.

    PubMed

    Walsh, Harvey J; Richardson, David E; Marancik, Katrin E; Hare, Jonathan A

    2015-01-01

    Many studies have documented long-term changes in adult marine fish distributions and linked these changes to climate change and multi-decadal climate variability. Most marine fish, however, have complex life histories with morphologically distinct stages, which use different habitats. Shifts in distribution of one stage may affect the connectivity between life stages and thereby impact population processes including spawning and recruitment. Specifically, many marine fish species have a planktonic larval stage, which lasts from weeks to months. We compared the spatial distribution and seasonal occurrence of larval fish in the Northeast U.S. Shelf Ecosystem to test whether spatial and temporal distributions changed between two decades. Two large-scale ichthyoplankton programs sampled using similar methods and spatial domain each decade. Adult distributions from a long-term bottom trawl survey over the same time period and spatial area were also analyzed using the same analytical framework to compare changes in larval and adult distributions between the two decades. Changes in spatial distribution of larvae occurred for 43% of taxa, with shifts predominately northward (i.e., along-shelf). Timing of larval occurrence shifted for 49% of the larval taxa, with shifts evenly split between occurring earlier and later in the season. Where both larvae and adults of the same species were analyzed, 48% exhibited different shifts between larval and adult stages. Overall, these results demonstrate that larval fish distributions are changing in the ecosystem. The spatial changes are largely consistent with expectations from a changing climate. The temporal changes are more complex, indicating we need a better understanding of reproductive timing of fishes in the ecosystem. These changes may impact population productivity through changes in life history connectivity and recruitment, and add to the accumulating evidence for changes in the Northeast U.S. Shelf Ecosystem with

  4. Long-Term Changes in the Distributions of Larval and Adult Fish in the Northeast U.S. Shelf Ecosystem

    PubMed Central

    2015-01-01

    Many studies have documented long-term changes in adult marine fish distributions and linked these changes to climate change and multi-decadal climate variability. Most marine fish, however, have complex life histories with morphologically distinct stages, which use different habitats. Shifts in distribution of one stage may affect the connectivity between life stages and thereby impact population processes including spawning and recruitment. Specifically, many marine fish species have a planktonic larval stage, which lasts from weeks to months. We compared the spatial distribution and seasonal occurrence of larval fish in the Northeast U.S. Shelf Ecosystem to test whether spatial and temporal distributions changed between two decades. Two large-scale ichthyoplankton programs sampled using similar methods and spatial domain each decade. Adult distributions from a long-term bottom trawl survey over the same time period and spatial area were also analyzed using the same analytical framework to compare changes in larval and adult distributions between the two decades. Changes in spatial distribution of larvae occurred for 43% of taxa, with shifts predominately northward (i.e., along-shelf). Timing of larval occurrence shifted for 49% of the larval taxa, with shifts evenly split between occurring earlier and later in the season. Where both larvae and adults of the same species were analyzed, 48% exhibited different shifts between larval and adult stages. Overall, these results demonstrate that larval fish distributions are changing in the ecosystem. The spatial changes are largely consistent with expectations from a changing climate. The temporal changes are more complex, indicating we need a better understanding of reproductive timing of fishes in the ecosystem. These changes may impact population productivity through changes in life history connectivity and recruitment, and add to the accumulating evidence for changes in the Northeast U.S. Shelf Ecosystem with

  5. The radio spectral energy distribution of infrared-faint radio sources

    NASA Astrophysics Data System (ADS)

    Herzog, A.; Norris, R. P.; Middelberg, E.; Seymour, N.; Spitler, L. R.; Emonts, B. H. C.; Franzen, T. M. O.; Hunstead, R.; Intema, H. T.; Marvil, J.; Parker, Q. A.; Sirothia, S. K.; Hurley-Walker, N.; Bell, M.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Callingham, J. R.; Deshpande, A. A.; Dwarakanath, K. S.; For, B.-Q.; Greenhill, L. J.; Hancock, P.; Hazelton, B. J.; Hindson, L.; Johnston-Hollitt, M.; Kapińska, A. D.; Kaplan, D. L.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Morgan, J.; Oberoi, D.; Offringa, A.; Ord, S. M.; Prabu, T.; Procopio, P.; Udaya Shankar, N.; Srivani, K. S.; Staveley-Smith, L.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.; Wu, C.; Zheng, Q.; Bannister, K. W.; Chippendale, A. P.; Harvey-Smith, L.; Heywood, I.; Indermuehle, B.; Popping, A.; Sault, R. J.; Whiting, M. T.

    2016-10-01

    Context. Infrared-faint radio sources (IFRS) are a class of radio-loud (RL) active galactic nuclei (AGN) at high redshifts (z ≥ 1.7) that are characterised by their relative infrared faintness, resulting in enormous radio-to-infrared flux density ratios of up to several thousand. Aims: Because of their optical and infrared faintness, it is very challenging to study IFRS at these wavelengths. However, IFRS are relatively bright in the radio regime with 1.4 GHz flux densities of a few to a few tens of mJy. Therefore, the radio regime is the most promising wavelength regime in which to constrain their nature. We aim to test the hypothesis that IFRS are young AGN, particularly GHz peaked-spectrum (GPS) and compact steep-spectrum (CSS) sources that have a low frequency turnover. Methods: We use the rich radio data set available for the Australia Telescope Large Area Survey fields, covering the frequency range between 150 MHz and 34 GHz with up to 19 wavebands from different telescopes, and build radio spectral energy distributions (SEDs) for 34 IFRS. We then study the radio properties of this class of object with respect to turnover, spectral index, and behaviour towards higher frequencies. We also present the highest-frequency radio observations of an IFRS, observed with the Plateau de Bure Interferometer at 105 GHz, and model the multi-wavelength and radio-far-infrared SED of this source. Results: We find IFRS usually follow single power laws down to observed frequencies of around 150 MHz. Mostly, the radio SEDs are steep (α < -0.8; %), but we also find ultra-steep SEDs (α < -1.3; %). In particular, IFRS show statistically significantly steeper radio SEDs than the broader RL AGN population. Our analysis reveals that the fractions of GPS and CSS sources in the population of IFRS are consistent with the fractions in the broader RL AGN population. We find that at least % of IFRS contain young AGN, although the fraction might be significantly higher as suggested by
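Building a radio SED and measuring its spectral index, as done for the 34 IFRS above, reduces to fitting S(ν) = S₀ν^α in log-log space. The frequencies and flux densities below are synthetic (generated with α = -0.9), not measurements from the survey fields:

```python
import numpy as np

# Fit a single power law S(nu) = S0 * nu**alpha to multi-band flux densities.
# Synthetic, noise-free data generated with alpha = -0.9 for illustration.
nu = np.array([0.15, 0.84, 1.4, 5.5, 9.0, 34.0])   # observing frequencies [GHz]
alpha_true, S0 = -0.9, 10.0
S = S0 * nu**alpha_true                             # flux densities [mJy]

slope, intercept = np.polyfit(np.log10(nu), np.log10(S), 1)
print(f"spectral index alpha = {slope:.2f}")        # prints -0.90
```

With real photometry one would weight the fit by the per-band flux uncertainties and inspect the residuals for curvature, since a low-frequency turnover (the GPS/CSS signature tested in the paper) shows up as a departure from this single power law.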

  6. The combined effects of a long-term experimental drought and an extreme drought on the use of plant-water sources in a Mediterranean forest.

    PubMed

    Barbeta, Adrià; Mejía-Chang, Monica; Ogaya, Romà; Voltas, Jordi; Dawson, Todd E; Peñuelas, Josep

    2015-03-01

    Vegetation in water-limited ecosystems relies strongly on access to deep water reserves to withstand dry periods. Most of these ecosystems have shallow soils over deep groundwater reserves. Understanding the functioning and functional plasticity of species-specific root systems and the patterns of or differences in the use of water sources under more frequent or intense droughts is therefore necessary to properly predict the responses of seasonally dry ecosystems to future climate. We used stable isotopes to investigate the seasonal patterns of water uptake by a sclerophyll forest on sloped terrain with shallow soils. We assessed the effect of a long-term experimental drought (12 years) and the added impact of an extreme natural drought that produced widespread tree mortality and crown defoliation. The dominant species, Quercus ilex, Arbutus unedo and Phillyrea latifolia, all have dimorphic root systems enabling them to access different water sources in space and time. The plants extracted water mainly from the soil in the cold and wet seasons but increased their use of groundwater during the summer drought. Interestingly, the plants subjected to the long-term experimental drought shifted water uptake toward deeper (10-35 cm) soil layers during the wet season and reduced groundwater uptake in summer, indicating plasticity in the functional distribution of fine roots that dampened the effect of our experimental drought over the long term. An extreme drought in 2011, however, further reduced the contribution of deep soil layers and groundwater to transpiration, which resulted in greater crown defoliation in the drought-affected plants. This study suggests that extreme droughts aggravate moderate but persistent drier conditions (simulated by our manipulation) and may lead to the depletion of water from groundwater reservoirs and weathered bedrock, threatening the preservation of these Mediterranean ecosystems in their current structures and compositions. © 2014

  7. Application of Phasor Measurement Units for Protection of Distribution Networks with High Penetration of Photovoltaic Sources

    NASA Astrophysics Data System (ADS)

    Meskin, Matin

    The rate of integration of distributed generation (DG) units at the distribution level to meet the growth in demand increases as a reasonable alternative to costly network expansion. This integration brings many advantages to consumers and power grids, but also gives rise to new challenges in protection and control. Recent research has brought to light the negative effects of DG units on short circuit currents and overcurrent (OC) protection systems in distribution networks. Change in the direction of fault current flow, increment or decrement of fault current magnitude, blindness of protection, feeder sympathy trips, nuisance trips of interrupting devices, and the disruption of coordination between protective devices are some potential impacts of DG unit integration. Among the various types of DG units, the integration of renewable energy resources into the electric grid has seen vast growth in recent years. In particular, the interconnection of photovoltaic (PV) sources to medium voltage (MV) distribution networks has increased rapidly in the last decade. In this work, the effect of PV sources on conventional OC relays in MV distribution networks is shown. It is indicated that PV output fluctuation, due to changes in solar radiation, causes the magnitude and direction of the current to change haphazardly. These variations may result in the poor operation of OC relays as the main protective devices in MV distribution networks. In other words, due to the bi-directional power flow and the fluctuation of current magnitude occurring in the presence of PV sources, a specific setting of OC relays is difficult to realize; OC relays may therefore trip even under normal conditions. To improve OC relay operation, a voltage-dependent overcurrent protection scheme is proposed. Although this new method prevents the OC relay from maloperation, its ability to detect earth faults and high impedance faults is poor. Thus, a

  8. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source.

    PubMed

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K

    2015-10-15

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Electron Source Brightness and Illumination Semi-Angle Distribution Measurement in a Transmission Electron Microscope.

    PubMed

    Börrnert, Felix; Renner, Julian; Kaiser, Ute

    2018-05-21

    The electron source brightness is an important parameter in an electron microscope. Reliable and easy brightness measurement routes are not easily found. A determination method for the illumination semi-angle distribution in transmission electron microscopy is even less well documented. Herein, we report a simple measurement route for both entities and demonstrate it on a state-of-the-art instrument. The reduced axial brightness of the FEI X-FEG with a monochromator was determined to be larger than 10^8 A/(m^2 sr V).
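The unit quoted above follows from the standard definition of reduced brightness, B_r = I / (A Ω V): current per source area, per solid angle, per acceleration voltage. The numbers below are illustrative placeholders, not the measured values from this paper:

```python
import math

# Reduced axial brightness B_r = I / (area * solid_angle * V).
# All values are illustrative, chosen only to show the unit bookkeeping.
I = 1e-9                          # probe current [A]
d = 1e-9                          # effective source diameter [m]
area = math.pi * (d / 2) ** 2     # emitting area [m^2]
semi_angle = 1e-3                 # illumination semi-angle [rad]
omega = math.pi * semi_angle**2   # small-angle solid angle [sr]
V = 80e3                          # acceleration voltage [V]

B_r = I / (area * omega * V)
print(f"B_r = {B_r:.2e} A/(m^2 sr V)")   # order 10^9 for these inputs
```

Dividing by V makes the figure of merit invariant under acceleration, which is why reduced (rather than plain) brightness is compared across instruments.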

  10. Computational studies for a multiple-frequency electron cyclotron resonance ion source (abstract)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alton, G.D.

    1996-03-01

    The number density of electrons, the energy (electron temperature), and the energy distribution are three of the fundamental properties which govern the performance of electron cyclotron resonance (ECR) ion sources in terms of their capability to produce high-charge-state ions. The maximum electron energy is affected by several processes, including the ability of the plasma to absorb power. In principle, the performance of an ECR ion source can be improved by increasing the physical size of the ECR zone in relation to the total plasma volume. The ECR zones can be increased either in the spatial or the frequency domain in any ECR ion source based on B-minimum plasma confinement principles. The former technique requires the design of a carefully tailored magnetic field geometry so that the central region of the plasma volume is a large, uniformly distributed plasma volume which surrounds the axis of symmetry, as previously proposed. Present art forms of the ECR source utilize single-frequency microwave power supplies to maintain the plasma discharge; because the magnetic field distribution continually changes in this source design, the ECR zones are relegated to thin "surfaces" which surround the axis of symmetry. As a consequence of the small ECR zone in relation to the total plasma volume, the probability for stochastic heating of the electrons is quite low, thereby compromising the source performance. This handicap can be overcome by use of broadband, multiple-frequency microwave power, as evidenced by the enhanced performances of the CAPRICE and AECR ion sources when two-frequency microwave power was utilized. We have used particle-in-cell codes to simulate the magnetic field distributions in these sources and to demonstrate the advantages of using multiple discrete frequencies over single frequencies to power conventional ECR ion sources. (Abstract truncated)

  11. Constraining the redshift distribution of ultrahigh-energy-cosmic-ray sources by isotropic gamma-ray background

    NASA Astrophysics Data System (ADS)

    Liu, Ruo-Yu; Taylor, Andrew; Wang, Xiang-Yu; Aharonian, Felix

    2017-01-01

    By interacting with cosmic background photons during their propagation through intergalactic space, ultrahigh-energy cosmic rays (UHECRs) produce energetic electron/positron pairs and photons which initiate electromagnetic cascades, contributing to the isotropic gamma-ray background (IGRB). The generated gamma-ray flux level depends strongly on the redshift evolution of the UHECR sources. Recently, the Fermi-LAT collaboration reported that 86(+16/-14)% of the total extragalactic gamma-ray flux comes from extragalactic point sources, including unresolved ones. This leaves limited room for the diffuse gamma rays generated via UHECR propagation, and subsequently constrains the source distribution in the Universe. Normalizing the total cosmic-ray energy budget with the observed UHECR flux in the energy band of (1-4)×10^18 eV, we calculate the diffuse gamma-ray flux generated through UHECR propagation. We find that, in order not to overshoot the new IGRB limit, these sub-ankle UHECRs should be produced mainly by nearby sources, with a possible non-negligible contribution from our Galaxy. The distance to the majority of UHECR sources can be further constrained if a given fraction of the observed IGRB at 820 GeV originates from UHECRs. We note that our result should be conservative, since there may be various other contributions to the IGRB that are not included here.

  12. A new traffic model with a lane-changing viscosity term

    NASA Astrophysics Data System (ADS)

    Ko, Hung-Tang; Liu, Xiao-He; Guo, Ming-Min; Wu, Zheng

    2015-09-01

    In this paper, a new continuum traffic flow model is proposed, with a lane-changing source term in the continuity equation and a lane-changing viscosity term in the acceleration equation. Based on previous literature, the source term addresses the impact of speed difference and density difference between adjacent lanes, which provides better precision for free lane-changing simulation; the viscosity term turns lane-changing behavior to a “force” that may influence speed distribution. Using a flux-splitting scheme for the model discretization, two cases are investigated numerically. The case under a homogeneous initial condition shows that the numerical results by our model agree well with the analytical ones; the case with a small initial disturbance shows that our model can simulate the evolution of perturbation, including propagation, dissipation, cluster effect and stop-and-go phenomenon. Project supported by the National Natural Science Foundation of China (Grant Nos. 11002035 and 11372147) and Hui-Chun Chin and Tsung-Dao Lee Chinese Undergraduate Research Endowment (Grant No. CURE 14024).
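The structure of such a continuum model, a continuity equation with a lane-changing source term, can be sketched with a first-order upwind discretization. The relaxation form of the source term, s = k(ρ_adj − ρ), and all parameter values below are simplified placeholders for the speed- and density-difference terms of the actual model, and a constant speed stands in for the full acceleration equation:

```python
import numpy as np

# Upwind discretization of d(rho)/dt + d(rho*v)/dx = s, where the source
# s = k*(rho_adj - rho) is a simplified stand-in for the lane-changing term.
nx, dx, dt, k = 100, 10.0, 0.1, 0.01
v = 20.0                                    # constant speed [m/s]; CFL = v*dt/dx = 0.2
rho = np.full(nx, 0.03)                     # density on the studied lane [veh/m]
rho[40:60] = 0.06                           # initial local disturbance
rho_adj = np.full(nx, 0.03)                 # density on the adjacent lane

for _ in range(200):
    flux = rho * v
    rho[1:] -= dt / dx * (flux[1:] - flux[:-1])   # first-order upwind advection
    rho += dt * k * (rho_adj - rho)               # lane-changing source term

print("final max density:", rho.max())
```

The paper's flux-splitting scheme plays the role of the upwind step here; the source term then relaxes the two lanes toward each other, which is the mechanism that lets free lane changing damp density differences.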

  13. moocRP: Enabling Open Learning Analytics with an Open Source Platform for Data Distribution, Analysis, and Visualization

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin

    2016-01-01

    In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…

  14. Incorporation of a spatial source distribution and a spatial sensor sensitivity in a laser ultrasound propagation model using a streamlined Huygens' principle.

    PubMed

    Laloš, Jernej; Babnik, Aleš; Možina, Janez; Požar, Tomaž

    2016-03-01

    The near-field, surface-displacement waveforms in plates are modeled using interwoven concepts of Green's function formalism and streamlined Huygens' principle. Green's functions resemble the building blocks of the sought displacement waveform, superimposed and weighted according to the simplified distribution. The approach incorporates an arbitrary circular spatial source distribution and an arbitrary circular spatial sensitivity in the area probed by the sensor. The displacement histories for uniform, Gaussian and annular normal-force source distributions and the uniform spatial sensor sensitivity are calculated, and the corresponding weight distributions are compared. To demonstrate the applicability of the developed scheme, measurements of laser ultrasound induced solely by the radiation pressure are compared with the calculated waveforms. The ultrasound is induced by laser pulse reflection from the mirror-surface of a glass plate. The measurements show excellent agreement not only with respect to various wave-arrivals but also in the shape of each arrival. Their shape depends on the beam profile of the excitation laser pulse and its corresponding spatial normal-force distribution. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty, including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty, in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial extent of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through the development of alternative groundwater fluxes in multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. With multiple alternative flow models carried forward, the sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations.
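The Monte Carlo step described above propagates parameter uncertainty into a predictive distribution rather than a single deterministic answer. The surrogate below is a toy advection model with hypothetical parameter distributions, standing in for the project's full flow and transport codes:

```python
import numpy as np

# Monte Carlo propagation of parameter uncertainty through a toy surrogate
# transport model: travel time over a fixed distance at the retarded seepage
# velocity. All distributions and constants are illustrative assumptions.
rng = np.random.default_rng(2)
n = 10_000
K = rng.lognormal(mean=-11.0, sigma=1.0, size=n)   # hydraulic conductivity [m/s]
R = rng.uniform(1.0, 5.0, size=n)                  # retardation factor [-]
gradient, porosity, distance = 0.01, 0.3, 1000.0   # fixed illustrative values

velocity = K * gradient / (porosity * R)           # retarded seepage velocity [m/s]
travel_time = distance / velocity / 3.15e7         # arrival time [yr]

# Report the predictive distribution, not a single run.
print(f"median travel time: {np.median(travel_time):.3g} yr")
print(f"5th-95th percentile: {np.percentile(travel_time, [5, 95])}")
```

In the real assessment each sample would drive a full flow-and-transport simulation under one of the alternative conceptual models, and the resulting ensemble of breakthrough curves is what supports the regulatory decision analysis.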

  16. Identifying and characterizing major emission point sources as a basis for geospatial distribution of mercury emissions inventories

    NASA Astrophysics Data System (ADS)

    Steenhuisen, Frits; Wilson, Simon J.

    2015-07-01

    Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emissions areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emission is required. This paper describes the methodology applied, and the results of work that was conducted to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially-distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geo-spatially distribute emissions, such as use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point-sources are discussed. These include improvements to both national

  17. Sources and distribution of sedimentary organic matter along the Andong salt marsh, Hangzhou Bay

    NASA Astrophysics Data System (ADS)

    Yuan, Hong-Wei; Chen, Jian-Fang; Ye, Ying; Lou, Zhang-Hua; Jin, Ai-Min; Chen, Xue-Gang; Jiang, Zong-Pei; Lin, Yu-Shih; Chen, Chen-Tung Arthur; Loh, Pei Sun

    2017-10-01

    Lignin oxidation products, δ13C values, C/N ratios and particle size were used to investigate the sources, distribution and chemical stability of sedimentary organic matter (OM) along the Andong salt marsh located in the southwestern end of Hangzhou Bay, China. Terrestrial OM was highest at the upper marshes and decreased closer to the sea, and the distribution of sedimentary total organic carbon (TOC) was influenced mostly by particle size. Terrestrial OM with a C3 signature was the predominant source of sedimentary OM in the Spartina alterniflora-dominated salt marsh system. This means that aside from contributions from the local marsh plants, the Andong salt marsh received input mostly from the Qiantang River and the Changjiang Estuary. Transect C, which was situated nearer to the Qiantang River mouth, was most likely influenced by input from the Qiantang River. Likewise, a nearby creek could be transporting materials from Hangzhou Bay into Transect A (farther east than Transect C), as Transect A showed a signal resembling that of the Changjiang Estuary. The predominance of terrestrial OM in the Andong salt marsh despite overall reductions in sedimentary and terrestrial OM input from the rivers is most likely due to increased contributions of sedimentary and terrestrial OM from erosion. This study shows that lower salt marsh accretion due to the presence of reservoirs upstream may be counterbalanced by increased erosion from the surrounding coastal areas.

  18. Pu and 137Cs in the Yangtze River estuary sediments: distribution and source identification.

    PubMed

    Liu, Zhiyong; Zheng, Jian; Pan, Shaoming; Dong, Wei; Yamada, Masatoshi; Aono, Tatsuo; Guo, Qiuju

    2011-03-01

    Pu isotopes and (137)Cs were analyzed using sector field ICP-MS and γ spectrometry, respectively, in surface sediment and core sediment samples from the Yangtze River estuary. (239+240)Pu activities and (240)Pu/(239)Pu atom ratios (>0.18) show a generally increasing trend from land to sea and from north to south in the estuary. This spatial distribution pattern indicates that the Pacific Proving Grounds (PPG) source Pu transported by ocean currents was intensively scavenged into the suspended sediment under favorable conditions, and mixed with riverine sediment as the water circulated in the estuary. This process is the main control on the distribution of Pu in the estuary. Moreover, Pu is also an important indicator for monitoring changes of environmental radioactivity in the estuary, as the river basin is currently the site of extensive human activities and the sea level is rising because of global climate change. For core sediment samples, the maximum peak of (239+240)Pu activity was observed at a depth of 172 cm. The sedimentation rate, estimated on the basis of the 1963-1964 Pu maximum deposition peak, was 4.1 cm/a. The contributions of the PPG close-in fallout Pu (44%) and the riverine Pu (45%) in Yangtze River estuary sediments are equally important for the total Pu deposition in the estuary, which challenges the current hypothesis that riverine Pu input was the major source of the Pu budget in this area.
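    The reported sedimentation rate follows from dividing the fallout-peak depth by the time elapsed since the 1963-1964 global-fallout maximum. A minimal sketch of that arithmetic, assuming a core-collection year of about 2005 (the sampling year is not stated in the record):

```python
# Date a sediment core from the 1963-1964 global-fallout Pu maximum.
# peak_depth_cm and the reported 4.1 cm/a come from the abstract;
# the sampling year (~2005) is an assumption made for illustration.
def sedimentation_rate(peak_depth_cm, sample_year, peak_year=1963.5):
    """Mean sedimentation rate (cm/a) from a dated fallout-peak depth."""
    return peak_depth_cm / (sample_year - peak_year)

rate = sedimentation_rate(172.0, 2005.5)
print(round(rate, 1))  # matches the reported 4.1 cm/a
```

Under this assumed sampling year the quoted 4.1 cm/a is recovered exactly; a different collection date would shift the rate slightly.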

  19. Survey of ion plating sources

    NASA Technical Reports Server (NTRS)

    Spalvins, T.

    1979-01-01

    Ion plating is a plasma deposition technique in which ions of the gas and the evaporant have a decisive role in the formation of a coating in terms of adherence, coherence, and morphological growth. The range of materials that can be ion plated is predominantly determined by the selection of the evaporation source. Based on the type of evaporation source, gaseous media, and mode of transport, the following are discussed: resistance, electron beam, sputtering, reactive, and ion beam evaporation. Ionization efficiencies and ion energies in the glow discharge determine the percentage of atoms which are ionized under typical ion plating conditions. The plating flux consists of a small number of energetic ions and a large number of energetic neutrals. The energy distribution ranges from thermal energies up to a maximum energy of the discharge. The various reaction mechanisms which contribute to the exceptionally strong adherence (formation of a graded substrate/coating interface) are not fully understood; however, the controlling factors are evaluated. The influence of process variables on the nucleation and growth characteristics is illustrated in terms of morphological changes which affect the mechanical and tribological properties of the coating.

  20. Composition, spatial distribution and sources of macro-marine litter on the Gulf of Alicante seafloor (Spanish Mediterranean).

    PubMed

    García-Rivera, Santiago; Lizaso, Jose Luis Sánchez; Millán, Jose María Bellido

    2017-08-15

    The composition, spatial distribution and sources of marine litter in the Spanish Southeast Mediterranean were assessed. The data come from a marine litter retention programme implemented by commercial trawlers and were analysed by GIS. By weight, 75.9% was plastic, metal and glass. Glass and plastics were mainly found close to the coast. A high concentration of metal was observed in some isolated zones of both open and coastal waters. Fishing activity was the source of 29.16% of the macro-marine litter, almost 68.1% of the plastics, and 25.1% of the metal. The source of the other 60.84% could not be directly identified, revealing the high degree of uncertainty regarding its specific origin. Indirectly, however, a qualitative analysis of marine traffic shows that the likely sources were merchant ships, mainly in open waters, and recreational and fishing vessels in coastal waters. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Current density distributions and sputter marks in electron cyclotron resonance ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panitzsch, Lauri; Peleikis, Thies; Boettcher, Stephan

    2013-01-15

    Most electron cyclotron resonance ion sources use hexapolar magnetic fields for the radial confinement of the plasma. The geometry of this magnetic structure is then mapped, by charged-particle-induced sputtering and deposition, onto the inner side of the plasma electrode. The resulting structures usually show two different patterns: a sharp triangular one in the central region, which in some cases is even sputtered deep into the material (referred to as the thin groove or sharp structure), and a blurred but still triangular-like one in the surroundings (referred to as the broad halo). The two patterns therefore seem to have different sources. To investigate their origins we replaced the standard plasma electrode with a custom-built plasma electrode acting as a planar, multi-segment current detector. For different biased disc voltages, detector positions, and source biases (referred to the detector) we measured the electrical current density distributions in the plane of the plasma electrode. The results show a strong and sharply confined electron population with triangular shape surrounded by less intense and spatially less confined ions. Observed sputter and deposition marks are related to the analysis of the results. Our measurements suggest that the two different patterns (thin and broad) indeed originate from different particle populations. The thin structures seem to be caused by the hot electron population, while the broad marks seem to stem from the medium to highly charged ions. In this paper we present our measurements together with theoretical considerations and substantiate the conclusions drawn above. The validity of these results is also discussed.

  2. Simulation of a beam rotation system for a spallation source

    NASA Astrophysics Data System (ADS)

    Reiss, Tibor; Reggiani, Davide; Seidel, Mike; Talanov, Vadim; Wohlmuther, Michael

    2015-04-01

    With a nominal beam power of nearly 1 MW on target, the Swiss Spallation Neutron Source (SINQ) ranks among the world's most powerful spallation neutron sources. The proton beam transport to the SINQ target is carried out exclusively by means of linear magnetic elements. In the transport line to SINQ the beam is scattered in two meson production targets; as a consequence, at the SINQ target entrance the beam shape can be described by Gaussian distributions in the transverse x and y directions with tails cut short by collimators. This leads to a highly nonuniform power distribution inside the SINQ target, giving rise to thermal and mechanical stresses. In view of a future proton beam intensity upgrade, the possibility of homogenizing the beam distribution by means of a fast beam rotation system is currently under investigation. Important aspects which need to be studied are the impact of a rotating proton beam on the resulting neutron spectra and spatial flux distributions, and additional, previously absent, proton losses causing unwanted activation of accelerator components. Hence a new source description method was developed for the radiation transport code MCNPX. This new feature makes direct use of the results from the proton beam optics code TURTLE. Its advantage over existing MCNPX source options is that all phase space information and correlations of each primary beam particle computed with TURTLE are preserved and transferred to MCNPX. Simulations of the different beam distributions together with their consequences in terms of neutron production are presented in this publication. Additionally, a detailed description of the coupling method between TURTLE and MCNPX is provided.

  3. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
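    The analysis step of a standard (unmodified) ensemble Kalman filter, the method this record builds on, can be sketched as follows. This is not the paper's modified filter: the state dimension, observation operator, and noise levels below are illustrative only.

```python
# Minimal stochastic EnKF analysis step with perturbed observations:
# each ensemble member is nudged toward the data using covariances
# estimated from the ensemble itself.
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """X: (n_state, n_ens) prior ensemble; y: obs vector; H: obs operator; R: obs cov."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
    S = H @ A                                  # anomalies mapped to obs space
    P_yy = S @ S.T / (n_ens - 1) + R           # innovation covariance
    P_xy = A @ S.T / (n_ens - 1)               # state-obs cross covariance
    K = P_xy @ np.linalg.inv(P_yy)             # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)

# toy twin experiment: estimate a 2-component state from one noisy observation
X = rng.normal(0.0, 1.0, size=(2, 50))         # prior ensemble around 0
H = np.array([[1.0, 0.0]])                     # observe first component only
R = np.array([[0.1]])
Xa = enkf_update(X, np.array([2.0]), H, R)     # data say the state is near 2
```

The analysis mean of the observed component moves from the prior value toward the observation, which is the mechanism the record exploits to correct both the concentration field and the source-term parameters.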

  4. Noise-enhanced CVQKD with untrusted source

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqun; Huang, Chunhui

    2017-06-01

    The performance of one-way and two-way continuous-variable quantum key distribution (CVQKD) protocols can be increased by adding some noise on the reconciliation side. In this paper, we propose to add noise at the reconciliation end to improve the performance of CVQKD with an untrusted source. We derive the key rate for this case and analyze the impact of the additive noise. The simulation results show that the optimal additive noise can improve the performance of the system in terms of maximum transmission distance and tolerable excess noise.

  5. INEEL Subregional Conceptual Model Report Volume 3: Summary of Existing Knowledge of Natural and Anthropogenic Influences on the Release of Contaminants to the Subsurface Environment from Waste Source Terms at the INEEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul L. Wichlacz

    2003-09-01

    This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of source-term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.

  6. Influence of heat and particle fluxes nonlocality on spatial distribution of plasma density in two-chamber inductively coupled plasma sources

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, A. A.; Serditov, K. Yu.

    2012-07-01

    This study presents 2D simulations of a two-chamber inductively coupled plasma source in which power is supplied in the small discharge chamber and spreads to the large diffusion chamber by the electron thermal conductivity mechanism. Depending on pressure, two main scenarios for the plasma density and its spatial distribution were identified. One case is characterized by localization of the plasma in the small driver chamber where the power is deposited. In the other, the diffusion chamber becomes the main source of plasma and contains the maximum of the electron density. The differences in spatial distribution are caused by local or non-local behavior of electron energy transport in the discharge volume, owing to the different characteristic scales of heat transfer by electronic conduction.

  7. Source Distribution Method for Unsteady One-Dimensional Flows With Small Mass, Momentum, and Heat Addition and Small Area Variation

    NASA Technical Reports Server (NTRS)

    Mirels, Harold

    1959-01-01

    A source distribution method is presented for obtaining flow perturbations due to small unsteady area variations, mass, momentum, and heat additions in a basic uniform (or piecewise uniform) one-dimensional flow. First, the perturbations due to an elemental area variation, mass, momentum, and heat addition are found. The general solution is then represented by a spatial and temporal distribution of these elemental (source) solutions. Emphasis is placed on discussing the physical nature of the flow phenomena. The method is illustrated by several examples. These include the determination of perturbations in basic flows consisting of (1) a shock propagating through a nonuniform tube, (2) a constant-velocity piston driving a shock, (3) ideal shock-tube flows, and (4) deflagrations initiated at a closed end. The method is particularly applicable for finding the perturbations due to relatively thin wall boundary layers.

  8. An Exact Form of Lilley's Equation with a Velocity Quadrupole/Temperature Dipole Source Term

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2001-01-01

    There have been several attempts to introduce approximations into the exact form of Lilley's equation in order to express the source term as the sum of a quadrupole whose strength is quadratic in the fluctuating velocities and a dipole whose strength is proportional to the temperature fluctuations. The purpose of this note is to show that it is possible to choose the dependent (i.e., the pressure) variable so that this type of result can be derived directly from the Euler equations without introducing any additional approximations.

  9. THE ENVIRONMENT AND DISTRIBUTION OF EMITTING ELECTRONS AS A FUNCTION OF SOURCE ACTIVITY IN MARKARIAN 421

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo

    2011-05-20

    For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automated χ2-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from optical to very high energy photons) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γbr) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γbr electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (= synchrotron plus Compton) emission.

  10. Risks of nuclear waste disposal in space. III - Long-term orbital evolution of small particle distribution

    NASA Technical Reports Server (NTRS)

    Friedlander, A. L.; Wells, W. C.

    1980-01-01

    A study of long-term risks is presented that treats an additional pathway that could result in Earth reentry, namely, small radioactive particles released in solar orbit due to payload fragmentation by accidental explosion or meteoroid impact. A characterization of such an event and of the initial mass-size distribution of particles is given for two extremes of waste form strength. Attention is given to numerical results showing the mass-time distribution of material and the fraction of initial mass intercepted by Earth. It is concluded that program planners need not be too concerned about the risks of this particular failure mechanism and return pathway.

  11. Long-term information and distributed neural activation are relevant for the "internal features advantage" in face processing: electrophysiological and source reconstruction evidence.

    PubMed

    Olivares, Ela I; Saavedra, Cristina; Trujillo-Barreto, Nelson J; Iglesias, Jaime

    2013-01-01

    In face processing tasks, prior presentation of internal facial features, when compared with external ones, facilitates the recognition of subsequently displayed familiar faces. In a previous ERP study (Olivares & Iglesias, 2010) we found a visibly larger N400-like effect when identity-mismatching familiar faces were preceded by internal features, as compared to prior presentation of external ones. In the present study we contrasted the processing of familiar and unfamiliar faces in the face-feature matching task to assess whether the so-called "internal features advantage" relies mainly on the use of stored face-identity-related information or whether it might operate independently of stimulus familiarity. Our participants (N = 24) achieved better performance with internal features as primes and, significantly, with familiar faces. Importantly, ERPs elicited by identity-mismatching complete faces displayed a negativity around 300-600 msec which was clearly enhanced for familiar faces primed by internal features when compared with the other experimental conditions. Source reconstruction showed increased activity elicited by familiar stimuli in both posterior (ventral occipitotemporal) and more anterior (parahippocampal (ParaHIP) and orbitofrontal) brain regions. The activity elicited by unfamiliar stimuli was, in general, located in more posterior regions. Our findings suggest that the activation of multiple neural codes is required for optimal individuation in face-feature matching and that a cortical network related to long-term information for face-identity processing seems to support the internal features effect. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    NASA Astrophysics Data System (ADS)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; Rau, M. M.; De Vicente, J.; Hartley, W. G.; Gaztanaga, E.; DeRose, J.; Troxel, M. A.; Davis, C.; Alarcon, A.; MacCrann, N.; Prat, J.; Sánchez, C.; Sheldon, E.; Wechsler, R. H.; Asorey, J.; Becker, M. R.; Bonnett, C.; Carnero Rosell, A.; Carollo, D.; Carrasco Kind, M.; Castander, F. J.; Cawthon, R.; Chang, C.; Childress, M.; Davis, T. M.; Drlica-Wagner, A.; Gatti, M.; Glazebrook, K.; Gschwend, J.; Hinton, S. R.; Hoormann, J. K.; Kim, A. G.; King, A.; Kuehn, K.; Lewis, G.; Lidman, C.; Lin, H.; Macaulay, E.; Maia, M. A. G.; Martini, P.; Mudd, D.; Möller, A.; Nichol, R. C.; Ogando, R. L. C.; Rollins, R. P.; Roodman, A.; Ross, A. J.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sevilla-Noarbe, I.; Sharp, R.; Sommer, N. E.; Tucker, B. E.; Uddin, S. A.; Varga, T. N.; Vielzeuf, P.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Busha, M. T.; Capozzi, D.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kirk, D.; Krause, E.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Nord, B.; O'Neill, C. R.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.; Yanny, B.; Zuntz, J.

    2018-07-01

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the populations of galaxies used as weak-lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z ≈ 0.2 and ≈1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z)∝ dn^i/dz for members of bin i. Accurate determination of cosmological parameters depends critically on knowledge of ni, but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z)=n^i_PZ(z-Δ z^i) to correct the mean redshift of ni(z) for biases in n^i_PZ. The Δzi are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the Cosmic Evolution Survey (COSMOS) field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δzi of the three lowest redshift bins are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9. This paper details the BPZ and COSMOS procedures, and demonstrates that the cosmological inference is insensitive to details of the ni(z) beyond the choice of Δzi. The clustering and COSMOS validation methods produce consistent estimates of Δzi in the bins where both can be applied, with combined uncertainties of σ_{Δ z^i}=0.015, 0.013, 0.011, and 0.022 in the four bins. Repeating the photo-z procedure instead using the Directional Neighbourhood Fitting algorithm, or using the ni(z) estimated from the matched sample in COSMOS, yields no discernible difference in cosmological inferences.

  13. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    NASA Astrophysics Data System (ADS)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; Rau, M. M.; De Vicente, J.; Hartley, W. G.; Gaztanaga, E.; DeRose, J.; Troxel, M. A.; Davis, C.; Alarcon, A.; MacCrann, N.; Prat, J.; Sánchez, C.; Sheldon, E.; Wechsler, R. H.; Asorey, J.; Becker, M. R.; Bonnett, C.; Carnero Rosell, A.; Carollo, D.; Carrasco Kind, M.; Castander, F. J.; Cawthon, R.; Chang, C.; Childress, M.; Davis, T. M.; Drlica-Wagner, A.; Gatti, M.; Glazebrook, K.; Gschwend, J.; Hinton, S. R.; Hoormann, J. K.; Kim, A. G.; King, A.; Kuehn, K.; Lewis, G.; Lidman, C.; Lin, H.; Macaulay, E.; Maia, M. A. G.; Martini, P.; Mudd, D.; Möller, A.; Nichol, R. C.; Ogando, R. L. C.; Rollins, R. P.; Roodman, A.; Ross, A. J.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sevilla-Noarbe, I.; Sharp, R.; Sommer, N. E.; Tucker, B. E.; Uddin, S. A.; Varga, T. N.; Vielzeuf, P.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Busha, M. T.; Capozzi, D.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kirk, D.; Krause, E.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Nord, B.; O'Neill, C. R.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.; Yanny, B.; Zuntz, J.; DES Collaboration

    2018-04-01

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the populations of galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z ≈ 0.2 and ≈1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z)∝ dn^i/dz for members of bin i. Accurate determination of cosmological parameters depends critically on knowledge of ni but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z)=n^i_PZ(z-Δ z^i) to correct the mean redshift of ni(z) for biases in n^i_PZ. The Δzi are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δzi of the three lowest redshift bins are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9. This paper details the BPZ and COSMOS procedures, and demonstrates that the cosmological inference is insensitive to details of the ni(z) beyond the choice of Δzi. The clustering and COSMOS validation methods produce consistent estimates of Δzi in the bins where both can be applied, with combined uncertainties of σ_{Δ z^i}=0.015, 0.013, 0.011, and 0.022 in the four bins. Repeating the photo-z procedure instead using the Directional Neighborhood Fitting (DNF) algorithm, or using the ni(z) estimated from the matched sample in COSMOS, yields no discernible difference in cosmological inferences.
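    The shift correction n^i(z) = n^i_PZ(z - Δz^i) described above simply translates the estimated distribution so that its mean redshift moves by Δz^i. A minimal sketch, with an illustrative Gaussian standing in for n^i_PZ (the real DES distributions are not Gaussian):

```python
# Apply a mean-redshift shift Δz to a tabulated redshift distribution,
# as in n^i(z) = n^i_PZ(z - Δz^i). The toy n_pz below is illustrative.
import numpy as np

z = np.linspace(0.0, 2.0, 2001)
n_pz = np.exp(-0.5 * ((z - 0.6) / 0.15) ** 2)   # toy n^i_PZ(z), arbitrary norm
dz_shift = 0.015                                 # a Δz^i of the quoted size

# evaluate n_PZ at z - Δz; zero outside the tabulated range
n_shifted = np.interp(z - dz_shift, z, n_pz, left=0.0, right=0.0)

mean_before = (z * n_pz).sum() / n_pz.sum()
mean_after = (z * n_shifted).sum() / n_shifted.sum()
```

By construction the mean redshift increases by Δz (up to negligible edge effects), which is exactly the degree of freedom the COSMOS and clustering validations constrain.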

  14. Acoustic Source Localization via Time Difference of Arrival Estimation for Distributed Sensor Networks Using Tera-Scale Optical Core Devices

    DOE PAGES

    Imam, Neena; Barhen, Jacob

    2009-01-01

    For real-time acoustic source localization applications, one of the primary challenges is the considerable growth in computational complexity associated with the emergence of ever larger, active or passive, distributed sensor networks. These sensors rely heavily on battery-operated system components to achieve highly functional automation in signal and information processing. In order to keep communication requirements minimal, it is desirable to perform as much processing on the receiver platforms as possible. However, the complexity of the calculations needed to achieve accurate source localization increases dramatically with the size of sensor arrays, resulting in substantial growth of computational requirements that cannot be readily met with standard hardware. One option to meet this challenge builds upon the emergence of digital optical-core devices. The objective of this work was to explore the implementation of key building-block algorithms used in underwater source localization on the optical-core digital processing platform recently introduced by Lenslet Inc. This demonstration of considerably faster signal processing capability should be of substantial significance to the design and innovation of future generations of distributed sensor networks.
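    The core building block named in the title, time difference of arrival (TDOA) estimation, is classically done by locating the peak of the cross-correlation between two sensors' signals. A minimal sketch with a synthetic broadband source (signal, sample rate, and delay are illustrative, not from the paper):

```python
# Estimate the inter-sensor delay of a common acoustic source by
# cross-correlation: the lag of the correlation peak is the TDOA.
import numpy as np

fs = 8000                       # sample rate in Hz (illustrative)
rng = np.random.default_rng(1)
s = rng.normal(size=4000)       # broadband source signal

true_delay = 25                 # samples; sensor 2 hears the source later
x1 = s
x2 = np.concatenate([np.zeros(true_delay), s[:-true_delay]])

# full cross-correlation; convert peak index to a signed lag
xcorr = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(xcorr)) - (len(x1) - 1)
tdoa_seconds = lag / fs
```

A set of pairwise TDOAs then feeds a hyperbolic positioning solve; it is this correlation stage whose cost grows rapidly with array size, motivating the optical-core hardware discussed in the record.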

  15. Distribution, Source, and Ecological Risk Assessment of Polycyclic Aromatic Hydrocarbons in Surface Sediment of Liaodong Bay, Northeast China

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Tao, Ping; Li, Yuxia; Guo, Qi; Zhang, Yan; Wang, Man; Jia, Hongliang; Shao, Mihua

    2018-01-01

    Sixteen polycyclic aromatic hydrocarbons (PAHs) were determined in surface sediments from Liaodong Bay, northeast China. The concentration levels of total PAHs (Σ16PAHs) in sediment were 11.0–249.6 ng·g-1 dry weight (dw), with a mean value of 89.9 ng·g-1 dw. In terms of spatial distribution, high PAH levels were found in the western areas of Liaodong Bay. Sources of PAHs were investigated by diagnostic ratios, which indicated that pyrogenic sources were the main sources of PAHs in the sediment of Liaodong Bay. Selected PAH levels in sediments were also compared with Sediment Quality Guidelines (ERM-ERL indexes) to evaluate probable toxic effects on marine organisms.

  16. Discrete random distribution of source dopants in nanowire tunnel transistors (TFETs)

    NASA Astrophysics Data System (ADS)

    Sylvia, Somaia; Abul Khayer, M.; Alam, Khairul; Park, Hong-Hyun; Klimeck, Gerhard; Lake, Roger

    2013-03-01

    InAs and InSb nanowire (NW) tunnel field effect transistors (TFETs) require highly degenerate source doping to support the high electric fields in the tunnel region. For a target on-current of 1 μA, the doping requirement may be as high as 1.5 × 10^20 cm^-3 in a NW with a diameter as low as 4 nm. The small size of these devices demands that the dopants near the tunneling region be treated discretely. Therefore, the effects resulting from the random distribution of dopant atoms in the source of a TFET are studied for 30 test devices. Compared with the transfer characteristics of the same device simulated with a continuum doping model, our results show (1) a spread of I-V toward the positive gate voltage axis, (2) the same average threshold voltage, (3) an average 62% reduction in the on-current, and (4) a slight degradation of the subthreshold slope. Random fluctuations in both the number and placement of dopants will be discussed. Also, as the channel length is scaled down, direct tunneling through the channel starts limiting the device performance. Therefore, a comparison of materials is also performed, showing their ability to block direct tunneling for sub-10 nm channel FETs and TFETs. This work was supported in part by the Center on Functional Engineered Nano Architectonics and the Materials, Structures and Devices Focus Center, under the Focus Center Research Program, and by the National Science Foundation under Grant OCI-0749140.

  17. Advection-diffusion model for the simulation of air pollution distribution from a point source emission

    NASA Astrophysics Data System (ADS)

    Ulfah, S.; Awalludin, S. A.; Wahidin

    2018-01-01

    The advection-diffusion model is a mathematical model that can be used to understand the distribution of air pollutants in the atmosphere. This study uses a time-dependent 2D advection-diffusion model to simulate the distribution of air pollution, in order to determine whether pollutants are more concentrated at ground level or near the emission source under particular atmospheric conditions (stable, unstable, and neutral). Wind profile, eddy diffusivity, and temperature are considered as model parameters. The model is solved using an explicit finite difference method and visualized by a computer program developed with the Lazarus programming environment. The results show that atmospheric conditions alone do not conclusively determine pollutant concentration levels, because each model parameter exerts its own effect under each atmospheric condition.
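    The general numerical approach, explicit (forward-Euler) finite differences for 2D advection-diffusion with a continuous point source, can be sketched as follows. This is not the authors' program; grid sizes, wind speed, diffusivity, and emission rate are invented, and constant coefficients stand in for the height-dependent wind and diffusivity profiles:

```python
import numpy as np

# 2-D advection-diffusion: upwind advection in x, central diffusion, point source.
nx, nz = 60, 40
dx = dz = 10.0                 # grid spacing (m)
u, K = 2.0, 5.0                # wind speed (m/s), eddy diffusivity (m^2/s)
dt = 0.5                       # time step (s); chosen within explicit stability limits
src_i, src_k, q = 5, 20, 1.0   # source cell indices and emission rate (arbitrary units)

c = np.zeros((nx, nz))
for _ in range(500):
    c[src_i, src_k] += q * dt / (dx * dz)        # inject the point-source term
    adv = -u * (c - np.roll(c, 1, axis=0)) / dx  # upwind difference (wind along +x)
    lap = ((np.roll(c, 1, 0) - 2 * c + np.roll(c, -1, 0)) / dx**2
           + (np.roll(c, 1, 1) - 2 * c + np.roll(c, -1, 1)) / dz**2)
    c = c + dt * (adv + K * lap)
    c[0, :] = c[-1, :] = 0.0                     # crude open boundaries
    c[:, 0] = c[:, -1] = 0.0
```

The explicit scheme is only conditionally stable: here u·dt/dx = 0.1 and K·dt·(1/dx² + 1/dz²) = 0.05, both well inside the usual CFL and diffusion limits.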

  18. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z = 0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z) for bin i. Accurate determination of cosmological parameters depends critically on knowledge of n^i but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z) = n^i_PZ(z - Δz^i) to correct the mean redshift of n^i(z) for biases in n^i_PZ. The Δz^i are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δz^i are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9.

  19. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    DOE PAGES

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; ...

    2018-04-18

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z = 0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z) for bin i. Accurate determination of cosmological parameters depends critically on knowledge of n^i but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z) = n^i_PZ(z - Δz^i) to correct the mean redshift of n^i(z) for biases in n^i_PZ. The Δz^i are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δz^i are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9.

  20. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, B.; et al.

    2017-08-04

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z = 0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z) for bin i. Accurate determination of cosmological parameters depends critically on knowledge of n^i but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z) = n^i_PZ(z - Δz^i) to correct the mean redshift of n^i(z) for biases in n^i_PZ. The Δz^i are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δz^i are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9.

  1. Bisphenol analogues in surface water and sediment from the shallow Chinese freshwater lakes: Occurrence, distribution, source apportionment, and ecological and human health risk.

    PubMed

    Yan, Zhengyu; Liu, Yanhua; Yan, Kun; Wu, Shengmin; Han, Zhihua; Guo, Ruixin; Chen, Meihong; Yang, Qiulian; Zhang, Shenghu; Chen, Jianqiu

    2017-10-01

    Compared to bisphenol A (BPA), current knowledge on the spatial distribution, potential sources and environmental risk of other bisphenol analogues (BPs) remains limited. The occurrence, distribution and sources of seven BPs were investigated in the surface water and sediment of Taihu Lake and Luoma Lake, two shallow Chinese freshwater lakes. Because many industries and residential areas surround Taihu Lake, its total BP concentrations (∑BPs) were much higher than those in Luoma Lake, which lies away from industry-intensive areas. In both lakes, BPA was still the dominant BP in surface water and sediment, followed by BPF and BPS. Spatial distribution and principal component analysis showed that the BPs in Luoma Lake were relatively homogeneous and their potential sources simpler than in Taihu Lake. The spatial distribution of BPs in Taihu Lake sediment indicated that ∑BPs correlated positively with TOC content. For both lakes, the risk assessment at the sampling sites showed no high risk in surface water or sediment (RQt < 1.0 and EEQt < 1.0 ng E2/L). Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Source Term Estimation of Radioxenon Released from the Fukushima Dai-ichi Nuclear Reactors Using Measured Air Concentrations and Atmospheric Transport Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Dai-ichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24 × 10^19 Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.
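    The inversion idea, combining ATM dilution factors with measured concentrations to recover release magnitudes, can be written as a linear system m = D q and solved by least squares. The sketch below is schematic only: the matrix dimensions echo the abstract (57 measurements, a handful of release intervals), but all numbers are invented and the real problem involves noise, detection limits, and positivity constraints:

```python
import numpy as np

# m: measured concentrations (Bq m^-3) at stations/times
# D: ATM dilution factors (s m^-3) linking each release interval to each measurement
# q: release magnitude per interval (Bq), the unknown source term
rng = np.random.default_rng(1)
n_meas, n_intervals = 57, 6
D = rng.uniform(1e-19, 1e-17, size=(n_meas, n_intervals))
q_true = rng.uniform(1e16, 1e18, size=n_intervals)

m = D @ q_true                              # noise-free synthetic measurements
q_est, *_ = np.linalg.lstsq(D, m, rcond=None)

total_release = q_est.sum()                 # summed source term over all intervals
print(np.allclose(q_est, q_true, rtol=1e-4))
```

Comparing solutions built from different subsets of rows (detection sites) gives exactly the kind of sensitivity bounds on the total release that the abstract reports.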

  3. [Case study of red water phenomenon in drinking water distribution systems caused by water source switch].

    PubMed

    Wang, Yang; Zhang, Xiao-jian; Chen, Chao; Pan, An-jun; Xu, Yang; Liao, Ping-an; Zhang, Su-xia; Gu, Jun-nong

    2009-12-01

    Red water phenomena occurred in some communities of a city in China shortly after a water source switch. The origin of this red water problem and the mechanism of iron release were investigated in this study. Water quality of the local and new water sources was tested, and tap water quality in the affected area was monitored for 3 months after red water occurred. Interior corrosion scales on pipes obtained from the affected area were analyzed by XRD, SEM, and EDS. Corrosion rates of cast iron under the two source waters were obtained with an annular reactor. The influence of the different source waters on iron release was studied with a pipe section reactor simulating the distribution system. The results indicated that the large increase in sulfate concentration caused by the water source shift was the cause of the red water problem. The Larson ratio increased from about 0.4 to 1.7-1.9, and the red water problem appeared in the taps of some urban communities just a few days after the new water source was introduced. The mechanism of iron release was that the stable shell of the corrosion scales in the pipes was corrupted by this high-sulfate source water and could not readily recover spontaneously. The effect of sulfate on iron release from the old cast iron was more significant than its effect on enhancing iron corrosion. The rate of iron release increased with increasing Larson ratio, and the correlation between them was nonlinear for the old cast iron. The problem persisted for quite a long time even after the water source was switched back to a blend containing only a small proportion of the new source and the Larson ratio was reduced to about 0.6.
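    The Larson ratio cited above is a standard corrosivity index; in one common form it is LR = ([Cl-] + [SO4^2-]) / [HCO3-] with all concentrations in equivalents (meq/L). The sketch below illustrates the calculation; the mg/L values are invented and chosen only to mimic qualitatively the sulfate jump described in the abstract, not taken from the study:

```python
# Grams per equivalent for the three ions (Cl-: 35.45/1, SO4^2-: 96.06/2, HCO3-: 61.02/1).
EQ_WEIGHT = {"Cl": 35.45, "SO4": 48.03, "HCO3": 61.02}

def larson_ratio(cl_mg_l: float, so4_mg_l: float, hco3_mg_l: float) -> float:
    """Larson ratio from mg/L concentrations, converted to meq/L."""
    cl = cl_mg_l / EQ_WEIGHT["Cl"]
    so4 = so4_mg_l / EQ_WEIGHT["SO4"]
    hco3 = hco3_mg_l / EQ_WEIGHT["HCO3"]
    return (cl + so4) / hco3

# A sulfate jump with unchanged chloride and alkalinity raises the ratio sharply,
# qualitatively like the ~0.4 -> 1.7-1.9 change reported after the source switch.
before = larson_ratio(30.0, 40.0, 200.0)
after = larson_ratio(30.0, 250.0, 200.0)
print(round(before, 2), round(after, 2))
```

The structure of the index makes the mechanism plain: raising sulfate inflates the numerator while alkalinity (the scale-stabilizing term) is unchanged, so the water becomes more aggressive toward existing iron scales.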

  4. Water on Mars: Inventory, distribution, and possible sources of polar ice

    NASA Technical Reports Server (NTRS)

    Clifford, S. M.

    1992-01-01

    Theoretical considerations and various lines of morphologic evidence suggest that, in addition to the normal seasonal and climatic exchange of H2O that occurs between the Martian polar caps, atmosphere, and mid to high latitude regolith, large volumes of water have been introduced into the planet's long term hydrologic cycle by the sublimation of equatorial ground ice, impacts, catastrophic flooding, and volcanism. Under the climatic conditions that are thought to have prevailed on Mars throughout the past 3 to 4 b.y., much of this water is expected to have been cold trapped at the poles. The amount of polar ice contributed by each of the planet's potential crustal sources is discussed and estimated. The final analysis suggests that only 5 to 15 pct. of this potential inventory is now in residence at the poles.

  5. Spatial distributions of Southern Ocean mesozooplankton communities have been resilient to long-term surface warming.

    PubMed

    Tarling, Geraint A; Ward, Peter; Thorpe, Sally E

    2018-01-01

    The biogeographic response of oceanic planktonic communities to climatic change has a large influence on the future stability of marine food webs and the functioning of global biogeochemical cycles. Temperature plays a pivotal role in determining the distribution of these communities and ocean warming has the potential to cause major distributional shifts, particularly in polar regions where the thermal envelope is narrow. We considered the impact of long-term ocean warming on the spatial distribution of Southern Ocean mesozooplankton communities through examining plankton abundance in relation to sea surface temperature between two distinct periods, separated by around 60 years. Analyses considered 16 dominant mesozooplankton taxa (in terms of biomass and abundance) in the southwest Atlantic sector of the Southern Ocean, from net samples and in situ temperature records collected during the Discovery Investigations (1926-1938) and contemporary campaigns (1996-2013). Sea surface temperature was found to have increased significantly by 0.74°C between the two eras. The corresponding sea surface temperature at which community abundance peaked was also significantly higher in contemporary times, by 0.98°C. Spatial projections indicated that the geographical location of community peak abundance had remained the same between the two eras despite the poleward advance of sea surface isotherms. If the community had remained within the same thermal envelope as in the 1920s-1930s, community peak abundance would be 500 km further south in the contemporary era. Studies in the northern hemisphere have found that dominant taxa, such as calanoid copepods, have conserved their thermal niches and tracked surface isotherms polewards. The fact that this has not occurred in the Southern Ocean suggests that other selective pressures, particularly food availability and the properties of underlying water masses, place greater constraints on spatial distributions in this region. It

  6. Effect of long term organic amendments and vegetation of vineyard soils on the microscale distribution and biogeochemistry of copper.

    PubMed

    Navel, Aline; Martins, Jean M F

    2014-01-01

    In this study we evaluated the effect of long-term organic management of a vineyard soil on the biogeochemistry of copper at the micro-aggregate scale. The model vineyard soil (Mâcon, France) was subjected to a long-term field experiment consisting of amendment and vegetation treatments with various materials and plants. We specifically studied the effect of straw (S) and conifer compost (CC) organic amendments and clover (Cl) and fescue (F) vegetation on the fate of copper (fungicide) in the surface layer of this loamy soil, through comparison with the non-amended soil (NA). After collection, the five soils were immediately physically fractionated to obtain five granulometric size fractions. All soils and size fractions were quantitatively characterized in terms of granulometry, chemical content, and copper distribution, speciation and bioavailability to bacteria and plants. The results showed strong increases in soil-constituent aggregation for all treatments (Cl>CC>S>F>NA), in relation to the increased cementation of soil constituents by organic matter (OM). The distribution patterns of all major elements and organic carbon were highly variable within the soil sub-fractions and also between the five treatments. Owing to their specific inorganic and organic composition, soil sub-fractions can thus be considered specific microbial habitats. Added OM accumulated preferentially in the 20-2 μm and >250 μm fractions of the five soils. The distribution patterns of copper, as well as its speciation and bioavailability to bacteria in the soil sub-fractions, were strongly different among the five soils, in relation to OM distribution. Our results also suggest that Cu bioavailability to plants is controlled by soil-rhizosphere structure. Altogether, our results show that long-term organic management of a vineyard soil induced stable modifications of soil physical and chemical properties at both macro and micro scales. These modifications

  7. Long term care financing in four OECD countries: fiscal burden and distributive effects.

    PubMed

    Karlsson, Martin; Mayhew, Les; Rickayzen, Ben

    2007-01-01

    This paper compares long term care (LTC) systems in four OECD countries (UK, Japan, Sweden and Germany). In the UK, provision is means tested, so that out of pocket payments depend on levels of income, savings and assets. In Sweden, where the system is wholly tax-financed, provision is essentially free at the point of use. In Germany and Japan, provision is financed from recently introduced compulsory insurance schemes, although the details of how each scheme operates and the distributive consequences differ somewhat. The paper analyses the effects of importing the other three countries' systems for financing LTC into the UK, focussing on both the distributive consequences and the tax burden. It finds that the German system would not be an improvement on the current UK system, because it uses a regressive method of financing. Therefore, the discussion of possible alternatives to the present UK system could be restricted to a general tax-based system as used in Sweden or the compulsory insurance system as used in Japan. The results suggest that all three systems would imply increased taxes in the UK.

  8. Assessing the origin of bacteria in tap water and distribution system in an unchlorinated drinking water system by SourceTracker using microbial community fingerprints.

    PubMed

    Liu, Gang; Zhang, Ya; van der Mark, Ed; Magic-Knezev, Aleksandra; Pinto, Ameet; van den Bogert, Bartholomeus; Liu, Wentso; van der Meer, Walter; Medema, Gertjan

    2018-07-01

    The general consensus is that the abundance of tap water bacteria is greatly influenced by water purification and distribution. Bacteria released from biofilm in the distribution system are especially considered a major potential risk for drinking water bio-safety. For the first time, this full-scale study has captured and identified the proportional contributions of the source water, treated water, and distribution system in shaping the tap water bacterial community, based on their microbial community fingerprints and the Bayesian "SourceTracker" method. The bacterial community profiles and diversity analyses illustrated that the water purification process shaped the community of planktonic and suspended particle-associated bacteria in treated water. The bacterial communities associated with suspended particles, loose deposits, and biofilm were similar to each other, while the community of tap water planktonic bacteria varied across different locations in the distribution system. The microbial source tracking results showed no detectable contribution of source water to the bacterial community in the tap water and distribution system. The planktonic bacteria in the treated water were the major contributor to planktonic bacteria in the tap water (17.7-54.1%). The particle-associated bacterial community in the treated water seeded the bacterial communities associated with loose deposits (24.9-32.7%) and biofilm (37.8-43.8%) in the distribution system. In turn, the loose deposits and biofilm showed a significant influence on tap water planktonic and particle-associated bacteria, which was location dependent and influenced by hydraulic changes. This was revealed by the increased contribution of loose deposits to tap water planktonic bacteria (from 2.5% to 38.0%) and an increased contribution of biofilm to tap water particle-associated bacteria (from 5.9% to 19.7%) caused by possible hydraulic disturbance from proximal to distal regions

  9. Long-Term Bacterial Dynamics in a Full-Scale Drinking Water Distribution System

    PubMed Central

    Prest, E. I.; Weissbrodt, D. G.; Hammes, F.; van Loosdrecht, M. C. M.; Vrouwenvelder, J. S.

    2016-01-01

    Large seasonal variations in microbial drinking water quality can occur in distribution networks, but are often not taken into account when evaluating results from short-term water sampling campaigns. Temporal dynamics in bacterial community characteristics were investigated during a two-year drinking water monitoring campaign in a full-scale distribution system operating without detectable disinfectant residual. A total of 368 water samples were collected on a biweekly basis at the water treatment plant (WTP) effluent and at one fixed location in the drinking water distribution network (NET). The samples were analysed for heterotrophic plate counts (HPC), Aeromonas plate counts, adenosine-tri-phosphate (ATP) concentrations, and flow cytometric (FCM) total and intact cell counts (TCC, ICC), water temperature, pH, conductivity, total organic carbon (TOC) and assimilable organic carbon (AOC). Multivariate analysis of the large dataset was performed to explore correlative trends between microbial and environmental parameters. The WTP effluent displayed considerable seasonal variations in TCC (from 90 × 10^3 cells mL^-1 in winter time up to 455 × 10^3 cells mL^-1 in summer time) and in bacterial ATP concentrations (<1–3.6 ng L^-1), which were congruent with water temperature variations. These fluctuations were not detected with HPC and Aeromonas counts. The water in the network was predominantly influenced by the characteristics of the WTP effluent. The increase in ICC between the WTP effluent and the network sampling location was small (34 × 10^3 cells mL^-1 on average) compared to seasonal fluctuations in ICC in the WTP effluent. Interestingly, the extent of bacterial growth in the NET was inversely correlated to AOC concentrations in the WTP effluent (Pearson’s correlation factor r = -0.35), and positively correlated with water temperature (r = 0.49). Collecting a large dataset at high frequency over a two year period enabled the characterization of previously

  10. Long-Term Bacterial Dynamics in a Full-Scale Drinking Water Distribution System.

    PubMed

    Prest, E I; Weissbrodt, D G; Hammes, F; van Loosdrecht, M C M; Vrouwenvelder, J S

    2016-01-01

    Large seasonal variations in microbial drinking water quality can occur in distribution networks, but are often not taken into account when evaluating results from short-term water sampling campaigns. Temporal dynamics in bacterial community characteristics were investigated during a two-year drinking water monitoring campaign in a full-scale distribution system operating without detectable disinfectant residual. A total of 368 water samples were collected on a biweekly basis at the water treatment plant (WTP) effluent and at one fixed location in the drinking water distribution network (NET). The samples were analysed for heterotrophic plate counts (HPC), Aeromonas plate counts, adenosine-tri-phosphate (ATP) concentrations, and flow cytometric (FCM) total and intact cell counts (TCC, ICC), water temperature, pH, conductivity, total organic carbon (TOC) and assimilable organic carbon (AOC). Multivariate analysis of the large dataset was performed to explore correlative trends between microbial and environmental parameters. The WTP effluent displayed considerable seasonal variations in TCC (from 90 × 10^3 cells mL^-1 in winter time up to 455 × 10^3 cells mL^-1 in summer time) and in bacterial ATP concentrations (<1-3.6 ng L^-1), which were congruent with water temperature variations. These fluctuations were not detected with HPC and Aeromonas counts. The water in the network was predominantly influenced by the characteristics of the WTP effluent. The increase in ICC between the WTP effluent and the network sampling location was small (34 × 10^3 cells mL^-1 on average) compared to seasonal fluctuations in ICC in the WTP effluent. Interestingly, the extent of bacterial growth in the NET was inversely correlated to AOC concentrations in the WTP effluent (Pearson's correlation factor r = -0.35), and positively correlated with water temperature (r = 0.49). Collecting a large dataset at high frequency over a two year period enabled the characterization of previously

  11. Effect of source location and listener location on ILD cues in a reverberant room

    NASA Astrophysics Data System (ADS)

    Ihlefeld, Antje; Shinn-Cunningham, Barbara G.

    2004-05-01

    Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean of the ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD is not a reliable cue for determining source laterality in a reverberant room. However, by taking into account both the mean and variance, the distribution of high-frequency short-term ILDs provides some spatial information. This analysis suggests that, in order to use ILDs to judge source direction in reverberant space, listeners must accumulate information about how the short-term ILD varies over time. [Work supported by NIDCD and AFOSR.]
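    The core quantity here, a short-term ILD computed over sliding windows, can be sketched as below. This is a toy broadband illustration, not the study's third-octave analysis: the "right-ear" signal is simply an attenuated copy of the left with added noise standing in for reverberant energy, and all parameters are invented:

```python
import numpy as np

# Short-term ILD over 20-ms windows: ILD = 10*log10(E_left / E_right).
fs = 16000
rng = np.random.default_rng(2)
n = fs                          # one second of noise
left = rng.standard_normal(n)
right = 0.5 * left + 0.1 * rng.standard_normal(n)  # attenuated + decorrelated part

win = 320                       # 20 ms at 16 kHz
ilds = []
for start in range(0, n - win, win):
    e_l = np.sum(left[start:start + win] ** 2)
    e_r = np.sum(right[start:start + win] ** 2)
    ilds.append(10 * np.log10(e_l / e_r))
ilds = np.asarray(ilds)

# Mean encodes laterality; variance reflects the "reverberant" decorrelation.
print(round(float(ilds.mean()), 1), round(float(ilds.std()), 2))
```

Increasing the decorrelated component (the 0.1 coefficient) inflates the variance of the window-to-window ILDs, which is the degradation mechanism the abstract describes.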

  12. All fiber-coupled, long-term stable timing distribution for free-electron lasers with few-femtosecond jitter

    PubMed Central

    Şafak, K.; Xin, M.; Callahan, P. T.; Peng, M. Y.; Kärtner, F. X.

    2015-01-01

    We report recent progress made in a complete fiber-optic, high-precision, long-term stable timing distribution system for synchronization of next generation X-ray free-electron lasers. Timing jitter characterization of the master laser shows less than 170-as RMS integrated jitter for frequencies above 10 kHz, limited by the detection noise floor. Timing stabilization of a 3.5-km polarization-maintaining fiber link is successfully achieved with an RMS drift of 3.3 fs over 200 h of operation using all fiber-coupled elements. This all fiber-optic implementation will greatly reduce the complexity of optical alignment in timing distribution systems and improve the overall mechanical and timing stability of the system. PMID:26798814

  13. Size distribution and sources of humic-like substances in particulate matter at an urban site during winter.

    PubMed

    Park, Seungshik; Son, Se-Chang

    2016-01-01

    This study investigates the size distribution and possible sources of humic-like substances (HULIS) in ambient aerosol particles collected at an urban site in Gwangju, Korea during the winter of 2015. A total of 10 sets of size-segregated aerosol samples were collected using a 10-stage Micro-Orifice Uniform Deposit Impactor (MOUDI), and the samples were analyzed to determine the mass as well as the presence of ionic species (Na(+), NH4(+), K(+), Ca(2+), Mg(2+), Cl(-), NO3(-), and SO4(2-)), water-soluble organic carbon (WSOC) and HULIS. The separation and quantification of the size-resolved HULIS components from the MOUDI samples was accomplished using a Hydrophilic-Lipophilic Balanced (HLB) solid phase extraction method and a total organic carbon analyzer, respectively. The entire sampling period was divided into two periods: non-Asian dust (NAD) and Asian dust (AD) periods. The contributions of water-soluble organic mass (WSOM = 1.9 × WSOC) and HULIS (=1.9 × HULIS-C) to fine particles (PM1.8) were approximately two times higher in the NAD samples (23.2 and 8.0%) than in the AD samples (12.8 and 4.2%). However, the HULIS-C/WSOC ratio in PM1.8 showed little difference between the NAD (0.35 ± 0.07) and AD (0.35 ± 0.05) samples. The HULIS exhibited a uni-modal size distribution (mode at 0.55 μm) during NAD and a bimodal distribution (modes at 0.32 and 1.8 μm) during AD, which was quite similar to the mass size distributions of particulate matter, WSOC, NO3(-), SO4(2-), and NH4(+) in both the NAD and AD samples. The size distribution characteristics and the results of the correlation analyses indicate that the sources of HULIS varied according to the particle size. In the fine mode (≤1.8 μm), the HULIS composition during the NAD period was strongly associated with secondary organic aerosol (SOA) formation processes similar to those of secondary ionic species (cloud processing and/or heterogeneous reactions) and primary emissions during the biomass burning period, and during

  14. Winter-time size distribution and source apportionment of total suspended particulate matter and associated metals in Delhi

    NASA Astrophysics Data System (ADS)

    Srivastava, Arun; Gupta, Sandeep; Jain, V. K.

    2009-03-01

    A study of the winter-time size distribution and source apportionment of total suspended particulate matter (TSPM) and associated heavy metal concentrations has been carried out for the city of Delhi. This study is important in view of the introduction of compressed natural gas (CNG) as an alternative to diesel fuel in the public transport system in 2001 to reduce pollution levels. TSPM was collected using a five-stage cascade impactor at six sites in the winters of 2005-06. The size distribution results indicate that a major portion (~40%) of the TSPM concentration is in the form of PM0.7 (< 0.7 μm). Similar trends were observed for most of the heavy metals associated with the various size fractions of TSPM. A very good correlation between the coarse and fine size fractions of TSPM was observed. Metals associated with coarse particles were also more likely to correlate with other metals than those associated with fine particles. Source apportionment was carried out separately for the coarse and fine size modes of TSPM with the Chemical Mass Balance receptor model (CMB8) as well as by Principal Component Analysis (PCA) in SPSS. Source apportionment by PCA reveals two major sources (possibly vehicular and crustal re-suspension) in both the coarse and fine size fractions. Results obtained by CMB8 show the dominance of vehicular pollutants and crustal dust in the fine and coarse size modes, respectively. Notably, the dominance of vehicular pollutants is now confined to the fine size mode only, whereas during the pre-CNG era it dominated both the coarse and fine size modes. Increases of 42.5, 44.4, 48.2, 38.6 and 38.9% in the concentrations of TSPM, PM10.9, coarse particles, fine particles and lead, respectively, were observed from the pre-CNG (2001) to the post-CNG (2005-06) period.

  15. Root distribution of Nitraria sibirica with seasonally varying water sources in a desert habitat.

    PubMed

    Zhou, Hai; Zhao, Wenzhi; Zheng, Xinjun; Li, Shoujuan

    2015-07-01

    In water-limited environments, the water sources used by desert shrubs are critical to understanding hydrological processes. Here we studied the oxygen stable isotope ratios (δ18O) of stem water of Nitraria sibirica as well as those of precipitation, groundwater and soil water from different layers to identify the possible water sources for the shrub. The results showed that the shrub, which has shallow lateral roots and deeply penetrating tap (sinker) roots, used a mixture of soil water, recent precipitation and groundwater in different seasons. During the wet period (in spring), a large proportion of stem water in N. sibirica was from snow melt and recent precipitation, but use of these sources declined sharply with the decreasing summer rain at the site. At the height of summer, N. sibirica mainly utilized deep soil water taken up by its tap roots, not only supporting the growth of shoots but also keeping the shallow lateral roots well-hydrated. This flexibility allowed the plants to maintain normal metabolic processes during prolonged periods when little precipitation occurs and the upper soil layers become extremely dry. With the increase in precipitation that occurs as winter approaches, the percentage of water in the stem base derived from the tap roots (deep soil water or groundwater) decreased again. These results suggested that the shrub's root distribution and morphology were the most important determinants of its ability to utilize different water sources, and that its adjustment to water availability was significant for acclimation to the desert habitat.
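    The two-end-member version of the isotope-based source partitioning described in this record reduces to a one-line mass balance. A sketch with hypothetical δ18O values (the actual study mixed more than two sources):

```python
def mixing_fraction(d_stem, d_a, d_b):
    """Fraction of stem water drawn from source A in a two-source
    linear mixing model: d_stem = f*d_a + (1 - f)*d_b."""
    if d_a == d_b:
        raise ValueError("end-members must be isotopically distinct")
    return (d_stem - d_b) / (d_a - d_b)

# Hypothetical delta-18O values (permil): recent rain vs. groundwater.
rain, ground = -6.0, -12.0
stem = -10.0
f_rain = mixing_fraction(stem, rain, ground)
print(f"fraction from recent rain: {f_rain:.2f}")  # -> 0.33
```

    Multi-source cases are usually handled with Bayesian mixing models rather than this closed form.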

  16. Gravitational lens optical scalars in terms of energy-momentum distributions in the cosmological framework

    NASA Astrophysics Data System (ADS)

    Boero, Ezequiel F.; Moreschi, Osvaldo M.

    2018-04-01

    We present new results on gravitational lensing over cosmological Robertson-Walker backgrounds which extend and generalize previous works. Our expressions show the presence of new terms and factors which have been neglected in the literature on the subject. The new equations derived here for the optical scalars allow one to deal with more general matter content, including sources with non-Newtonian components of the energy-momentum tensor and arbitrary motion. Our treatment is within the framework of weak gravitational lenses in which first-order effects of the curvature are considered. We have been able to make all calculations without referring to the concept of deviation angle. This, in turn, makes the presentation shorter but also allows for the consideration of global effects on the Robertson-Walker background that have been neglected in the literature. We also discuss two intensity magnifications that we define in this article; one coming from a natural geometrical construction in terms of the affine distance, which we call μ̃, and the other adapted to cosmological discussions in terms of the redshift, which we call μ′. We show that the natural intensity magnification μ̃ coincides with the standard angular magnification (μ).

  17. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.
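    When swapping FFT backends as this record describes, a cheap safeguard is a correctness check against a slow reference DFT. A sketch using NumPy (the actual MKL/OpenCV/GPU backends are not needed to illustrate the idea):

```python
import numpy as np

def naive_dft(x):
    """O(n^2) reference DFT for validating a faster backend."""
    n = len(x)
    k = np.arange(n)
    # DFT matrix: W[j, k] = exp(-2*pi*i*j*k/n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
assert np.allclose(np.fft.fft(x), naive_dft(x), atol=1e-9)
print("backends agree")
```

    The same fixture can then be rerun against each candidate backend before timing it.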

  18. The Funding of Long-Term Care in Canada: What Do We Know, What Should We Know?

    PubMed

    Grignon, Michel; Spencer, Byron G

    2018-06-01

    Long-term care is a growing component of health care spending, but how much is spent and who bears the cost are uncertain; the measures vary depending on the source used. We drew on regularly published series and ad hoc publications to compile preferred estimates of the share of long-term care spending in total health care spending, the private share of long-term care spending, and the share of residential care within long-term care. For each series, we compared estimates obtainable from published sources (CIHI [Canadian Institute for Health Information] and OECD [Organization for Economic Cooperation and Development]) with our preferred estimates. We conclude that using published series without adjustment would lead to spurious conclusions on the level and evolution of spending on long-term care in Canada as well as on the distribution of costs between private and public funders and between residential and home care.

  19. Identifying sources of acidity and spatial distribution of acid sulfate soils in the Anglesea River catchment, southern Australia

    NASA Astrophysics Data System (ADS)

    Wong, Vanessa; Yau, Chin; Kennedy, David

    2015-04-01

    Globally, coastal and estuarine floodplains are frequently underlain by sulfidic sediments. When exposed to oxygen, sulfidic sediments oxidise to form acid sulfate soils, adversely impacting floodplain health and adjacent aquatic ecosystems. In eastern Australia, our understanding of the formation of these coastal and estuarine floodplains, and hence of the spatial distribution of acid sulfate soils, is relatively well established. These soils largely formed as a result of sedimentation of coastal river valleys approximately 6000 years BP, when sea levels were one to two metres higher. However, our understanding of the evolution of estuarine systems and of acid sulfate soil formation, and hence distribution, in southern Australia remains limited. The Anglesea River, in southern Australia, is subject to frequent episodes of poor water quality and low pH, resulting in closure of the river and, in extreme cases, large fish kill events. The region is heavily reliant on tourism and is host to a number of iconic features, including the Great Ocean Road and the Twelve Apostles. Poor water quality has been linked to acid leakage from mining activities and Tertiary-aged coal seams, peat swamps and acid sulfate soils in the region. However, our understanding of the sources of acidity and the distribution of acid sulfate soils in this region remains poor. In this study, four sites on the Anglesea River floodplain, representative of the main vegetation communities, were sampled. Peat swamps and intertidal marshes were both significant sources of acidity on the floodplain in the lower catchment. However, the acid neutralising capacity provided by carbonate sands suggests that there are additional sources of acidity higher in the catchment. This pilot study has highlighted the complexity of the links between the floodplain, upper catchment and waterways, with further research required to understand these links for targeted acid management strategies.

  20. Long-term particulate matter modeling for health effect studies in California - Part 2: Concentrations and sources of ultrafine organic aerosols

    NASA Astrophysics Data System (ADS)

    Hu, Jianlin; Jathar, Shantanu; Zhang, Hongliang; Ying, Qi; Chen, Shu-Hua; Cappa, Christopher D.; Kleeman, Michael J.

    2017-04-01

    Organic aerosol (OA) is a major constituent of ultrafine particulate matter (PM0.1). Recent epidemiological studies have identified associations between PM0.1 OA and premature mortality and low birth weight. In this study, the source-oriented UCD/CIT model was used to simulate the concentrations and sources of primary organic aerosols (POA) and secondary organic aerosols (SOA) in PM0.1 in California for a 9-year (2000-2008) modeling period with 4 km horizontal resolution to provide more insight into PM0.1 OA for health effect studies. As a related quality control, predicted monthly average concentrations of fine particulate matter (PM2.5) total organic carbon at six major urban sites had mean fractional biases of -0.31 to 0.19 and mean fractional errors of 0.4 to 0.59. The predicted ratio of PM2.5 SOA / OA was lower than estimates derived from chemical mass balance (CMB) calculations by a factor of 2-3, which suggests the potential effects of processes such as POA volatility, additional SOA formation mechanisms, and missing sources. OA in PM0.1, the focus size fraction of this study, is dominated by POA. Wood smoke is found to be the single biggest source of PM0.1 OA in winter in California, while meat cooking, mobile emissions (gasoline and diesel engines), and other anthropogenic sources (mainly solvent usage and waste disposal) are the most important sources in summer. Biogenic emissions are predicted to be the largest PM0.1 SOA source, followed by mobile sources and other anthropogenic sources, but these rankings are sensitive to the SOA model used in the calculation. Air pollution control programs aiming to reduce PM0.1 OA concentrations should consider controlling solvent usage, waste disposal, and mobile emissions in California, but these findings should be revisited after the latest science is incorporated into the SOA exposure calculations. The spatial distributions of SOA associated with different sources are not sensitive to the choice of
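    The mean fractional bias and mean fractional error quoted in this record are the standard PM model-evaluation metrics; a sketch of how they are computed, with hypothetical monthly-average values:

```python
import numpy as np

def mean_fractional_bias(pred, obs):
    """MFB = mean of 2*(P - O)/(P + O); bounded in [-2, 2]."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.mean(2.0 * (pred - obs) / (pred + obs))

def mean_fractional_error(pred, obs):
    """MFE = mean of 2*|P - O|/(P + O); bounded in [0, 2]."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.mean(2.0 * np.abs(pred - obs) / (pred + obs))

# Hypothetical monthly-average OC concentrations (ug/m^3).
obs  = [3.0, 4.0, 5.0, 2.5]
pred = [2.4, 4.4, 4.0, 2.5]
print(mean_fractional_bias(pred, obs), mean_fractional_error(pred, obs))
```

    The symmetric denominator keeps both metrics bounded even when predictions approach zero, which is why they are preferred over relative error for PM species.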

  1. A review of sources, multimedia distribution and health risks of perfluoroalkyl acids (PFAAs) in China.

    PubMed

    Wang, Tieyu; Wang, Pei; Meng, Jing; Liu, Shijie; Lu, Yonglong; Khim, Jong Seong; Giesy, John P

    2015-06-01

    Perfluoroalkyl acids (PFAAs) have been recognized as emerging pollutants because of their ubiquitous occurrence in the environment, biota and humans. In order to investigate their sources, fate and environmental effects, a great number of surveys have been carried out over the past several years. In the present review, we summarize the status of sources and emissions, concentrations, distribution and risks of PFAAs in China. Concentrations of PFAAs, especially perfluorooctane sulfonic acid (PFOS) and perfluorooctanoic acid (PFOA), in various environmental media including water, sediment, soil, rain, snow and organisms, as well as in human tissues, are summarized based on the available data. Concentrations of PFAAs in aquatic systems are higher in relatively more industrialized and urbanized areas than in less populated and remote regions of China, indicating that their emission and distribution are closely related to regional urbanization and industrialization. PFAAs and related products have been widely used over the past several decades, which has led to the high concentrations detected in environmental matrices, biota and even local residents. Ecological risk assessment of PFAAs is still underdeveloped in China. Most existing studies compared concentrations of PFAAs to guideline values derived for single species to evaluate the risk. In order to reveal the transport, partitioning and degradation of PFAAs in the environment, further studies on their behavior, fate, bioaccumulation and adverse effects at different trophic levels should be conducted. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Acetone in the atmosphere: Distribution, sources, and sinks

    NASA Technical Reports Server (NTRS)

    Singh, H. B.; O'Hara, D.; Herlth, D.; Sachse, W.; Blake, D. R.; Bradshaw, J. D.; Kanakidou, M.; Crutzen, P. J.

    1994-01-01

    Acetone (CH3COCH3) was found to be the dominant nonmethane organic species present in the atmosphere sampled primarily over eastern Canada (0-6 km, 35 deg-65 deg N) during ABLE3B (July to August 1990). A concentration range of 357 to 2310 ppt (= 10(exp -12) v/v) with a mean value of 1140 +/- 413 ppt was measured. Under extremely clean conditions, generally involving Arctic flows, lowest (background) mixing ratios of 550 +/- 100 ppt were present in much of the troposphere studied. Correlations between atmospheric mixing ratios of acetone and select species such as C2H2, CO, C3H8, C2Cl4 and isoprene provided important clues to its possible sources and to the causes of its atmospheric variability. Biomass burning as a source of acetone has been identified for the first time. By using atmospheric data and three-dimensional photochemical models, a global acetone source of 40-60 Tg (= 10(exp 12) g)/yr is estimated to be present. Secondary formation from the atmospheric oxidation of precursor hydrocarbons (principally propane, isobutane, and isobutene) provides the single largest source (51%). The remainder is attributable to biomass burning (26%), direct biogenic emissions (21%), and primary anthropogenic emissions (3%). Atmospheric removal of acetone is estimated to be due to photolysis (64%), reaction with OH radicals (24%), and deposition (12%). Model calculations also suggest that acetone photolysis contributed significantly to PAN formation (100-200 ppt) in the middle and upper troposphere of the sampled region and may be important globally. While the source-sink equation appears to be roughly balanced, much more atmospheric and source data, especially from the southern hemisphere, are needed to reliably quantify the atmospheric budget of acetone.
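    The percentage source split quoted in this record can be turned into absolute fluxes for a given total; a small arithmetic sketch using the mid-range of the 40-60 Tg/yr estimate (note the quoted shares sum to 101% because of rounding in the abstract):

```python
# Source split of the estimated global acetone budget, using the
# abstract's percentages and the mid-range of its 40-60 Tg/yr total.
total_tg = 50.0
sources = {
    "secondary (hydrocarbon oxidation)": 0.51,
    "biomass burning": 0.26,
    "direct biogenic emissions": 0.21,
    "primary anthropogenic emissions": 0.03,
}
for name, share in sources.items():
    print(f"{name}: {share * total_tg:.1f} Tg/yr")
# The quoted shares sum to 1.01; the 1% excess is rounding in the abstract.
print("total of quoted shares:", sum(sources.values()))
```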

  3. Experimental Verification of Application of Looped System and Centralized Voltage Control in a Distribution System with Renewable Energy Sources

    NASA Astrophysics Data System (ADS)

    Hanai, Yuji; Hayashi, Yasuhiro; Matsuki, Junya

    The control of line voltage in a distribution network is one of the most important issues for the penetration of Renewable Energy Sources (RES). A loop distribution network configuration is an effective way to resolve the voltage and distribution-loss issues associated with RES penetration. In this paper, for a loop distribution network, the authors propose a voltage control method based on tap-change control of the LRT and active/reactive power control of RES. Tap-change control of the LRT takes the major role in the proposed voltage control; the active/reactive power control of RES supports it when deviation beyond the upper or lower voltage limit is otherwise unavoidable. The proposed method adopts a SCADA system based on data measured by IT switches, which are sectionalizing switches with sensors installed in the distribution feeder. To check the validity of the proposed voltage control method, experimental simulations using the distribution system analog simulator "ANSWER" are carried out. In the simulations, the voltage maintenance capability under normal and emergency conditions is evaluated.
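    A toy illustration of the tap-change side of such a scheme, under a deliberately simplified model in which one tap ratio uniformly scales all downstream voltages (this is an assumption for illustration, not the paper's controller):

```python
def choose_tap(voltages_pu, taps, v_min=0.95, v_max=1.05):
    """Pick the LRT tap ratio that keeps all measured feeder voltages
    within [v_min, v_max], preferring the one closest to nominal.

    voltages_pu: per-unit voltages measured at a 1.0 tap ratio; a tap
    ratio r is assumed to scale every downstream voltage to v * r.
    """
    best, best_score = None, float("inf")
    for r in taps:
        shifted = [v * r for v in voltages_pu]
        if min(shifted) < v_min or max(shifted) > v_max:
            continue  # this tap violates a limit somewhere on the feeder
        score = max(abs(v - 1.0) for v in shifted)
        if score < best_score:
            best, best_score = r, score
    return best  # None means no tap alone can satisfy the limits

# Example: a feeder sagging under load; hypothetical 1.25% tap steps.
taps = [1.0 + 0.0125 * k for k in range(-4, 5)]
print(choose_tap([0.93, 0.96, 0.99], taps))
```

    The `None` branch corresponds to the situation the paper handles by falling back on active/reactive power control of the RES.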

  4. Asymptotically and exactly energy balanced augmented flux-ADER schemes with application to hyperbolic conservation laws with geometric source terms

    NASA Astrophysics Data System (ADS)

    Navas-Montilla, A.; Murillo, J.

    2016-07-01

    In this work, an arbitrary order HLL-type numerical scheme is constructed using the flux-ADER methodology. The proposed scheme is based on an augmented Derivative Riemann solver that was used for the first time in Navas-Montilla and Murillo (2015) [1]. This solver, hereafter referred to as the Flux-Source (FS) solver, was conceived as a high order extension of the augmented Roe solver and led to the generation of a novel numerical scheme called the AR-ADER scheme. Here, we provide a general definition of the FS solver independently of the Riemann solver used in it. Moreover, a simplified version of the solver, referred to as the Linearized-Flux-Source (LFS) solver, is presented. This novel version of the FS solver allows the solution to be computed without requiring reconstruction of the derivatives of the fluxes, though some drawbacks become evident. In contrast to other previously defined Derivative Riemann solvers, the proposed FS and LFS solvers take into account the presence of the source term in the resolution of the Derivative Riemann Problem (DRP), which is of particular interest when dealing with geometric source terms. When applied to the shallow water equations, the proposed HLLS-ADER and AR-ADER schemes can be constructed to fulfill the exactly well-balanced property, showing that an arbitrary quadrature of the integral of the source inside the cell does not ensure energy balanced solutions. As a result of this work, energy balanced flux-ADER schemes are constructed that provide the exact solution for steady cases and converge to the exact solution with arbitrary order for transient cases.
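    As a concrete instance of the geometric source terms this record refers to, the one-dimensional shallow water equations with bed elevation z(x) can be written (a standard formulation, not reproduced verbatim from the paper):

```latex
\partial_t \begin{pmatrix} h \\ hu \end{pmatrix}
+ \partial_x \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2} g h^2 \end{pmatrix}
= \begin{pmatrix} 0 \\ - g h \, \partial_x z \end{pmatrix}
```

    A scheme is exactly well-balanced if it preserves the lake-at-rest steady state $u = 0$, $h + z = \text{const}$, for which the flux gradient and the source cancel exactly; energy-balanced schemes additionally preserve moving steady states with constant discharge $hu$ and constant energy head $u^2/2 + g(h + z)$.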

  5. Quantifying the Combined Effect of Radiation Therapy and Hyperthermia in Terms of Equivalent Dose Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kok, H. Petra, E-mail: H.P.Kok@amc.uva.nl; Crezee, Johannes; Franken, Nicolaas A.P.

    2014-03-01

    Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy^-1) and β (Gy^-2) of the linear-quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear-quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from
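    The voxel-by-voxel conversion described in this record can be sketched with the linear-quadratic model: equate the LQ effect computed with hyperthermia-modified parameters to the effect at normothermia and solve the resulting quadratic for the equivalent dose. The parameter values below are illustrative, not the paper's literature estimates:

```python
import math

def equivalent_dose(d, alpha_t, beta_t, alpha37, beta37):
    """Dose d (Gy) delivered with hyperthermia-modified LQ parameters
    (alpha_t, beta_t), converted to the dose giving the same LQ effect
    with normothermic parameters (alpha37, beta37)."""
    effect = alpha_t * d + beta_t * d * d
    # Solve beta37*x^2 + alpha37*x - effect = 0 for the positive root.
    return (-alpha37 + math.sqrt(alpha37**2 + 4.0 * beta37 * effect)) \
        / (2.0 * beta37)

# Hypothetical prostate LQ parameters; alpha raised 20% by heating.
a37, b37 = 0.15, 0.05   # Gy^-1, Gy^-2 (illustrative values only)
print(equivalent_dose(2.0, 1.2 * a37, b37, a37, b37))
```

    Applied per voxel over a planned dose and temperature distribution, this yields the kind of equivalent dose map the study reports.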

  6. The Use of Source-Sink and Doublet Distributions Extended to the Solution of Boundary-Value Problems in Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Lomax, Harvard

    1948-01-01

    A direct analogy is established between the use of source-sink and doublet distributions in the solution of specific boundary-value problems in subsonic wing theory and the corresponding problems in supersonic theory. The correct concept of the "finite part" of an integral is introduced and used in the calculation of the improper integrals associated with supersonic doublet distributions. The general equations developed are shown to include several previously published results and particular examples are given for the loading on rolling and pitching triangular wings with supersonic leading edges.

  7. Distribution and sources of polychlorinated biphenyls in Woods Inlet, Lake Worth, Fort Worth, Texas, 2003

    USGS Publications Warehouse

    Besse, Richard E.; Van Metre, Peter C.; Wilson, Jennifer T.

    2005-01-01

    Woods Inlet is a flooded stream channel on the southern shore of Lake Worth along the western boundary of Air Force Plant 4 in Fort Worth, Texas, where elevated polychlorinated biphenyl (PCB) concentrations in sediment were detected in a previous study. In response, the U.S. Geological Survey, in cooperation with the U.S. Air Force, conducted a study in 2003 to map the extent of elevated PCB concentrations in Woods Inlet and to identify possible sources (or more specifically, source areas) of PCBs in the watershed of Woods Inlet. Three gravity cores (penetration to pre-reservoir sediment at three sites) and 17 box cores (surficial bottom sediment samples) were collected in Woods Inlet. Suspended sediment in stormwater runoff and streambed sediment were sampled in tributaries to Woods Inlet following storms. Assemblages of PCB congeners in surficial inlet sediments and suspended and streambed sediments were analyzed to indicate sources of PCBs in the inlet sediments on the basis of chemical signatures of PCBs. Woods Inlet receives runoff primarily from three tributaries: (1) Gruggs Park Creek, (2) the small unnamed creek that drains a Texas National Guard maintenance facility, called TNG Creek for this report, and (3) Meandering Road Creek. Twenty-seven of 209 possible PCB congeners were analyzed. The sum of the congeners was used as a measure of total PCB. The spatial distribution of total PCB concentrations in the inlet indicates that most PCBs are originating in the Meandering Road Creek watershed. Peak total PCB concentrations in the three gravity cores occurred at depths corresponding to sediment deposition dates of about 1960 for two of the cores and about 1980 for the third core. The magnitudes of peak total PCB concentrations in the gravity cores followed a spatial distribution generally similar to that of surficial bottom sediment concentrations. Total PCB concentrations in suspended and streambed sediment varied greatly between sites and indicated a likely

  8. Electromagnetic Modeling of Distributed-Source-Excitation of Coplanar Waveguides: Applications to Traveling-Wave Photomixers

    NASA Technical Reports Server (NTRS)

    Pasqualini, Davide; Neto, Andrea; Wyss, Rolf A.

    2001-01-01

    In this work an electromagnetic model and subsequent design is presented for a traveling-wave, coplanar waveguide (CPW) based source that will operate in the THz frequency regime. The radio frequency (RF) driving current is a result of photoexcitation of a thin GaAs membrane using two frequency-offset lasers. The GaAs film is grown by molecular-beam-epitaxy (MBE) and displays sub-ps carrier lifetimes which enable the material conductivity to be modulated at a very high rate. The RF current flows between electrodes deposited on the GaAs membrane which are biased with a DC voltage source. The electrodes form a CPW and are terminated with a double slot antenna that couples the power to a quasi-optical system. The membrane is suspended above a metallic reflector to launch all radiation in one direction. The theoretical investigation and consequent design is performed in two steps. The first step consists of a direct evaluation of the magnetic current distribution on an infinitely extended coplanar waveguide excited by an impressed electric current distributed over a finite area. The result of the analysis is the difference between the incident angle of the laser beams and the length of the excited area that maximizes the RF power coupled to the CPW. The optimal values for both parameters are found as functions of the CPW and membrane dimensions as well as the dielectric constants of the layers. In the second step, a design is presented of a double slot antenna that matches the CPW characteristic impedance and gives good overall performance. The design is presently being implemented and measurements will soon be available.

  9. Reference-Frame-Independent and Measurement-Device-Independent Quantum Key Distribution Using One Single Source

    NASA Astrophysics Data System (ADS)

    Li, Qian; Zhu, Changhua; Ma, Shuquan; Wei, Kejin; Pei, Changxing

    2018-04-01

    Measurement-device-independent quantum key distribution (MDI-QKD) is immune to all detector side-channel attacks. However, practical implementations of MDI-QKD, which require two-photon interferences from separated independent single-photon sources and a nontrivial reference alignment procedure, are still challenging with current technologies. Here, we propose a scheme that significantly reduces the experimental complexity of two-photon interferences and eliminates reference frame alignment by the combination of plug-and-play and reference frame independent MDI-QKD. Simulation results show that the secure communication distance can be up to 219 km in the finite-data case and the scheme has good potential for practical MDI-QKD systems.

  10. [Distribution and source of particulate organic carbon and particulate nitrogen in the Yangtze River Estuary in summer 2012].

    PubMed

    Xing, Jian-Wei; Xian, Wei-Wei; Sheng, Xiu-Zhen

    2014-07-01

    Based on data from the cruise carried out in August 2012 in the Yangtze River Estuary and its adjacent waters, the spatial distributions of particulate organic carbon (POC) and particulate nitrogen (PN) and their relationships with environmental factors were studied, and the source of POC and the contribution of phytoplankton to POC were analyzed in combination with the n(C)/n(N) ratio and chlorophyll a (Chl a) in the Yangtze River Estuary in summer 2012. The results showed that POC concentrations in the Yangtze River Estuary ranged from 0.68 mg·L^-1 to 34.80 mg·L^-1 in summer with an average of 3.74 mg·L^-1, and PN contents varied between 0.03 mg·L^-1 and 9.13 mg·L^-1 with an average value of 0.57 mg·L^-1. For both, concentrations in the bottom layer were higher than those at the surface. POC and PN, as well as total suspended matter (TSM), showed an extremely similar horizontal distribution: the highest values appeared near the mouth and in the southwest of the survey waters and decreased rapidly toward the open sea, with higher contents in coastal zones and lower contents in the outer sea. There was a fairly good positive linear relationship between POC and PN, which indicated that they had the same source. POC and PN showed significantly positive correlations with TSM and chemical oxygen demand (COD), but relatively weak correlations with salinity and chlorophyll a, which demonstrated that terrestrial inputs had a strong influence on the distribution of POC and PN, and that phytoplankton production was not the major source of organic matter in the Yangtze River Estuary. Both the n(C)/n(N) ratio and POC/Chl a analyses showed that the main source of POC was terrestrial input, and organic debris was the main existing form of POC. Quantitative analysis showed that phytoplankton biomass made an average contribution of only 2.54% to POC in the Yangtze River Estuary in summer and non-living POC

  11. Estimation of the Cesium-137 Source Term from the Fukushima Daiichi Power Plant Using Air Concentration and Deposition Data

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2013-04-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimated released activities, we provided the related uncertainties (12 PBq with a std. of 15-20% for cesium-137 and 190-380 PBq with a std. of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even though the orders of magnitude were consistent, the reconstructed activities significantly depended on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to the use of several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, but also daily fallout data from prefectures and
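    A simplified stand-in for such a source-term inversion, enforcing positivity of the release with bounded least squares on a synthetic source-receptor matrix (all numbers hypothetical; the paper's actual method is a semi-Gaussian maximum-likelihood scheme with estimated prior errors):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)

# Hypothetical source-receptor matrix H: H[i, j] maps the release in
# time window j to the activity concentration at observation i.
n_obs, n_win = 40, 5
H = rng.uniform(0.0, 1.0, size=(n_obs, n_win))
true_release = np.array([0.0, 12.0, 190.0, 30.0, 0.0])  # arbitrary units
y = H @ true_release + rng.normal(0.0, 0.5, size=n_obs)  # noisy obs

# Positivity-constrained least squares: negative release rates are
# unphysical, so the solution is bounded below by zero.
res = lsq_linear(H, y, bounds=(0.0, np.inf))
print(np.round(res.x, 1))
```

    With sparse observability the result becomes strongly sensitive to how the observation and prior error covariances are weighted, which is exactly the difficulty the record discusses.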

  12. Methods for assessing long-term mean pathogen count in drinking water and risk management implications.

    PubMed

    Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y

    2012-06-01

    Recently pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated versus nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work the methods, and data record length, required to assess long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean ±10%, though 50-100 samples produced an estimate within one log (45%) below. A simple correlated first order model was shown to produce count series with 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely-monitored water quality indicators.
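    A sketch of how such skewed count series can be simulated, drawing from the type-I discrete Weibull by inversion of its survival function (the parameters q and beta below are illustrative, not fitted values from the paper):

```python
import math
import random

def sample_dw(q, beta, rng):
    """Inverse-transform draw from the type-I discrete Weibull,
    defined by its survival function P(X >= x) = q**(x**beta)."""
    u = 1.0 - rng.random()  # uniform on (0, 1]
    if u == 1.0:
        return 0
    return math.floor((math.log(u) / math.log(q)) ** (1.0 / beta))

rng = random.Random(42)
# beta < 1 gives the heavy right tail associated with highly treated
# water, under which short records rarely capture rare high counts.
counts = [sample_dw(0.5, 0.7, rng) for _ in range(1000)]
print(sum(counts) / len(counts), max(counts))
```

    Repeating the simulation with different record lengths reproduces the effect the study quantifies: short records tend to miss the rare high-count events and so underestimate the long-term mean.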

  13. Occurrence, Distribution, Sources, and Trends of Elevated Chloride Concentrations in the Mississippi River Valley Alluvial Aquifer in Southeastern Arkansas

    USGS Publications Warehouse

    Kresse, Timothy M.; Clark, Brian R.

    2008-01-01

    Water-quality data from approximately 2,500 sites were used to investigate the distribution of chloride concentrations in the Mississippi River Valley alluvial aquifer in southeastern Arkansas. The large volume and areal distribution of the data used for the investigation proved useful in delineating areas of elevated (greater than 100 milligrams per liter) chloride concentrations, assessing potential sources of saline water, and evaluating trends in chloride distribution and concentration over time. Irrigation water containing elevated chloride concentrations is associated with negative effects on rice and soybeans, two of the major crops in Arkansas, and a groundwater chloride concentration of 100 milligrams per liter is recommended as the upper limit for use on rice. As such, accurately delineating areas with high-salinity groundwater, defining potential sources of chloride, and documenting trends over time are important in assisting the agricultural community in water management. The distribution and range of chloride concentrations in the study area revealed distinct areas of elevated chloride concentrations. Area I includes an elongated, generally northwest-southeast trending band of moderately elevated chloride concentrations in the northern part of the study area. This band of elevated chloride concentrations is approximately 40 miles in length and varies from approximately 2 to 9 miles in width, with a maximum chloride concentration of 360 milligrams per liter. Area II is a narrow, north-south trending band of elevated chloride concentrations in the southern part of the study area, with a maximum chloride concentration of 1,639 milligrams per liter. A zone of chloride concentrations exceeding 200 milligrams per liter is approximately 25 miles in length and 5 to 6 miles in width. In Area I, low chloride concentrations in samples from wells completed in the alluvial aquifer next to the Arkansas River and in samples from the upper Claiborne aquifer, which

  14. Source-term development for a contaminant plume for use by multimedia risk assessment models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  15. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and can therefore improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data, enabling operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and `big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and a demonstration of context assessment of non-traditional data compared to an intelligence, surveillance, and reconnaissance (ISR) fusion product based upon an IED point-of-interest (POI) workflow.

  16. Magnetoencephalographic Mapping of Epileptic Spike Population Using Distributed Source Analysis: Comparison With Intracranial Electroencephalographic Spikes.

    PubMed

    Tanaka, Naoaki; Papadelis, Christos; Tamilia, Eleonora; Madsen, Joseph R; Pearl, Phillip L; Stufflebeam, Steven M

    2018-04-27

    This study evaluates magnetoencephalographic (MEG) spike populations as compared with intracranial electroencephalographic (IEEG) spikes, using a quantitative method based on distributed source analysis. We retrospectively studied eight patients with medically intractable epilepsy who underwent MEG and subsequent IEEG monitoring. Fifty MEG spikes were analyzed in each patient using minimum norm estimates. For individual spikes, each vertex in the source space was considered activated when its source amplitude at the peak latency was higher than a threshold, set at 50% of the maximum amplitude over all vertices. We mapped the total count of activations at each vertex. We also analyzed 50 IEEG spikes in the same manner over the intracranial electrodes and created the corresponding activation count map. The locations of the electrodes were obtained in the MEG source space by coregistering postimplantation computed tomography to MRI. We estimated the MEG- and IEEG-active regions associated with the spike populations using the vertices/electrodes with a count over 25. The activation count maps of MEG spikes demonstrated the localization associated with the spike population through variable count values at each vertex. The MEG-active region overlapped with 65 to 85% of the IEEG-active region in our patient group. Mapping the MEG spike population is valid for demonstrating the trend of spike clustering in patients with epilepsy. In addition, comparing MEG and IEEG spikes quantitatively may be informative for understanding their relationship.
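
    The vertex-counting procedure described above (threshold each spike at 50% of its peak source amplitude, accumulate per-vertex activation counts, keep vertices with counts over 25, then compare regions) can be sketched as follows. The array shapes, the synthetic data, and the exact overlap metric (fraction of the IEEG-active region covered) are illustrative assumptions, not the authors' code:

```python
import numpy as np

def activation_count_map(spike_amplitudes, threshold_frac=0.5):
    """spike_amplitudes: (n_spikes, n_vertices) source amplitudes at each
    spike's peak latency. A vertex is active for a spike when its amplitude
    exceeds threshold_frac times that spike's maximum amplitude."""
    peaks = spike_amplitudes.max(axis=1, keepdims=True)
    active = spike_amplitudes > threshold_frac * peaks
    return active.sum(axis=0)  # total activation count per vertex

def active_region(counts, min_count=25):
    # Vertices/electrodes activated by more than min_count of the spikes.
    return counts > min_count

def overlap_percent(meg_active, ieeg_active):
    """Percentage of the IEEG-active region covered by the MEG-active region."""
    denom = ieeg_active.sum()
    return 100.0 * (meg_active & ieeg_active).sum() / denom if denom else 0.0
```

Applied to 50 simulated spikes over a handful of vertices, the count map directly reproduces the "count over 25" region definition and the percentage-overlap comparison reported in the abstract.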

  17. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
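
    The two-step idea (a coarse grid traverse to narrow the global search box, then particle swarm optimization inside the narrowed box) can be sketched generically. The loss function below is a synthetic stand-in for SVR cross-validation error over (C, gamma), and the grid resolution and PSO settings are illustrative assumptions, not the paper's calibrated GTA/PSO:

```python
import numpy as np

rng = np.random.default_rng(0)

def val_loss(c, g):
    # Stand-in for SVR cross-validation error; minimum at C=10, gamma=0.1.
    return (np.log10(c) - 1.0) ** 2 + (np.log10(g) + 1.0) ** 2

# Step 1: grid traverse narrows the global box to the best cell's neighborhood.
cs = np.logspace(-2, 4, 7)   # candidate C values
gs = np.logspace(-4, 2, 7)   # candidate gamma values
losses = np.array([[val_loss(c, g) for g in gs] for c in cs])
i, j = np.unravel_index(losses.argmin(), losses.shape)
lo = np.array([np.log10(cs[max(i - 1, 0)]), np.log10(gs[max(j - 1, 0)])])
hi = np.array([np.log10(cs[min(i + 1, 6)]), np.log10(gs[min(j + 1, 6)])])

# Step 2: basic PSO inside the narrowed (log-space) box.
n, dim = 20, 2
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pf = np.array([val_loss(10**a, 10**b) for a, b in x])
gbest = pbest[pf.argmin()].copy()
for _ in range(60):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)           # keep particles in the local box
    f = np.array([val_loss(10**a, 10**b) for a, b in x])
    improved = f < pf
    pbest[improved], pf[improved] = x[improved], f[improved]
    gbest = pbest[pf.argmin()].copy()

best_C, best_gamma = 10**gbest[0], 10**gbest[1]
```

The grid step is cheap and global, so the expensive swarm only has to search a small cell, which is the efficiency argument made for the hybrid method.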

  18. Activities and sources of income after a period of long-term sick leave--a population-based prospective cohort study.

    PubMed

    Wikman, Anders; Wiberg, Michael; Marklund, Staffan; Alexanderson, Kristina

    2012-09-06

    There is limited knowledge about what happens to people after long-term sick leave. The aim of this report was to conduct a prospective study of individuals who were on prolonged sick leave during a particular year, considering their activities and sources of income during subsequent years. To enable comparison of different time periods, we used three cohorts of individuals with different starting years. Using data from national registers, three separate cohorts were constructed that included all people living in Sweden who were 20-64 years of age (>5 million) in the years 1995, 2000 and 2005, respectively. The individual members of the cohorts were classified into the following groups based on their main source of income and activity in 1995-2008: on long-term sick leave, employed, old-age pensioner, long-term unemployed, disability pensioner, on parental leave, social assistance recipient, student allowance recipient, deceased, or emigrated. Most individuals on long-term (>6 months) sick leave in 1995 were not employed 13 years later. Only 11% of the women and 13% of the men were primarily in employment after 13 years. Instead, a wide range of alternatives existed; for example, many had been granted disability pensions, and about 10% of the women and 17% of the men had died during the follow-up period. A larger proportion of those with long-term sick leave were back in employment when 2005 was the starting year for the follow-up. The low future employment rates for people on long-term sick leave may seem surprising. There are several possible explanations for this finding: the disorders these people had may have entailed longstanding difficulties in the labor market, and long-term absence from work, whatever its causes, might have worsened the chances of further employment. Economic cycles may also have been important.
The improving labor market during later years seems to have improved the chances for employment among those earlier

  19. A modification of the Regional Nutrient Management model (ReNuMa) to identify long-term changes in riverine nitrogen sources

    NASA Astrophysics Data System (ADS)

    Hu, Minpeng; Liu, Yanmei; Wang, Jiahui; Dahlgren, Randy A.; Chen, Dingjiang

    2018-06-01

    Source apportionment is critical for guiding development of efficient watershed nitrogen (N) pollution control measures. The ReNuMa (Regional Nutrient Management) model, a semi-empirical, semi-process-oriented model with modest data requirements, has been widely used for riverine N source apportionment. However, the ReNuMa model contains limitations for addressing long-term N dynamics by ignoring temporal changes in atmospheric N deposition rates and N-leaching lag effects. This work modified the ReNuMa model by revising the source code to allow yearly changes in atmospheric N deposition and incorporation of N-leaching lag effects into N transport processes. The appropriate N-leaching lag time was determined from cross-correlation analysis between annual watershed individual N source inputs and riverine N export. Accuracy of the modified ReNuMa model was demonstrated through analysis of a 31-year water quality record (1980-2010) from the Yongan watershed in eastern China. The revisions considerably improved the accuracy (Nash-Sutcliffe coefficient increased by ∼0.2) of the modified ReNuMa model for predicting riverine N loads. The modified model explicitly identified annual and seasonal changes in contributions of various N sources (i.e., point vs. nonpoint source, surface runoff vs. groundwater) to riverine N loads as well as the fate of watershed anthropogenic N inputs. Model results were consistent with previously modeled or observed lag time length as well as changes in riverine chloride and nitrate concentrations during the low-flow regime and available N levels in agricultural soils of this watershed. The modified ReNuMa model is applicable for addressing long-term changes in riverine N sources, providing decision-makers with critical information for guiding watershed N pollution control strategies.

  20. Impact of fugitive sources and meteorological parameters on vertical distribution of particulate matter over the industrial agglomeration.

    PubMed

    Štrbová, Kristína; Raclavská, Helena; Bílek, Jiří

    2017-12-01

    The aim of the study was to characterize the vertical distribution of particulate matter in an area known for some of the highest air pollution levels in Europe. A helium-filled balloon carrying measuring instrumentation was used for vertical observation of air pollution over the fugitive sources in the Moravian-Silesian metropolitan area during spring and summer. Selected meteorological parameters were recorded synchronously with particulate matter to explore their relationship. Concentrations of particulate matter in the vertical profile were significantly higher in spring than in summer. A significant effect of fugitive sources was observed up to an altitude of ∼255 m (∼45 m above ground) in both seasons. The presence of an inversion layer was observed at an altitude of ∼350 m (120-135 m above ground) at locations with a major traffic load. Both particulate matter concentrations and the number of particles for the selected particle sizes decreased with increasing height. No strong correlation of particulate matter with meteorological parameters was observed. The study represents the first attempt to assess the vertical profile over fugitive emission sources (old environmental burdens) in an industrial region. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Practical passive decoy state measurement-device-independent quantum key distribution with unstable sources.

    PubMed

    Liu, Li; Guo, Fen-Zhuo; Wen, Qiao-Yan

    2017-09-12

    Measurement-device-independent quantum key distribution (MDI-QKD) with the active decoy state method can remove all detector loopholes and resist source imperfections, but the active modulation may open side channel attacks that break the security of the QKD system. In this paper, we apply the passive decoy state method to MDI-QKD based on a polarization encoding mode. Not only can all attacks on detectors be removed, but side channel attacks on sources can also be overcome. We show that MDI-QKD with our passive decoy state method can achieve a performance comparable to the protocol with the active decoy state method. To fit the demands of practical application, we include intensity fluctuation in the security analysis of the MDI-QKD protocol using the passive decoy state method, and derive the key generation rate for our protocol under intensity fluctuation. It shows that intensity fluctuation has a non-negligible adverse effect on the key generation rate, especially in the case of a small data size of total transmitted signals and long-distance transmission. We give specific simulations of the relationship between intensity fluctuation and the key generation rate. Furthermore, the statistical fluctuation due to the finite length of data is also taken into account.

  2. Bacterial composition in a metropolitan drinking water distribution system utilizing different source waters.

    PubMed

    Gomez-Alvarez, Vicente; Humrighouse, Ben W; Revetta, Randy P; Santo Domingo, Jorge W

    2015-03-01

    We investigated the bacterial composition of water samples from two service areas within a drinking water distribution system (DWDS), each associated with a different primary source of water (groundwater, GW; surface water, SW) and a different treatment process. Community analysis based on 16S rRNA gene clone libraries indicated that Actinobacteria (Mycobacterium spp.) and α-Proteobacteria represented nearly 43 and 38% of the total sequences, respectively. Sequences closely related to Legionella, Pseudomonas, and Vibrio spp. were also identified. In spite of the high number of sequences (71%) shared between both areas, multivariable analysis revealed significant differences between the GW and SW areas. While the dominant phylotypes did not contribute significantly to the ordination of samples, the populations associated with the core of phylotypes (1-10% in each sample) contributed significantly to the differences between the two service areas. Diversity indices indicate that the microbial community inhabiting the SW area is more diverse and contains more distantly related species coexisting with local assemblages compared with the GW area. The bacterial community structures of the SW and GW service areas were dissimilar, suggesting that their respective source waters and/or water quality parameters shaped by the treatment processes may contribute to the observed differences in community structure.

  3. Voltage management of distribution networks with high penetration of distributed photovoltaic generation sources

    NASA Astrophysics Data System (ADS)

    Alyami, Saeed

    Installation of photovoltaic (PV) units can pose great challenges to existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, and increased or changed short-circuit levels need to be carefully addressed before we can see wide adoption of this environmentally friendly technology. Voltage rise, or overvoltage, issues are of particular importance to address when deploying more PV systems in distribution networks. This dissertation proposes a comprehensive solution to deal with voltage violations in distribution networks, from controlling PV power outputs and the electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done, and the future research tasks. An overview of renewable energy generation and its challenges is given in Chapter 1. The overall literature survey, motivation, and scope of the study are also outlined in that chapter; detailed literature reviews are given in the remaining chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integrated PVs are further explained in Chapter 2. Possible approaches to voltage quality control are also discussed in this chapter, followed by a discussion of the importance of load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing PV generation and fairly distributing the real power curtailments among all the PV systems in the network. As a result, each of the PV systems in the network has an equal opportunity to generate electricity and shares the responsibility of voltage

  4. Detecting fission from special nuclear material sources

    DOEpatents

    Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA

    2012-06-05

    A neutron detector system for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The system counts neutrons from the unknown source and detects excess grouped neutrons to identify fission in the unknown source. The system includes a graphing component that displays a plot of the neutron distribution from the unknown source over a Poisson distribution, together with a plot of neutrons due to background or environmental sources. The system further includes a known neutron source placed in proximity to the unknown source to actively interrogate it, in order to accentuate differences between the unknown source's neutron emission and Poisson distributions and/or environmental sources.
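
    The underlying discrimination principle (fission emits neutrons in correlated groups, so counts per time gate are over-dispersed relative to the Poisson statistics of a purely random background) can be illustrated with a variance-to-mean style check. The gate rates, burst multiplicities, and decision threshold below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(42)

def excess_over_poisson(counts):
    """Feynman-Y style excess: variance/mean - 1.
    ~0 for a Poisson (random) source, clearly > 0 for correlated bursts."""
    c = np.asarray(counts, dtype=float)
    return c.var() / c.mean() - 1.0

gates = 20000
background = rng.poisson(4.0, gates)   # uncorrelated background counts/gate

# Fission-like source: bursts per gate are Poisson-distributed, but each
# burst contributes several neutrons at once (illustrative multiplicity 2-4).
bursts = rng.poisson(1.5, gates)
fission = np.array([rng.integers(2, 5, b).sum() for b in bursts])

y_bg = excess_over_poisson(background)   # near zero
y_fis = excess_over_poisson(fission)     # well above zero
is_fissile = y_fis > 0.5                 # illustrative decision threshold
```

Plotting the two count histograms against a Poisson curve of the same mean reproduces, in miniature, the graphing component described in the claims: the fission-like distribution is visibly wider than Poisson.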

  5. Evaluation of long-term community recovery from Hurricane Andrew: sources of assistance received by population sub-groups.

    PubMed

    McDonnell, S; Troiano, R P; Barker, N; Noji, E; Hlady, W G; Hopkins, R

    1995-12-01

    Two three-stage cluster surveys were conducted in South Dade County, Florida, 14 months apart, to assess recovery following Hurricane Andrew. Response rates were 75 percent and 84 percent. Sources of assistance used in recovery from Hurricane Andrew differed according to race, per capita income, ethnicity, and education. Reports of an improved living situation post-hurricane were not associated with receiving relief assistance, but reports of a worse situation were associated with loss of income, being exploited, or job loss. The number of households reporting problems with crime and community violence doubled between the two surveys. Disaster relief efforts had less impact on subjective long-term recovery than did job or income loss or housing repair difficulties. Existing sources of assistance were used more often than specific post-hurricane relief resources. The demographic make-up of a community may determine the most effective means of informing residents after a disaster and which sources of assistance may be useful.

  6. Predicting vertically-nonsequential wetting patterns with a source-responsive model

    USGS Publications Warehouse

    Nimmo, John R.; Mitchell, Lara

    2013-01-01

    Water infiltrating into soil of natural structure often causes wetting patterns that do not develop in an orderly sequence. Because traditional unsaturated flow models represent a water advance that proceeds sequentially, they fail to predict irregular development of water distribution. In the source-responsive model, a diffuse domain (D) represents flow within soil matrix material following traditional formulations, and a source-responsive domain (S), characterized in terms of the capacity for preferential flow and its degree of activation, represents preferential flow as it responds to changing water-source conditions. In this paper we assume water undergoing rapid source-responsive transport at any particular time is of negligibly small volume; it becomes sensible at the time and depth where domain transfer occurs. A first-order transfer term represents abstraction from the S to the D domain which renders the water sensible. In tests with lab and field data, for some cases the model shows good quantitative agreement, and in all cases it captures the characteristic patterns of wetting that proceed nonsequentially in the vertical direction. In these tests we determined the values of the essential characterizing functions by inverse modeling. These functions relate directly to observable soil characteristics, rendering them amenable to evaluation and improvement through hydropedologic development.
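
    A minimal sketch of the first-order S-to-D transfer idea, under illustrative assumptions (a steady surface input flux during the event and a constant transfer coefficient, not the authors' inversely-modeled characterizing functions): the source-responsive flux decays with depth as dF/dz = -kF, and the abstracted water k*F*dz becomes sensible in the diffuse domain at that depth.

```python
import numpy as np

k = 0.8            # first-order transfer coefficient [1/m], assumed
dz = 0.05          # depth step [m]
z = np.arange(0.0, 2.0, dz)

flux = np.empty_like(z)      # S-domain downward flux at each depth
deposit = np.empty_like(z)   # water transferred to the D domain per step
f = 1.0                      # normalized surface input flux during the event
for i in range(len(z)):
    flux[i] = f
    dep = k * f * dz         # dF/dz = -k F  ->  abstraction k*F per dz
    deposit[i] = dep         # this water becomes sensible in the D domain
    f -= dep

# Mass balance: water deposited along the profile plus flux leaving the
# bottom must equal the surface input.
total = deposit.sum() + f
```

The resulting deposition profile decays with depth, so wetting appears first where transfer is strongest rather than advancing as a sequential front, which is the nonsequential behavior the model is designed to capture.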

  7. [Distribution Characteristics and Source Analysis of Dustfall Trace Elements During Winter in Beijing].

    PubMed

    Xiong, Qiu-lin; Zhao, Wen-ji; Guo, Xiao-yu; Chen, Fan-tao; Shu, Tong-tong; Zheng, Xiao-xia; Zhao, Wen-hui

    2015-08-01

    The dustfall content is one of the evaluation indexes of atmospheric pollution. Trace elements, especially heavy metals, in dustfall can pose risks to the ecological environment and human health. In order to study the distribution characteristics of trace elements, heavy metal pollution and their sources in winter atmospheric dust, 49 dustfall samples were collected in and near Beijing City from November 2013 to March 2014. The contents (mass percentages) of 40 trace elements were then measured by an Elan DRC II inductively coupled plasma mass spectrometer (ICP-MS). Test results showed that more than half of the trace elements in the dust were below 10 mg x kg(-1); about a quarter were between 10-100 mg x kg(-1); while 7 elements (Pb, Zr, Cr, Cu, Zn, Sr and Ba) exceeded 100 mg x kg(-1). The contents of Pb, Cu, Zn, Bi, Cd and Mo in winter dustfall in Beijing City were, respectively, 4.18, 4.66, 5.35, 6.31, 6.62, and 8.62 times as high as those of the corresponding elements in the surface soil during the same period, exceeding the soil background values by more than 300%. The contribution of human activities to the trace heavy metal content of dustfall in Beijing City was larger than that in the surrounding region. Source analysis of dustfall and its 20 main trace elements (Cd, Mo, Nb, Ga, Co, Y, Nd, Li, La, Ni, Rb, V, Ce, Pb, Zr, Cr, Cu, Zn, Sr, Ba) was then conducted through multi-method analysis, including Pearson correlation analysis, Kendall correlation coefficient analysis and principal component analysis. The results indicated that sources of winter dustfall in Beijing City were mainly composed of crustal sources (including road dust, construction dust and long-range transported dust) and the burning of fossil fuels (vehicle emissions, coal combustion, biomass combustion and industrial processes).

  8. Distribution and importance of microplastics in the marine environment: A review of the sources, fate, effects, and potential solutions.

    PubMed

    Auta, H S; Emenike, C U; Fauziah, S H

    2017-05-01

    The presence of microplastics in the marine environment poses a great threat to the entire ecosystem and has received much attention lately, as it has greatly impacted oceans, lakes, seas, rivers, coastal areas and even the polar regions. Microplastics are found in many commonly utilized products (primary microplastics) or may originate from the fragmentation of larger plastic debris (secondary microplastics). The material enters the marine environment through terrestrial and land-based activities, especially via runoff, and studies have shown that large numbers of marine organisms have been affected by microplastics. Microplastic particles have been found distributed in large numbers in Africa, Asia, Southeast Asia, India, South Africa, North America, and Europe. This review describes the sources and global distribution of microplastics in the environment and their fate and impact on marine biota, especially the food chain. Furthermore, the control measures discussed are those mapped out by both national and international environmental organizations for combating the impact of microplastics. Identifying the main sources of microplastic pollution in the environment and creating awareness through education in the public, private, and government sectors will go a long way toward reducing the entry of microplastics into the environment. Knowing the associated behavioral mechanisms will also enable a better understanding of the impacts on the marine environment. A more promising and environmentally safe approach, however, could be provided by exploiting the potential of microorganisms, especially those of marine origin, that can degrade microplastics. The concentration, distribution, sources and fate of microplastics in the global marine environment are discussed, as is the impact of microplastics on a wide range of marine biota. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Legacy phosphorus in calcareous soils: effects of long-term poultry litter application on phosphorus distribution in Texas Blackland Vertisol

    USDA-ARS?s Scientific Manuscript database

    Sequential fractionation techniques, coupled with phosphatase hydrolysis, have allowed for greater understanding of manure/litter effects on soil phosphorus (P) distribution. We evaluated the effect of long-term (> 10 years) poultry litter (broiler and turkey litter) application at rates of 4.5, 6.7...

  10. The Relation between School Leadership from a Distributed Perspective and Teachers' Organizational Commitment: Examining the Source of the Leadership Function

    ERIC Educational Resources Information Center

    Hulpia, Hester; Devos, Geert; Van Keer, Hilde

    2011-01-01

    Purpose: In this study the relationship between school leadership and teachers' organizational commitment is examined by taking into account a distributed leadership perspective. The relation between teachers' organizational commitment and contextual variables of teachers' perceptions of the quality and the source of the supportive and supervisory…

  11. Study of the thermal distribution in vocal cords irradiated by an optical source for the treatment of voice disabilities

    NASA Astrophysics Data System (ADS)

    Arce-Diego, José L.; Fanjul-Vélez, Félix; Borragán-Torre, Alfonso

    2006-02-01

    Vocal cord disorders constitute an important problem for the people suffering from them. In particular, the reduction of mucosal wave movement is not treated appropriately by conventional therapies such as drug administration or surgery. In this work, an alternative therapy consisting of controlled temperature increases by means of optical sources is proposed. The distribution of heat inside the vocal cords when an optical source illuminates them is studied. The optical and thermal properties of the tissue are discussed as a basis for appropriate knowledge of its behaviour. Propagation of light is modeled using Radiative Transfer Theory (RTT) and a numerical Monte Carlo model. A thermal transfer model, which uses the results of the radiation propagation, determines the distribution of temperature in the tissue. Two widely used lasers are considered: Nd:YAG (1064 nm) and KTP (532 nm). Adequate amounts of radiation, and the resulting temperature rises, must be achieved in order to avoid damage to the vocal cords and thereby assure an improvement in the vocal functions of the patient. The temperature limits should be evaluated with a combined temperature-time and Arrhenius analysis.
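
    The combined temperature-time and Arrhenius analysis mentioned above is conventionally expressed as a damage integral, Omega(t) = integral of A*exp(-Ea/(R*T(t))) dt, with Omega >= 1 taken as irreversible damage. The sketch below uses Henriques-type constants commonly cited for thermal tissue damage and illustrative constant-temperature exposures; these are assumptions, not vocal-cord-specific values from the paper:

```python
import numpy as np

A = 3.1e98       # frequency factor [1/s], assumed literature value
Ea = 6.28e5      # activation energy [J/mol], assumed literature value
R = 8.314        # gas constant [J/(mol K)]

def damage_integral(temps_kelvin, dt):
    # Cumulative Arrhenius damage Omega(t) for a temperature history T(t).
    rates = A * np.exp(-Ea / (R * temps_kelvin))
    return np.cumsum(rates) * dt

dt = 0.01                                  # 10 ms time steps
t = np.arange(0.0, 60.0, dt)               # 60 s exposure
mild = damage_integral(np.full_like(t, 318.15), dt)   # 45 C exposure
hot = damage_integral(np.full_like(t, 343.15), dt)    # 70 C exposure

safe_45C = mild[-1] < 1.0                  # stays below the damage threshold
damage_time_70C = t[np.argmax(hot >= 1.0)] # first time Omega reaches 1
```

The steep temperature dependence of the integrand is exactly why a small, controlled temperature rise can be therapeutic while a modestly higher one damages tissue almost immediately.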

  12. Numerical simulations of the hard X-ray pulse intensity distribution at the Linac Coherent Light Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pardini, Tom; Aquila, Andrew; Boutet, Sebastien

    Numerical simulations of the current and future pulse intensity distributions at selected locations along the Far Experimental Hall, the hard X-ray section of the Linac Coherent Light Source (LCLS), are provided. Estimates are given for the pulse fluence, energy and size in and out of focus, taking into account effects due to the experimentally measured divergence of the X-ray beam, and measured figure errors of all X-ray optics in the beam path. Out-of-focus results are validated by comparison with experimental data. Previous work is expanded on, providing quantitatively correct predictions of the pulse intensity distribution. Numerical estimates in focus are particularly important given that the latter cannot be measured with direct imaging techniques due to detector damage. Finally, novel numerical estimates of improvements to the pulse intensity distribution expected as part of the on-going upgrade of the LCLS X-ray transport system are provided. As a result, we suggest how the new generation of X-ray optics to be installed would outperform the old one, satisfying the tight requirements imposed by X-ray free-electron laser facilities.

  13. Numerical simulations of the hard X-ray pulse intensity distribution at the Linac Coherent Light Source

    DOE PAGES

    Pardini, Tom; Aquila, Andrew; Boutet, Sebastien; ...

    2017-06-15

    Numerical simulations of the current and future pulse intensity distributions at selected locations along the Far Experimental Hall, the hard X-ray section of the Linac Coherent Light Source (LCLS), are provided. Estimates are given for the pulse fluence, energy and size in and out of focus, taking into account effects due to the experimentally measured divergence of the X-ray beam, and measured figure errors of all X-ray optics in the beam path. Out-of-focus results are validated by comparison with experimental data. Previous work is expanded on, providing quantitatively correct predictions of the pulse intensity distribution. Numerical estimates in focus are particularly important given that the latter cannot be measured with direct imaging techniques due to detector damage. Finally, novel numerical estimates of improvements to the pulse intensity distribution expected as part of the on-going upgrade of the LCLS X-ray transport system are provided. As a result, we suggest how the new generation of X-ray optics to be installed would outperform the old one, satisfying the tight requirements imposed by X-ray free-electron laser facilities.

  14. EPA's mobile monitoring of source emissions and near-source impact

    EPA Science Inventory

    Real-time ambient monitoring onboard a moving vehicle is a unique data collection approach applied to characterize large-area sources, such as major roadways, and detect fugitive emissions from distributed sources, such as leaking oil wells. EPA's Office of Research and Developme...

  15. Analysis of the different source terms of natural radionuclides in a river affected by NORM (Naturally Occurring Radioactive Materials) activities.

    PubMed

    Baeza, A; Corbacho, J A; Guillén, J; Salas, A; Mora, J C

    2011-05-01

    The present work studied the radiological impact of a coal-fired power plant (CFPP), a NORM industry, on the water of the Regallo river, which the plant uses for cooling. Downstream, this river passes through an important irrigated farming area, and it is a tributary of the Ebro, one of Spain's largest rivers. Although no alteration of the (210)Po or (232)Th content was detected, the (234,238)U and (226)Ra contents of the water were significantly greater immediately below the CFPP's discharge point. The (226)Ra concentration decreased progressively downstream from the discharge point, but the uranium content increased significantly again at two sampling points 8 km downstream from the CFPP's effluent. This suggested the presence of another, unexpected uranium source term different from the CFPP. The input from this second uranium source term was even greater than that from the CFPP. Different hypotheses were tested (a reservoir used for irrigation, remobilization from sediments, and the effect of fertilizers used in the area), and it was finally demonstrated that the source was the fertilizers used in the adjacent farming areas. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. The Effects of Weather Patterns on the Spatio-Temporal Distribution of SO2 over East Asia as Seen from Satellite Measurements

    NASA Astrophysics Data System (ADS)

    Dunlap, L.; Li, C.; Dickerson, R. R.; Krotkov, N. A.

    2015-12-01

    Weather systems, particularly mid-latitude wave cyclones, have been known to play an important role in the short-term variation of near-surface air pollution. Ground measurements and model simulations have demonstrated that stagnant air and minimal precipitation associated with high pressure systems are conducive to pollutant accumulation. With the passage of a cold front, built-up pollution is transported downwind of the emission sources or washed out by precipitation. This concept is important to note when studying long-term changes in spatio-temporal pollution distribution, but has not been studied in detail from space. In this study, we focus on East Asia (especially industrialized eastern China), where numerous large power plants and other point sources as well as area sources emit large amounts of SO2, an important gaseous pollutant and a precursor of aerosols. Using data from the Aura Ozone Monitoring Instrument (OMI) we show that such weather-driven distribution can indeed be discerned from satellite data by utilizing probability distribution functions (PDFs) of SO2 column content. These PDFs are multimodal and give insight into the background pollution level at a given location and the contribution from local and upwind emission sources. From these PDFs it is possible to determine the frequency for a given region to have SO2 loading that exceeds the background amount. By comparing the OMI-observed long-term change in this frequency with meteorological data, we can gain insights into the effects of climate change (e.g., the weakening of the Asian monsoon) on regional air quality. Such insight allows for better interpretation of satellite measurements as well as better prediction of future pollution distribution as a changing climate gives way to changing weather patterns.
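    The exceedance-frequency idea in the abstract above can be sketched numerically: given a time series of column SO2 at one location, estimate a background level from the lower mode of its distribution and count how often loadings exceed it. The code below is an illustrative sketch on synthetic data; the quantile-based background and the factor-of-two threshold are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def exceedance_frequency(so2_du, background_quantile=0.5, threshold_factor=2.0):
    """Fraction of days a location's SO2 column exceeds its background level.

    so2_du: 1-D array of daily SO2 column amounts (Dobson Units).
    The background is taken as a low quantile of the distribution; the
    threshold_factor flags loadings well above that background mode.
    """
    so2_du = np.asarray(so2_du, dtype=float)
    background = np.quantile(so2_du, background_quantile)
    return float(np.mean(so2_du > threshold_factor * background))

# Synthetic bimodal series: clean-sector days plus polluted episodes
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.2, 0.05, 800),   # background mode
                         rng.normal(1.0, 0.2, 200)])   # local/upwind plumes
freq = exceedance_frequency(series)
```

    For the synthetic series the frequency recovers roughly the fraction of "polluted" days, which is the kind of quantity the multimodal PDFs in the study are used to extract.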

  17. Mercury in the sediments of the Marano and Grado Lagoon (northern Adriatic Sea): Sources, distribution and speciation

    NASA Astrophysics Data System (ADS)

    Acquavita, Alessandro; Covelli, Stefano; Emili, Andrea; Berto, Daniela; Faganeli, Jadran; Giani, Michele; Horvat, Milena; Koron, Neža; Rampazzo, Federico

    2012-11-01

    The existence of mining tailings in Idrija (Slovenia) and their subsequent transportation via the Isonzo River has been the primary source of mercury (Hg) in the northern Adriatic Sea for almost 500 years, making the Gulf of Trieste and the adjacent Marano and Grado Lagoon two of the most contaminated marine areas in the world. A further, more recent, contribution of Hg has been added by the operation of a chlor-alkali plant (CAP) located in the drainage basin flowing into the Lagoon. On the basis of previous research, as well as new data obtained from the "MIRACLE" project (Mercury Interdisciplinary Research for Appropriate Clam farming in a Lagoon Environment), the spatial distribution of Hg and its relationships with methylmercury (MeHg), organic matter and several geochemical parameters in surface sediments were investigated. The predominant and long-term impacts of the cinnabar-rich Isonzo River particulate matter in the Lagoon surface sediments are evident and confirmed by a decreasing concentration gradient from east (>11 μg g-1) to west (0.7 μg g-1). Hg originating from the CAP is only significant in the central sector of the Lagoon. Hg is primarily associated with fine-grained sediments (<16 μm), as a consequence of transport and dispersion from the fluvial source through littoral and tidal currents. However, speciation analyses highlighted the presence of Hg sulphides in the coarse sandy fraction of sediments from the eastern area, as expected given the origin of the sedimentary material. Unlike Hg, the distribution of MeHg (0.47-7.85 ng g-1) does not show a clear trend. MeHg constitutes, on average, 0.08% of total Hg, and the percentages are comparable to those obtained in similar lagoon environments. Higher MeHg concentrations in low to intermediate Hg-contaminated sediments indicate that metal availability is not a limiting factor for MeHg occurrence, thus suggesting a major role played by environmental conditions and/or speciation. The reasonably

  18. Distribution and Source Apportionment of Polycyclic Aromatic Hydrocarbons (PAHs) in Forest Soils from Urban to Rural Areas in the Pearl River Delta of Southern China

    PubMed Central

    Xiao, Yihua; Tong, Fuchun; Kuang, Yuanwen; Chen, Bufeng

    2014-01-01

    The upper layer of forest soils (0–20 cm depth) was collected from urban, suburban, and rural areas in the Pearl River Delta of Southern China to estimate the distribution and the possible sources of polycyclic aromatic hydrocarbons (PAHs). Total concentrations of PAHs in the forest soils decreased significantly along the urban–suburban–rural gradient, indicating the influence of anthropogenic emissions on the PAH distribution in forest soils. High and low molecular weight PAHs dominated in the urban and rural forest soils, respectively, implying the difference in emission sources between the areas. The values of PAH isomeric diagnostic ratios indicated that forest soil PAHs mainly originated from traffic emissions, mixed sources and coal/wood combustion in the urban, suburban and rural areas, respectively. Principal component analysis revealed that traffic emissions, coal burning and residential biomass combustion were the three primary contributors to forest soil PAHs in the Pearl River Delta. Long-range atmospheric transport of PAHs from the urban area might also affect the PAH distribution in rural forest soils. PMID:24599040
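    Isomeric diagnostic ratios of the kind used in the study above are straightforward to compute once species concentrations are available. The following sketch uses the common FLA/(FLA+PYR) and BaA/(BaA+CHR) ratios with literature-style interpretation thresholds; the threshold values and example concentrations are illustrative assumptions, not data from this paper.

```python
def pah_diagnostic_ratios(conc):
    """Common PAH isomer ratios used to infer emission sources.

    conc: dict of PAH concentrations (any consistent unit), keyed by
    abbreviation (FLA, PYR, BaA, CHR). The interpretation thresholds
    follow widely used literature conventions.
    """
    fla_pyr = conc["FLA"] / (conc["FLA"] + conc["PYR"])
    baa_chr = conc["BaA"] / (conc["BaA"] + conc["CHR"])
    indication = ("combustion (grass/wood/coal)" if fla_pyr > 0.5
                  else "petroleum combustion (traffic)" if fla_pyr > 0.4
                  else "petrogenic")
    return {"FLA/(FLA+PYR)": round(fla_pyr, 3),
            "BaA/(BaA+CHR)": round(baa_chr, 3),
            "indication": indication}

# Hypothetical soil concentrations (ng/g dry weight)
ratios = pah_diagnostic_ratios({"FLA": 52.0, "PYR": 44.0,
                                "BaA": 12.0, "CHR": 20.0})
```

    With these hypothetical inputs the FLA/(FLA+PYR) ratio exceeds 0.5, pointing toward grass/wood/coal combustion, which mirrors the coal/wood-combustion attribution the authors report for rural soils.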

  19. Two-Dimensional DOA and Polarization Estimation for a Mixture of Uncorrelated and Coherent Sources with Sparsely-Distributed Vector Sensor Array

    PubMed Central

    Si, Weijian; Zhao, Pinjiao; Qu, Zhiyu

    2016-01-01

    This paper presents an L-shaped sparsely-distributed vector sensor (SD-VS) array with four different antenna compositions. With the proposed SD-VS array, a novel two-dimensional (2-D) direction of arrival (DOA) and polarization estimation method is proposed to handle the scenario where uncorrelated and coherent sources coexist. The uncorrelated and coherent sources are separated based on the moduli of the eigenvalues. For the uncorrelated sources, coarse estimates are acquired by extracting the DOA information embedded in the steering vectors from estimated array response matrix of the uncorrelated sources, and they serve as coarse references to disambiguate fine estimates with cyclical ambiguity obtained from the spatial phase factors. For the coherent sources, four Hankel matrices are constructed, with which the coherent sources are resolved in a similar way as for the uncorrelated sources. The proposed SD-VS array requires only two collocated antennas for each vector sensor, thus the mutual coupling effects across the collocated antennas are reduced greatly. Moreover, the inter-sensor spacings are allowed beyond a half-wavelength, which results in an extended array aperture. Simulation results demonstrate the effectiveness and favorable performance of the proposed method. PMID:27258271

  20. Long-term cloud condensation nuclei number concentration, particle number size distribution and chemical composition measurements at regionally representative observatories

    NASA Astrophysics Data System (ADS)

    Schmale, Julia; Henning, Silvia; Decesari, Stefano; Henzing, Bas; Keskinen, Helmi; Sellegri, Karine; Ovadnevaite, Jurgita; Pöhlker, Mira L.; Brito, Joel; Bougiatioti, Aikaterini; Kristensson, Adam; Kalivitis, Nikos; Stavroulas, Iasonas; Carbone, Samara; Jefferson, Anne; Park, Minsu; Schlag, Patrick; Iwamoto, Yoko; Aalto, Pasi; Äijälä, Mikko; Bukowiecki, Nicolas; Ehn, Mikael; Frank, Göran; Fröhlich, Roman; Frumau, Arnoud; Herrmann, Erik; Herrmann, Hartmut; Holzinger, Rupert; Kos, Gerard; Kulmala, Markku; Mihalopoulos, Nikolaos; Nenes, Athanasios; O'Dowd, Colin; Petäjä, Tuukka; Picard, David; Pöhlker, Christopher; Pöschl, Ulrich; Poulain, Laurent; Prévôt, André Stephan Henry; Swietlicki, Erik; Andreae, Meinrat O.; Artaxo, Paulo; Wiedensohler, Alfred; Ogren, John; Matsuki, Atsushi; Yum, Seong Soo; Stratmann, Frank; Baltensperger, Urs; Gysel, Martin

    2018-02-01

    Aerosol-cloud interactions (ACI) constitute the single largest uncertainty in anthropogenic radiative forcing. To reduce the uncertainties and gain more confidence in the simulation of ACI, models need to be evaluated against observations, in particular against measurements of cloud condensation nuclei (CCN). Here we present a data set - ready to be used for model validation - of long-term observations of CCN number concentrations, particle number size distributions and chemical composition from 12 sites on 3 continents. Studied environments include coastal background, rural background, alpine sites, remote forests and an urban environment. As expected, CCN characteristics are highly variable across site categories. However, they also vary within them, most strongly in the coastal background group, where CCN number concentrations can vary by up to a factor of 30 within one season. In terms of particle activation behaviour, most continental stations exhibit very similar activation ratios (relative to particles > 20 nm) across the range of 0.1 to 1.0 % supersaturation. At the coastal sites the transition from particles being CCN inactive to becoming CCN active occurs over a wider range of the supersaturation spectrum. Several stations show strong seasonal cycles of CCN number concentrations and particle number size distributions, e.g. at Barrow (Arctic haze in spring), at the alpine stations (stronger influence of polluted boundary layer air masses in summer), the rain forest (wet and dry season) or Finokalia (wildfire influence in autumn). The rural background and urban sites exhibit relatively little variability throughout the year, while short-term variability can be high, especially at the urban site. The average hygroscopicity parameter, κ, calculated from the chemical composition of submicron particles was highest at the coastal site of Mace Head (0.6) and lowest at the rain forest station ATTO (0.2-0.3). We performed closure studies based on κ

  1. WATER QUALITY IN SOURCE WATER, TREATMENT, AND DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Most drinking water utilities practice the multiple-barrier concept as the guiding principle for providing safe water. This chapter discusses multiple barriers as they relate to the basic criteria for selecting and protecting source waters, including known and potential sources ...

  2. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    PubMed

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing a scientific basis for environmental planning for non-point source pollution prevention and control, and improving pollution-regulation efficiency, this paper established the Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index according to the "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could then be mapped using high-resolution remote sensing images. The results showed that the area of "source" of nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km(2) in 2008, and the "sink" was 172.06 km(2). The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of "source" weakens with increasing distance from the sea boundary, while that of "sink" strengthens. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Polycyclic aromatic hydrocarbons associated with total suspended particles and surface soils in Kunming, China: distribution, possible sources, and cancer risks.

    PubMed

    Yang, Xiaoxia; Ren, Dong; Sun, Wenwen; Li, Xiaoman; Huang, Bin; Chen, Rong; Lin, Chan; Pan, Xuejun

    2015-05-01

    The concentrations, distribution, possible sources, and cancer risks of polycyclic aromatic hydrocarbons (PAHs) in total suspended particles (TSPs) and surface soils collected from the same sampling spots were compared in Kunming, China. The total PAH concentrations were 9.35-75.01 ng/m(3) and 101.64-693.30 ng/g dry weight (d.w.), respectively, in TSPs and surface soils. Fluoranthene (FLA), pyrene (PYR), chrysene (CHR), and phenanthrene (PHE) were the abundant compounds in TSP samples, and phenanthrene (PHE), fluorene (FLO), fluoranthene (FLA), benzo[b]fluoranthene (BbF), and benzo[g,h,i]perylene (BghiP) were the abundant compounds in surface soil samples. The spatial distribution of PAHs in TSPs is closely related to the surrounding environment, and varied significantly as a result of variations in source emissions and changes in meteorology. However, the spatial distribution of PAHs in surface soils appears to correlate with the city's urbanization history, and high levels of PAHs were consistently observed in industrial districts or in the central or older districts of the city. Based on the diagnostic ratios and principal component analysis (PCA), vehicle emissions (especially diesel-powered vehicles) and coal and wood combustion were the main sources of PAHs in TSPs, and the combustion of wood and coal, and spills of unburnt petroleum, were the main sources of PAHs in the surface soils. The benzo[a]pyrene equivalent concentrations (BaPeq) for the TSP and surface soil samples were 0.16-2.57 ng/m(3) and 11.44-116.03 ng/g d.w., respectively. The incremental lifetime cancer risk (ILCR) from exposure to particulate PAHs ranged from 10(-4) to 10(-3), indicating a high potential carcinogenic risk, and the ILCR from exposure to soil PAHs was from 10(-7) to 10(-6), indicating virtual safety. These results showed that particle-bound PAHs posed a higher potential carcinogenic risk to humans than soil PAHs. And, the values of cancer risk for children were always higher than for adults, which
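    A BaPeq value of the sort reported above is a toxic-equivalency-factor (TEF) weighted sum over the measured PAH species. The sketch below uses a small set of Nisbet-and-LaGoy-style TEFs and made-up soil concentrations; both are assumptions for illustration, not values from the study.

```python
# Toxic equivalency factors relative to benzo[a]pyrene (Nisbet & LaGoy-style
# convention; illustrative values, not taken from this study).
TEF = {"BaP": 1.0, "BbF": 0.1, "CHR": 0.01,
       "PHE": 0.001, "FLA": 0.001, "PYR": 0.001}

def bap_equivalent(conc_ng):
    """TEF-weighted sum of PAH concentrations (same unit as the input,
    e.g. ng/g d.w. for soils or ng/m3 for particles)."""
    return sum(TEF.get(pah, 0.0) * c for pah, c in conc_ng.items())

# Hypothetical soil sample (ng/g d.w.)
soil = {"BaP": 8.0, "BbF": 30.0, "CHR": 50.0,
        "PHE": 120.0, "FLA": 90.0, "PYR": 70.0}
bapeq = bap_equivalent(soil)
```

    For this hypothetical sample the BaPeq is about 11.8 ng/g d.w., the same order as the lower end of the soil range reported above; note how the weighting makes BaP dominate even when lighter PAHs are far more abundant.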

  4. ON THE CONNECTION OF THE APPARENT PROPER MOTION AND THE VLBI STRUCTURE OF COMPACT RADIO SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moor, A.; Frey, S.; Lambert, S. B.

    2011-06-15

    Many of the compact extragalactic radio sources that are used as fiducial points to define the celestial reference frame are known to have proper motions detectable with long-term geodetic/astrometric very long baseline interferometry (VLBI) measurements. These changes can be as high as several hundred microarcseconds per year for certain objects. When imaged with VLBI at milliarcsecond (mas) angular resolution, these sources (radio-loud active galactic nuclei) typically show structures dominated by a compact, often unresolved 'core' and a one-sided 'jet'. The positional instability of compact radio sources is believed to be connected with changes in their brightness distribution structure. For the first time, we test this assumption in a statistical sense on a large sample rather than on only individual objects. We investigate a sample of 62 radio sources for which reliable long-term time series of astrometric positions as well as detailed 8 GHz VLBI brightness distribution models are available. We compare the characteristic direction of their extended jet structure and the direction of their apparent proper motion. We present our data and analysis method, and conclude that there is indeed a correlation between the two characteristic directions. However, there are cases where the {approx}1-10 mas scale VLBI jet directions are significantly misaligned with respect to the apparent proper motion direction.

  5. Performance improvement of continuous-variable quantum key distribution with an entangled source in the middle via photon subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua

    2017-03-01

    A suitable photon-subtraction operation can be exploited to improve the maximal transmission of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. Unfortunately, extending this improvement to practical quantum networks is difficult when the entangled source is located at a third party, which may be controlled by a malicious eavesdropper, instead of at one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only can lengthen the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that CVQKD with an entangled source in the middle (ESIM) from applying photon subtraction can well increase the secure transmission distance in both direct and reverse reconciliations of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, which is a specific attack by an untrusted entangled source in the framework of ESIM.

  6. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    NASA Astrophysics Data System (ADS)

    Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman

    2017-07-01

    This paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Various initial and boundary value problems for three-dimensional reaction-diffusion phenomena have been studied numerically with different methods; here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of L2, L∞ and relative error norms on randomly selected grids along time levels, for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results showed that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
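    The L2, L∞ and relative error norms used above to assess accuracy against analytical results can be written compactly. The sketch below evaluates them for a synthetic "numerical" solution built by perturbing a known exact profile; the test function and the size of the perturbation are illustrative assumptions.

```python
import numpy as np

def error_norms(numeric, exact):
    """L2 (RMS), L-infinity, and relative error norms between a numerical
    solution and an analytical reference sampled on the same grid."""
    numeric = np.asarray(numeric, dtype=float)
    exact = np.asarray(exact, dtype=float)
    diff = numeric - exact
    l2 = np.sqrt(np.mean(diff**2))                      # RMS error
    linf = np.max(np.abs(diff))                         # worst-case error
    rel = np.linalg.norm(diff) / np.linalg.norm(exact)  # relative error
    return l2, linf, rel

x = np.linspace(0.0, 1.0, 101)
exact = np.exp(-x)                            # assumed analytical profile
numeric = exact + 1e-4 * np.sin(np.pi * x)    # mimic a small scheme error
l2, linf, rel = error_norms(numeric, exact)
```

    In a convergence study these norms would be tabulated against grid spacing; a fourth-order scheme such as the Douglas method should show the norms shrinking by roughly a factor of 16 per grid-halving.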

  7. Distribution and Sources of Petroleum Hydrocarbons in Recent Sediments of the Imo River, SE Nigeria.

    PubMed

    Oyo-Ita, Inyang O; Oyo-Ita, Orok E; Dosunmu, Miranda I; Domínguez, Carmen; Bayona, Josep M; Albaigés, Joan

    2016-02-01

    The distribution of aliphatic and aromatic hydrocarbons in surface sediments of the lower course of the Imo River (Nigeria) was investigated to determine the sources and fate of these compounds. The aliphatic fraction is characterized by a widespread contribution of highly weathered/biodegraded hydrocarbon residues (reflected in the absence of prominent n-alkane peaks coupled with the presence of 17α(H),21β(H)-25-norhopane, an indicator of heavy hydrocarbon biodegradation) of Nigerian crude oils (confirmed by the occurrence of 18α(H)-oleanane, a compound characteristic of oils of deltaic origin). The concentrations of polycyclic aromatic hydrocarbons (PAHs), ranging from 48 to 117 ng/g dry weight (dw; ∑13PAHs), indicate a moderate pollution, possibly lowered by the sandy lithology and low organic carbon (OC) content of the sediments. Concentrations slightly decrease towards the estuary of the river, probably because these stations are affected by tidal flushing of pollutants adsorbed on sediment particles and carried away by occasional storms to the Atlantic Ocean. A number of PAH ratios, including parent/alkylated and isomeric compounds, indicate a predominance of petrogenic sources, with a low contribution of pyrolytic inputs, particularly of fossil fuel combustion. On the basis of OC/ON (>10) and Per/ΣPAHpenta- (>10) values, a diagenetic terrigenous OC was proposed as a source of perylene to the river.

  8. SU-E-T-102: Determination of Dose Distributions and Water-Equivalence of MAGIC-F Polymer Gel for 60Co and 192Ir Brachytherapy Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quevedo, A; Nicolucci, P

    2014-06-01

    Purpose: To analyse the water-equivalence of MAGIC-f polymer gel for {sup 60}Co and {sup 192}Ir clinical brachytherapy sources, through dose distributions simulated with the PENELOPE Monte Carlo code. Methods: The real geometries of {sup 60}Co (BEBIG, model Co0.A86) and {sup 192}Ir (Varian, model GammaMed Plus) clinical brachytherapy sources were modelled in the PENELOPE Monte Carlo simulation code. The most probable photon emission lines were used for both sources: 17 emission lines for {sup 192}Ir and 12 lines for {sup 60}Co. The dose distributions were obtained in a cubic homogeneous phantom of water or gel (30 × 30 × 30 cm{sup 3}), with the source positioned in the middle of the phantom. In all cases the number of simulation showers was kept constant at 10{sup 9} particles. A specific material for the gel was constructed in PENELOPE using the weight fractions of the MAGIC-f components: wH = 0.1062, wC = 0.0751, wN = 0.0139, wO = 0.8021, wS = 2.58 × 10{sup −6} and wCu = 5.08 × 10{sup −6}. The voxel size in the dose distributions was 0.6 mm. Dose distribution maps in the longitudinal and radial directions through the centre of the source were used to analyse the water-equivalence of MAGIC-f. Results: For the {sup 60}Co source, the maximum differences in relative dose between gel and water were 0.65% and 1.90% in the radial and longitudinal directions, respectively. For {sup 192}Ir, the maximum differences in relative dose were 0.30% and 1.05% in the radial and longitudinal directions, respectively. The equivalence of the materials can also be verified through the effective atomic number and density of each material: Zef = 7.07 and ρ = 1.060 g/cm{sup 3} for MAGIC-f, and Zef = 7.22 for water. Conclusion: The results showed that MAGIC-f is water-equivalent, and consequently suitable to simulate soft tissue at cobalt and iridium energies. Hence, the gel can be used as a dosimeter in clinical applications. Further investigation of its use in a clinical protocol is needed.

  9. Open source software integrated into data services of Japanese planetary explorations

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.

    2015-12-01

    Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data with a simple method such as HTTP directory listing for long-term preservation, while also aiming to provide rich web applications for ease of access with modern web technologies based on open source software. This presentation showcases the availability of open source software through our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS). As the WMS server, the open source software MapServer is adopted. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for SELENE's data; the main purpose of this application is public outreach, and the NASA World Wind Java SDK was used for its development. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations. It uses Highcharts to draw graphs in web browsers. FLOW is a tool to simulate the Field-Of-View of an instrument onboard a spacecraft. This tool itself is open source software developed by JAXA/ISAS, and the license is the BSD 3-Clause License. The SPICE Toolkit is essential to compile FLOW. The SPICE Toolkit is also open source software, developed by NASA/JPL, and its website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool to integrate DARTS services.

  10. Mercury sources, distribution, and bioavailability in the North Pacific Ocean: Insights from data and models

    USGS Publications Warehouse

    Sunderland, E.M.; Krabbenhoft, D.P.; Moreau, J.W.; Strode, S.A.; Landing, W.M.

    2009-01-01

    Fish harvested from the Pacific Ocean are a major contributor to human methylmercury (MeHg) exposure. Limited oceanic mercury (Hg) data, particularly for MeHg, have confounded our understanding of linkages between sources, methylation sites, and concentrations in marine food webs. Here we present methylated (MeHg and dimethylmercury (Me2Hg)) and total Hg concentrations from 16 hydrographic stations in the eastern North Pacific Ocean. We use these data in combination with information from previous cruises and coupled atmospheric-oceanic modeling results to better understand controls on Hg concentrations, distribution, and bioavailability. Total Hg concentrations (average 1.14 ± 0.38 pM) are elevated relative to previous cruises. Modeling results agree with observed increases and suggest that at present atmospheric Hg deposition rates, basin-wide Hg concentrations will double relative to circa 1995 by 2050. Methylated Hg accounts for up to 29% of the total Hg in subsurface waters (average 260 ± 114 fM). We observed lower ambient methylated Hg concentrations in the euphotic zone and older, deeper water masses, which likely result from decay of MeHg and Me2Hg when net production is not occurring. We found a significant, positive linear relationship between methylated Hg concentrations and rates of organic carbon remineralization (r2 = 0.66, p < 0.001). These results provide evidence for the importance of particulate organic carbon (POC) transport and remineralization on the production and distribution of methylated Hg species in marine waters. Specifically, settling POC provides a source of inorganic Hg(II) to microbially active subsurface waters and can also provide a substrate for microbial activity facilitating water column methylation. Copyright 2009 by the American Geophysical Union.
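    The reported relationship between methylated Hg and organic carbon remineralization (r2 = 0.66) is an ordinary least-squares fit. The sketch below shows how such an r2 is computed; the data are synthetic, and the sample size, slope, and noise level are assumptions chosen only to make the example run.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple least-squares line y ~ a + b*x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b, a = np.polyfit(x, y, 1)            # slope, intercept
    residuals = y - (a + b * x)
    ss_res = np.sum(residuals**2)         # unexplained variance
    ss_tot = np.sum((y - y.mean())**2)    # total variance
    return 1.0 - ss_res / ss_tot

# Synthetic stand-ins for remineralization rate and methylated Hg (fM)
rng = np.random.default_rng(1)
remin = rng.uniform(0.0, 10.0, 40)
mehg = 50.0 + 20.0 * remin + rng.normal(0.0, 30.0, 40)
r2 = r_squared(remin, mehg)
```

    With this noise level the fit yields an r2 in the same general range as the study's 0.66: strong enough to support a mechanistic link, but far from deterministic.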

  11. Development of a hemispherical rotational modulation collimator system for imaging spatial distribution of radiation sources

    NASA Astrophysics Data System (ADS)

    Na, M.; Lee, S.; Kim, G.; Kim, H. S.; Rho, J.; Ok, J. G.

    2017-12-01

    Detecting and mapping the spatial distribution of radioactive materials is of great importance for environmental and security issues. We design and present a novel hemispherical rotational modulation collimator (H-RMC) system which can visualize the location of a radiation source by collecting signals from incident rays that pass through collimator masks. The H-RMC system comprises a servo motor-controlled rotating module and a hollow heavy-metallic hemisphere with slits/slats equally spaced at the same angle subtended from the main axis. In addition, we designed an auxiliary instrument to test the imaging performance of the H-RMC system, comprising a high-precision x- and y-axis staging station on which radiation sources of various shapes can be mounted. We fabricated the H-RMC system, which can be operated in a fully automated fashion through a computer-based controller, and verified the accuracy and reproducibility of the system by measuring the rotational and linear positions against the programmed values. Our H-RMC system may provide a pivotal tool for spatial radiation imaging with high reliability and accuracy.

  12. Spatial distribution and source identification of heavy metals in surface soils in a typical coal mine city, Lianyuan, China.

    PubMed

    Liang, Jie; Feng, Chunting; Zeng, Guangming; Gao, Xiang; Zhong, Minzhou; Li, Xiaodong; Li, Xin; He, Xinyue; Fang, Yilong

    2017-06-01

    In this study, we investigated the pollution degree and spatial distribution of heavy metals and determined their sources in topsoil in a typical coal mine city, Lianyuan, Hunan Province, China. We collected 6078 surface soil samples across different land use types and measured the concentrations of Zn, Cd, Cu, Hg, Pb, Sb, As, Mo, V, Mn, Fe and Cr. The average contents of all heavy metals were lower than their corresponding Grade II values of the Chinese Soil Quality Standard, with the exception of Hg. However, the average contents of the twelve heavy metals, except for Mn, exceeded their background levels in soils in Hunan Province. Based on one-way analysis of variance (ANOVA), the contents of Cu, Zn, Cd, Pb, Hg, Mo and V were related to anthropogenic sources, and there were statistically significant differences in their concentrations among different land use patterns. The spatial variation of heavy metals was visualized by GIS. The PMF model was used to ascertain the contamination sources of the twelve heavy metals and apportion their source contributions in Lianyuan soils. The results showed that the source contributions of the natural source, atmospheric deposition, industrial activities and agricultural activities accounted for 33.6%, 26.05%, 23.44% and 16.91%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
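    Once a PMF model has produced a non-negative contribution matrix, percent source contributions like the 33.6%/26.05%/23.44%/16.91% split above follow by normalizing factor totals. The sketch below assumes a hypothetical samples-by-factors matrix g; it illustrates only the normalization bookkeeping, not the PMF factorization itself.

```python
import numpy as np

def source_contributions(g):
    """Percent contribution of each source factor, from a PMF-style
    contribution matrix g (rows = samples, columns = factors, all >= 0)."""
    g = np.asarray(g, dtype=float)
    totals = g.sum(axis=0)               # total contribution per factor
    return 100.0 * totals / totals.sum() # normalize to percentages

# Hypothetical contributions of four factors at three samples
g = np.array([[3.0, 2.5, 2.0, 1.5],
              [3.5, 2.6, 2.6, 1.8],
              [3.3, 2.7, 2.4, 1.6]])
pct = source_contributions(g)
```

    The resulting percentages sum to 100 by construction; in a real PMF workflow each column of g would first be scaled by its factor profile so that contributions are in concentration units.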

  13. Excitation efficiency of an optical fiber core source

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.; Tai, Alan C.

    1992-01-01

    The exact field solution of a step-index profile fiber is used to determine the excitation efficiency of a distribution of sources in the core of an optical fiber. Previous results for a thin-film cladding source distribution are used for comparison with the core-source counterpart. The behavior of power efficiency with the fiber parameters is examined and found to be similar to that exhibited by cladding sources. It is also found that a core-source fiber is two orders of magnitude more efficient than a fiber with a bulk distribution of cladding sources. This result agrees qualitatively with previous ones obtained experimentally.

  14. Distribution and source analysis of aluminum in rivers near Xi'an City, China.

    PubMed

    Wang, Dongqi; He, Yanling; Liang, Jidong; Liu, Pei; Zhuang, Pengyu

    2013-02-01

    To study the status and sources of aluminum (Al) contamination, a total of 21 sampling sites along six rivers near Xi'an City (Shaanxi Province, China) were investigated during 2008-2010. The results indicated that the average concentration of total Al (Al(t)) in the six rivers increased by a factor of 1.6 from 2008 to 2010. The spatial distribution of Al(t) concentrations differed significantly among the rivers near Xi'an City, ranging from 367 μg/L (Bahe River) to 1,978 μg/L (Taiping River). The Al(t) concentration was highest near an industrial area for pulp and paper-making (2,773 μg/L), where the Al level greatly exceeded the water quality criteria of both the USA (Criterion Continuous Concentration, 87 μg/L) and Canada (100 μg/L). The average concentration of inorganic monomeric aluminum (Al(im)) was 72 μg/L, which could pose a threat to fish and other aquatic life in the rivers. The concentrations of exchangeable Al (Al(ex)) in the sampled sediment of the Taiping River were relatively high, offering a possible explanation for the increasing Al concentrations in the rivers near Xi'an City. Furthermore, increasing Al levels have been detected in the upstream watershed near Xi'an City in recent years, which might indicate another notable pollution source of Al.

  15. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparing temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  16. Distribution, richness, quality, and thermal maturity of source rock units on the North Slope of Alaska

    USGS Publications Warehouse

    Peters, K.E.; Bird, K.J.; Keller, M.A.; Lillis, P.G.; Magoon, L.B.

    2003-01-01

    Four source rock units on the North Slope were identified, characterized, and mapped to better understand the origin of petroleum in the area: Hue-gamma ray zone (Hue-GRZ), pebble shale unit, Kingak Shale, and Shublik Formation. Rock-Eval pyrolysis, total organic carbon analysis, and well logs were used to map the present-day thickness, organic quantity (TOC), quality (hydrogen index, HI), and thermal maturity (Tmax) of each unit. To map these units, we screened all available geochemical data for wells in the study area and assumed that the top and bottom of the oil window occur at Tmax of ~440°C and ~470°C, respectively. Based on several assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original organic richness prior to thermal maturation.
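
    The fractional-conversion step described above can be sketched with a hydrogen-index mass balance. This is a minimal illustration, not the authors' workflow: the specific formula and the 1200 mg HC/g TOC ceiling for the hydrogen index are assumptions introduced here.

```python
def fractional_conversion(hi_present, hi_original, hi_max=1200.0):
    """Approximate transformation ratio (fraction of kerogen converted to
    petroleum) from present-day and original hydrogen indices, both in
    mg HC/g TOC. hi_max is an assumed theoretical maximum hydrogen index."""
    return 1.0 - (hi_present * (hi_max - hi_original)) / (
        hi_original * (hi_max - hi_present)
    )

# Immature rock: present-day HI equals original HI, so no conversion yet.
print(fractional_conversion(400.0, 400.0))  # 0.0
# Mature rock whose HI has dropped from 400 to 100 mg HC/g TOC.
print(round(fractional_conversion(100.0, 400.0), 2))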

  17. Tidal river sediments in the Washington, D.C. area. 11. Distribution and sources of organic contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wade, T.L.; Velinsky, D.J.; Reinharz, E.

    1994-06-01

    Concentrations of aliphatic, aromatic, and chlorinated hydrocarbons were determined in 33 surface-sediment samples taken from the Tidal Basin, Washington Ship Channel, and the Anacostia and Potomac rivers in Washington, D.C. In conjunction with these samples, selected storm sewers and outfalls were also sampled to help elucidate general sources of contamination to the area. All of the sediments contained detectable concentrations of aliphatic and aromatic hydrocarbons, DDT (total dichlorodiphenyltrichloroethane), DDE (dichlorodiphenyldichloroethene), DDD (dichlorodiphenyldichloroethane), PCBs (total polychlorinated biphenyls) and total chlordanes (oxy-, {alpha}-, and {gamma}-chlordane and cis + trans-nonachlor). Sediment concentrations of most contaminants were highest in the Anacostia River just downstream of the Washington Navy Yard, except for total chlordane, which appeared to have upstream sources in addition to storm and combined sewer runoff. This area has the highest number of storm and combined sewer outfalls in the river. Potomac River stations had lower concentrations than other stations. Polycyclic aromatic hydrocarbon, saturated hydrocarbon, and unresolved complex mixture (UCM) distributions reflect mixtures of combustion products and direct discharges of petroleum products. Sources of PCBs appear to be related to specific outfalls, while hydrocarbon inputs, especially PAHs, are diffuse and may be related to street runoff. This study indicates that in large urban areas, nonpoint sources deliver substantial amounts of contaminants to ecosystems through storm and combined sewer systems, and control of these inputs must be addressed. 33 refs., 6 figs., 3 tabs.

  18. Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2015-12-01

    The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.
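
    The aggregation described above can be illustrated with a toy Monte Carlo sketch. Everything numerical below is a hypothetical placeholder, not a value from the study: the zone list, rates, Pareto exponents, minimum moments, distances, and the log-linear amplitude regression.

```python
import math
import random

random.seed(7)

# Hypothetical subduction zones: (name, relative event rate, Pareto
# exponent beta, minimum seismic moment in N*m, distance to station in km).
ZONES = [
    ("Zone A", 0.12, 0.66, 1e20, 4000.0),
    ("Zone B", 0.10, 0.70, 1e20, 7500.0),
    ("Zone C", 0.08, 0.62, 1e20, 9500.0),
]

def sample_moment(beta, m_min):
    """Draw a seismic moment from a Pareto distribution (inverse CDF)."""
    return m_min * random.random() ** (-1.0 / beta)

def amplitude(m0, distance_km):
    """Hypothetical regression: tide-gauge amplitude (m) from seismic
    moment, corrected for epicentral distance."""
    log_a = 0.5 * (math.log10(m0) - 20.0) - math.log10(distance_km / 1000.0) - 1.0
    return 10.0 ** log_a

def mixture_exceedance(threshold_m, n=20000):
    """Rate-weighted mixture over zones: P(amplitude > threshold)."""
    total_rate = sum(z[1] for z in ZONES)
    p = 0.0
    for _, rate, beta, m_min, dist in ZONES:
        hits = sum(
            amplitude(sample_moment(beta, m_min), dist) > threshold_m
            for _ in range(n)
        )
        p += (rate / total_rate) * (hits / n)
    return p

print(mixture_exceedance(0.1), mixture_exceedance(10.0))
```

    The heavy Pareto tails of the per-zone moment distributions carry through the regression into a heavy-tailed amplitude mixture, which is the mechanism the abstract invokes to explain the observed power-law-like gauge records.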

  19. Oil gravity distribution in the diatomite at South Belridge Field, Kern County, CA: Implications for oil sourcing and migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, D.W.; Sande, J.J.; Doe, P.H.

    1995-04-01

    Understanding the oil gravity distribution in the Belridge Diatomite has led to economic infill development and to enhanced recovery methods targeted at specific oil properties. To date, more than 100 wells have provided samples used to determine the vertical and areal distribution of oil gravity in the field. Detailed geochemical analyses were also conducted on many of the oil samples to establish different oil types and relative maturities, and to identify transformed oils. The geochemical analysis also helped identify source rock expulsion temperatures and depositional environments. The data suggest that the Belridge diatomite has been charged by a single hydrocarbon source rock type and was generated over a relatively wide range of temperatures. Map and statistical data support two distinct oil segregation processes occurring post-expulsion. Normal gravity segregation within depositional cycles of diatomite has caused the lightest oils to migrate to the crests of individual cycle structures. Some data suggest a loss of the light-end oils in the uppermost cycles to the Tulare Formation above, or through early biodegradation. Structural rotation after early oil expulsion has also left older, heavier oils concentrated on the east flank of the structure. With the addition of other samples from the south central San Joaquin area, we have been able to tie the Belridge diatomite hydrocarbon charge into a regional framework. We have also enhanced our ability to predict oil gravity and primary well recovery by unraveling some key components of the diatomite oil source and migration history.

  20. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  1. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... certification or under operating rules and that requires a power supply is an “essential load” on the power supply. The power sources and the system must be able to supply the following power loads in probable... source of power is required, after any failure or malfunction in any one power supply system...

  2. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... certification or under operating rules and that requires a power supply is an “essential load” on the power supply. The power sources and the system must be able to supply the following power loads in probable... source of power is required, after any failure or malfunction in any one power supply system...

  3. Toward a Mechanistic Source Term in Advanced Reactors: A Review of Past U.S. SFR Incidents, Experiments, and Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Brunett, Acacia J.; Grabaskas, David

    In 2015, as part of a Regulatory Technology Development Plan (RTDP) effort for sodium-cooled fast reactors (SFRs), Argonne National Laboratory investigated the current state of knowledge of source term development for a metal-fueled, pool-type SFR. This paper provides a summary of past domestic metal-fueled SFR incidents and experiments and highlights information relevant to source term estimation that was gathered as part of the RTDP effort. The incidents described in this paper include fuel pin failures at the Sodium Reactor Experiment (SRE) facility in July of 1959, the Fermi I meltdown that occurred in October of 1966, and the repeated melting of a fuel element within an experimental capsule at the Experimental Breeder Reactor II (EBR-II) from November 1967 to May 1968. The experiments described in this paper include the Run-Beyond-Cladding-Breach tests that were performed at EBR-II in 1985 and a series of severe transient overpower tests conducted at the Transient Reactor Test Facility (TREAT) in the mid-1980s.

  4. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation took about 173 s for the prostate plan and 73 s for the breast plan, with 1% statistical error.

  5. Distribution and sources of polycyclic aromatic hydrocarbons and phthalic acid esters in water and surface sediment from the Three Gorges Reservoir.

    PubMed

    Lin, Li; Dong, Lei; Meng, Xiaoyang; Li, Qingyun; Huang, Zhuo; Li, Chao; Li, Rui; Yang, Wenjun; Crittenden, John

    2018-07-01

    After the impoundment of the Three Gorges Reservoir (TGR), the hydrological situation of the reservoir has changed greatly. The concentration and distribution of typical persistent organic pollutants in water and sediment have also changed accordingly. In this study, the concentration, distribution and potential sources of 16 polycyclic aromatic hydrocarbons (PAHs) and 6 phthalic acid esters (PAEs) during the water drawdown and impoundment periods were investigated in water and sediment from the TGR. According to our results, PAHs and PAEs showed temporal and spatial variations. The mean ΣPAH and ΣPAE concentrations in water and sediment were both higher during the water impoundment period than during the water drawdown period. The water samples from the main stream showed larger ΣPAH concentration fluctuations than those from tributaries. Both the PAH and PAE concentrations meet the Chinese national water environmental quality standard (GB 3838-2002). PAH monomers with 2-3 rings and 4 rings were dominant in water, and 4-ring and 5-6-ring PAHs were dominant in sediment. Di-n-butyl phthalate (DBP) and di-2-ethylhexyl phthalate (DEHP) were the dominant PAE pollutants in the TGR. DBP and DEHP had the highest concentrations in water and sediment, respectively. The main source of PAHs in water from the TGR was petroleum and emissions from coal and biomass combustion, whereas the main sources of PAHs in sediments included coal and biomass combustion, petroleum, and petroleum combustion. The main source of PAEs in water was domestic waste, and the plastics and heavy chemical industries were the main sources of PAEs in sediment. Copyright © 2017. Published by Elsevier B.V.

  6. Mass discharge assessment at a brominated DNAPL site: Effects of known DNAPL source mass removal

    NASA Astrophysics Data System (ADS)

    Johnston, C. D.; Davis, G. B.; Bastow, T. P.; Woodbury, R. J.; Rao, P. S. C.; Annable, M. D.; Rhodes, S.

    2014-08-01

    Management and closure of contaminated sites is increasingly being proposed on the basis of mass flux of dissolved contaminants in groundwater. Better understanding of the links between source mass removal and contaminant mass fluxes in groundwater would allow greater acceptance of this metric in dealing with contaminated sites. Our objectives here were to show how measurements of the distribution of contaminant mass flux and the overall mass discharge emanating from the source under undisturbed groundwater conditions could be related to the processes and extent of source mass depletion. In addition, these estimates of mass discharge were sought in the application of agreed remediation targets set in terms of pumped groundwater quality from offsite wells. Results are reported from field studies conducted over a 5-year period at a brominated DNAPL (tetrabromoethane, TBA; and tribromoethene, TriBE) site located in suburban Perth, Western Australia. Groundwater fluxes (qw; L3/L2/T) and mass fluxes (Jc; M/L2/T) of dissolved brominated compounds were simultaneously estimated by deploying Passive Flux Meters (PFMs) in wells in a heterogeneous layered aquifer. PFMs were deployed in control plane (CP) wells immediately down-gradient of the source zone, before (2006) and after (2011) 69-85% of the source mass was removed, mainly by groundwater pumping from the source zone. The high-resolution (26-cm depth interval) measures of qw and Jc along the source CP allowed investigation of the DNAPL source-zone architecture and impacts of source mass removal. Comparable estimates of total mass discharge (MD; M/T) across the source zone CP reduced from 104 g day-1 to 24-31 g day-1 (70-77% reductions). Importantly, this mass discharge reduction was consistent with the estimated proportion of source mass remaining at the site (15-31%). That is, a linear relationship between mass discharge and source mass is suggested.
The spatial detail of groundwater and mass flux distributions also
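
    The linear discharge-mass relationship suggested above can be checked with a quick back-of-the-envelope calculation using the figures quoted in the abstract. This is a sketch under the assumption that mass discharge scales proportionally with remaining source mass.

```python
initial_discharge = 104.0  # total mass discharge before removal, g/day

# 69-85% of the source mass was removed; a proportional (linear) model
# predicts the same fractional reduction in mass discharge.
for fraction_removed in (0.69, 0.85):
    predicted = initial_discharge * (1.0 - fraction_removed)
    print(round(predicted, 2))
```

    The predicted band of roughly 15.6-32.2 g/day brackets the observed post-removal discharge of 24-31 g/day, which is what makes the linear interpretation plausible.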

  7. Mass discharge assessment at a brominated DNAPL site: Effects of known DNAPL source mass removal.

    PubMed

    Johnston, C D; Davis, G B; Bastow, T P; Woodbury, R J; Rao, P S C; Annable, M D; Rhodes, S

    2014-08-01

    Management and closure of contaminated sites is increasingly being proposed on the basis of mass flux of dissolved contaminants in groundwater. Better understanding of the links between source mass removal and contaminant mass fluxes in groundwater would allow greater acceptance of this metric in dealing with contaminated sites. Our objectives here were to show how measurements of the distribution of contaminant mass flux and the overall mass discharge emanating from the source under undisturbed groundwater conditions could be related to the processes and extent of source mass depletion. In addition, these estimates of mass discharge were sought in the application of agreed remediation targets set in terms of pumped groundwater quality from offsite wells. Results are reported from field studies conducted over a 5-year period at a brominated DNAPL (tetrabromoethane, TBA; and tribromoethene, TriBE) site located in suburban Perth, Western Australia. Groundwater fluxes (qw; L(3)/L(2)/T) and mass fluxes (Jc; M/L(2)/T) of dissolved brominated compounds were simultaneously estimated by deploying Passive Flux Meters (PFMs) in wells in a heterogeneous layered aquifer. PFMs were deployed in control plane (CP) wells immediately down-gradient of the source zone, before (2006) and after (2011) 69-85% of the source mass was removed, mainly by groundwater pumping from the source zone. The high-resolution (26-cm depth interval) measures of qw and Jc along the source CP allowed investigation of the DNAPL source-zone architecture and impacts of source mass removal. Comparable estimates of total mass discharge (MD; M/T) across the source zone CP reduced from 104 g day(-1) to 24-31 g day(-1) (70-77% reductions). Importantly, this mass discharge reduction was consistent with the estimated proportion of source mass remaining at the site (15-31%). That is, a linear relationship between mass discharge and source mass is suggested.
The spatial detail of groundwater and mass flux distributions

  8. Progress towards practical device-independent quantum key distribution with spontaneous parametric down-conversion sources, on-off photodetectors, and entanglement swapping

    NASA Astrophysics Data System (ADS)

    Seshadreesan, Kaushik P.; Takeoka, Masahiro; Sasaki, Masahide

    2016-04-01

    Device-independent quantum key distribution (DIQKD) guarantees unconditional security of a secret key without making assumptions about the internal workings of the devices used for distribution. It does so using a loophole-free violation of a Bell inequality. The primary challenge in realizing DIQKD in practice is the detection-loophole problem that is inherent to photonic tests of Bell inequalities over lossy channels. We revisit the proposal of Curty and Moroder [Phys. Rev. A 84, 010304(R) (2011), 10.1103/PhysRevA.84.010304] to use a linear-optics-based entanglement-swapping relay (ESR) to counter this problem. We consider realistic models for the entanglement sources and photodetectors: more precisely, (a) polarization-entangled states based on pulsed spontaneous parametric down-conversion sources, including higher-order multiphoton components and multimode spectral structure, and (b) on-off photodetectors with nonunit efficiencies and nonzero dark-count probabilities. We show that the ESR-based scheme is robust against the above imperfections and enables positive key rates at distances much larger than what is possible otherwise.

  9. Spatial distribution, enrichment, and source of environmentally important elements in Batticaloa lagoon, Sri Lanka.

    PubMed

    Adikaram, Madurya; Pitawala, Amarasooriya; Ishiga, Hiroaki; Jayawardana, Daham

    2017-01-01

    The present paper is the first documentation of the distribution and contamination status of environmentally important elements in the superficial sediments of the Batticaloa lagoon, which is connected to the largest bay in the world. Surface sediment samples were collected from 34 sites across the lagoon. Concentrations of elements such as As, Cr, Cu, Fe, Nb, Ni, Pb, Sc, Sr, Th, V, Y, Zn, and Zr were measured by X-ray fluorescence analysis. Geochemically, the lagoon has three different zones, influenced mainly by freshwater sources, marine fronts, and intermediate mixing zones. The marine sediment quality standards indicate that Zr and Th values are exceeded throughout the lagoon. According to the freshwater sediment quality standards, Cr levels at all sampling sites exceed the threshold effect level (TEL), and 17% of them are even above the probable effect level (PEL). Most sampling sites in the channel discharge areas show minor enrichment of Cu, Ni, and Zn with respect to the TEL. Contamination indices show that the lagoon mouth area is enriched with As. Statistical analysis implies that discharges from agricultural channels and marine fluxes affect the spatial distribution of the measured elements. Further research is required to understand the rate of contamination in the studied marine system.
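
    The kind of enrichment screening mentioned above is commonly done with an enrichment factor normalized to a conservative reference element. The sketch below assumes Fe as the reference; the background concentrations and the example site values are hypothetical placeholders, not data from this study.

```python
# Assumed background concentrations (mg/kg); placeholders for illustration.
BACKGROUND = {"Fe": 46700.0, "Cr": 90.0, "Cu": 45.0, "Zn": 95.0}

def enrichment_factor(sample, element, reference="Fe"):
    """EF = (C_el / C_ref)_sample / (C_el / C_ref)_background.
    EF near 1 means crustal levels; larger values suggest enrichment."""
    sample_ratio = sample[element] / sample[reference]
    background_ratio = BACKGROUND[element] / BACKGROUND[reference]
    return sample_ratio / background_ratio

# Hypothetical site with elevated Cr relative to background.
site = {"Fe": 40000.0, "Cr": 180.0, "Cu": 40.0, "Zn": 90.0}
print(round(enrichment_factor(site, "Cr"), 3))
```

    Thresholds such as EF < 2 for "deficiency to minimal enrichment" are conventional cut-offs; the actual indices used by the authors are not specified in the abstract.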

  10. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the observed secondary voltage data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach is successfully able to reveal the presence of the anomaly and to invert its Cole
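
    The per-cell fitting step at the end can be sketched as follows. The series form of the time-domain Cole-Cole step-off decay (a Mittag-Leffler expansion) is standard; the grid-search fit, the parameter grids, and the synthetic data are illustrative assumptions, not the authors' inversion code.

```python
import math

def cole_cole_decay(t, tau, c, n_terms=60):
    """Normalized time-domain Cole-Cole step-off decay via its series:
    m(t)/m0 = sum_n (-1)^n (t/tau)^(n*c) / Gamma(1 + n*c).
    Numerically well behaved for t not much larger than tau."""
    return sum(
        (-1.0) ** n * (t / tau) ** (n * c) / math.gamma(1.0 + n * c)
        for n in range(n_terms)
    )

def fit_cole_cole(times, decays, taus, cs):
    """Least-squares grid search for relaxation time tau and exponent c
    against an observed chargeability decay curve (one cell)."""
    best = None
    for tau in taus:
        for c in cs:
            sse = sum(
                (cole_cole_decay(t, tau, c) - d) ** 2
                for t, d in zip(times, decays)
            )
            if best is None or sse < best[0]:
                best = (sse, tau, c)
    return best[1], best[2]

# Synthetic decay generated with tau = 2.0 s and c = 0.5, then recovered.
times = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
data = [cole_cole_decay(t, 2.0, 0.5) for t in times]
print(fit_cole_cole(times, data, taus=[0.5, 1.0, 2.0, 4.0], cs=[0.3, 0.5, 0.7]))
# → (2.0, 0.5)
```

    In the paper's workflow the input to this fit would be the recovered intrinsic chargeability decay in each cell; in practice one would also use a continuous optimizer rather than a coarse grid.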

  11. 'Distributed health literacy': longitudinal qualitative analysis of the roles of health literacy mediators and social networks of people living with a long-term health condition.

    PubMed

    Edwards, Michelle; Wood, Fiona; Davies, Myfanwy; Edwards, Adrian

    2015-10-01

    The role of one's social network in the process of becoming health literate is not well understood. We aim to explain the 'distributed' nature of health literacy and how people living with a long-term condition draw on their social network for support with health literacy-related tasks such as managing their condition, interacting with health professionals and making decisions about their health. This paper reports a longitudinal qualitative interview and observation study of the development and practice of health literacy in people with long-term health conditions, living in South Wales, UK. Participants were recruited from health education groups (n = 14) and community education venues (n = 4). The 44 interview transcripts were analysed using the 'Framework' approach. Health literacy was distributed through family and social networks, and participants often drew on the health literacy skills of others to seek, understand and use health information. Those who passed on their health literacy skills acted as health literacy mediators and supported participants in becoming more health literate about their condition. The distribution of health literacy supported participants to manage their health, become more active in health-care decision-making processes, communicate with health professionals and come to terms with living with a long-term condition. Participants accessed health literacy mediators through personal and community networks. Distributed health literacy is a potential resource for managing one's health, communicating with health professionals and making health decisions. © 2013 John Wiley & Sons Ltd.

  12. Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Fure, A. D.; Jawitz, J. W.

    2006-12-01

    Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
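
    The power-function analog mentioned above can be illustrated with a minimal sketch, assuming the commonly used form C/C0 = (M/M0)**gamma together with a simple explicit time-stepping of the source mass balance. All parameter values are illustrative, not from the study.

```python
def relative_concentration(mass_fraction, gamma):
    """Power-function source depletion model: C/C0 = (M/M0)**gamma."""
    return mass_fraction ** gamma

def deplete_source(m0, c0, q, gamma, dt, steps):
    """Explicit Euler integration of dM/dt = -q * C(M), where the
    flux-averaged concentration C follows the power-function model.
    Units: m0 in kg, c0 in kg/m^3, q in m^3/day, dt in days."""
    masses = [m0]
    m = m0
    for _ in range(steps):
        c = c0 * relative_concentration(m / m0, gamma)
        m = max(m - q * c * dt, 0.0)
        masses.append(m)
    return masses

# Illustrative run: 100 kg source, 50 mg/L initial concentration,
# 1 m^3/day of groundwater flushing the source zone, one year, gamma = 1.
history = deplete_source(m0=100.0, c0=0.05, q=1.0, gamma=1.0, dt=1.0, steps=365)
print(history[0], round(history[-1], 2))
```

    With gamma = 1 the model reduces to exponential depletion; gamma > 1 produces long concentration tails at late time, which is one reason these simple analogs reproduce the behavior of full multiphase simulators reasonably well.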

  13. Variational Iterative Refinement Source Term Estimation Algorithm Assessment for Rural and Urban Environments

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.

    2016-12-01

    It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. The accurate estimation of the source characteristics are important because many times they are unknown and the Atmospheric Transport and Dispersion (AT&D) models rely heavily on these estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method, known as the Variational Iterative Refinement STE algorithm (VIRSA). VIRSA consists of a combination of modeling systems. These systems include an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information regarding the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM) and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios where we do not have the city infrastructure information readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF) and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03

  14. Microlensing of an extended source by a power-law mass distribution

    NASA Astrophysics Data System (ADS)

    Congdon, Arthur B.; Keeton, Charles R.; Osmer, S. J.

    2007-03-01

    Microlensing promises to be a powerful tool for studying distant galaxies and quasars. As the data and models improve, there are systematic effects that need to be explored. Quasar continuum and broad-line regions may respond differently to microlensing due to their different sizes; to understand this effect, we study microlensing of finite sources by a mass function of stars. We find that microlensing is insensitive to the slope of the mass function but does depend on the mass range. For negative-parity images, diluting the stellar population with dark matter increases the magnification dispersion for small sources and decreases it for large sources. This implies that the quasar continuum and broad-line regions may experience very different microlensing in negative-parity lensed images. We confirm earlier conclusions that the surface brightness profile and geometry of the source have little effect on microlensing. Finally, we consider non-circular sources. We show that elliptical sources that are aligned with the direction of shear have larger magnification dispersions than sources with perpendicular alignment, an effect that becomes more prominent as the ellipticity increases. Elongated sources can lead to more rapid variability than circular sources, which raises the prospect of using microlensing to probe source shape.

  15. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While a conventional acoustic analogy scrutinizes only the Reynolds stress components for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is then written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. The model is parameterized as a function of the mean stagnation temperature gradient in the jet and can be evaluated using commonly available RANS solvers. The resulting thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  16. Distribution and sources of the polycyclic aromatic hydrocarbons in the sediments of the Pearl River estuary, China.

    PubMed

    Zhang, Jian-Dong; Wang, You-Shao; Cheng, Hao; Jiang, Zhao-Yu; Sun, Cui-Ci; Wu, Mei-Lin

    2015-10-01

    The Pearl River delta, one of the most economically prosperous regions in China, has experienced significant contaminant inputs. However, the dynamics of pollutants in the Pearl River estuary and the adjacent coastal areas remain unclear. In this paper, the distribution and sources of polycyclic aromatic hydrocarbons (PAHs) were investigated in the surface sediments of the Pearl River estuary. Total PAH concentrations ranged from 126.08 to 3828.58 ng/g with a mean value of 563.52 ng/g, and the highest concentrations were observed in Guangzhou channel. Among the U.S. Environmental Protection Agency's 16 priority PAHs, those with 3-4 rings exhibited relatively higher levels. A positive relationship was found between PAHs and total organic carbon. The source analysis further showed that the major sources of PAHs in the Pearl River estuary were pyrolytic inputs, reflecting a mixed energy structure of wood, coal and petroleum combustion. In summary, although PAHs in Lingding Bay and the adjacent coastal areas of the Pearl River estuary exhibited a relatively low pollution level, the relatively high level of PAHs in Guangzhou channel warrants attention.

  17. ERP correlates of source memory: unitized source information increases familiarity-based retrieval.

    PubMed

    Diana, Rachel A; Van den Boom, Wijnand; Yonelinas, Andrew P; Ranganath, Charan

    2011-01-07

    Source memory tests typically require subjects to make decisions about the context in which an item was encoded and are thought to depend on recollection of details from the study episode. Although it is generally believed that familiarity does not contribute to source memory, recent behavioral studies have suggested that familiarity may also support source recognition when item and source information are integrated, or "unitized," during study (Diana, Yonelinas, and Ranganath, 2008). However, an alternative explanation of these behavioral findings is that unitization affects the manner in which recollection contributes to performance, rather than increasing familiarity-based source memory. To discriminate between these possibilities, we conducted an event-related potential (ERP) study testing the hypothesis that unitization increases the contribution of familiarity to source recognition. Participants studied associations between words and background colors using tasks that either encouraged or discouraged unitization. ERPs were recorded during a source memory test for background color. The results revealed two distinct neural correlates of source recognition: a frontally distributed positivity that was associated with familiarity-based source memory in the high-unitization condition only and a parietally distributed positivity that was associated with recollection-based source memory in both the high- and low-unitization conditions. The ERP and behavioral findings provide converging evidence for the idea that familiarity can contribute to source recognition, particularly when source information is encoded as an item detail. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. SU-G-201-17: Verification of Dose Distributions From High-Dose-Rate Brachytherapy Ir-192 Source Using a Multiple-Array-Diode-Detector (MapCheck2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harpool, K; De La Fuente Herman, T; Ahmad, S

    Purpose: To quantitatively investigate the accuracy of dose distributions for the Ir-192 high-dose-rate (HDR) brachytherapy source calculated by the brachytherapy planning system (BPS) and measured using a multiple-array diode detector (MapCheck2) in a heterogeneous medium. Methods: A two-dimensional diode-array detector system (MapCheck2) was scanned with a catheter, and the CT images were loaded into the Varian brachytherapy planning system, which uses the TG-43 formalism for dose calculation. Treatment plans were calculated for different combinations of one dwell position with varying irradiation times, and different dwell positions with a fixed irradiation time, with the source placed 12 mm from the diode-array plane. The calculated dose distributions were compared to the doses measured with MapCheck2, delivered by an Ir-192 source from a Nucletron Microselectron-V2 remote afterloader. The linearity of MapCheck2 was tested for a range of dwell times (2-600 seconds). The angular effect was tested with a 30-second irradiation delivered to the central diode and then moving the source away in increments of 10 mm. Results: Large differences were found between calculated and measured dose distributions, mainly due to the absence of heterogeneity corrections in the dose calculation and diode artifacts in the measurements. The dose differences attributable to heterogeneity ranged from 5%-12%, depending on the position of the source relative to the diodes in MapCheck2 and the different heterogeneities in the beam path. The linearity test of the diode detector showed 3.98%, 2.61%, and 2.27% over-response at short irradiation times of 2, 5, and 10 seconds, respectively, and agreement within 2% for 20 to 600 seconds (p-value=0.05), which depends strongly on MapCheck2 noise. The angular dependency was more pronounced at acute angles, ranging up to 34% at 5.7 degrees. Conclusion: Large deviations between measured and calculated dose distributions for HDR-brachytherapy with Ir-192 may
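    The 5%-12% heterogeneity discrepancies reported above are per-point percent differences between measured and planned dose, normalized to the planned value. A minimal sketch of that comparison (the dose values below are illustrative only; the record does not give the actual measurements or the analysis software's method):

    ```python
    import numpy as np

    def percent_dose_difference(measured, calculated):
        """Per-point percent difference between measured and planned dose,
        normalized to the calculated (planned) dose at each point."""
        measured = np.asarray(measured, dtype=float)
        calculated = np.asarray(calculated, dtype=float)
        return 100.0 * (measured - calculated) / calculated

    # Hypothetical planned vs. measured doses (cGy) at three diode positions
    planned = np.array([100.0, 80.0, 50.0])
    measured = np.array([105.0, 88.0, 56.0])
    diff = percent_dose_difference(measured, planned)  # 5%, 10%, 12%
    ```

    In an actual verification, each diode reading would be paired with the planned dose at that diode's coordinates in the TG-43 calculation grid.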

  19. Wood Residue Distribution Simulator (WORDS)

    Treesearch

    Douglas A. Eza; James W. McMinn; Peter E. Dress

    1984-01-01

    Successful development of woody biomass for energy will depend on the distribution of local supply and demand within subregions, rather than on the total inventory of residues. The Wood Residue Distribution Simulator (WORDS) attempts to find a least-cost allocation of residues from local sources of supply to local sources of demand, given the cost of the materials,...
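    The least-cost allocation that WORDS searches for is an instance of the classical transportation problem: ship residues from supply points to demand points at minimum total cost. As an illustration only (the actual WORDS formulation, cost data, and solution method are not given in this record), a greedy least-cost heuristic can be sketched as:

    ```python
    def least_cost_allocation(supply, demand, cost):
        """Greedy least-cost heuristic for the transportation problem:
        repeatedly ship as much as possible along the cheapest remaining
        supply-demand pair. Fast, but not guaranteed optimal."""
        supply = list(supply)
        demand = list(demand)
        shipments = {}  # (source index, sink index) -> quantity shipped
        # Visit candidate routes in order of increasing unit cost
        cells = sorted(
            (cost[i][j], i, j)
            for i in range(len(supply))
            for j in range(len(demand))
        )
        for _, i, j in cells:
            qty = min(supply[i], demand[j])
            if qty > 0:
                shipments[(i, j)] = qty
                supply[i] -= qty
                demand[j] -= qty
        return shipments

    supply = [30, 20]        # e.g. tons of residue at two logging sites
    demand = [25, 25]        # tons required at two energy facilities
    cost = [[1, 3], [2, 1]]  # unit transport cost from site i to facility j
    plan = least_cost_allocation(supply, demand, cost)
    ```

    A production allocation model would instead solve this as a linear program (guaranteeing optimality), but the greedy heuristic conveys the supply-to-demand matching idea.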

  20. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    PubMed

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
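    One of the evaluation metrics named above, the True Skill Statistic, is computed from a presence/absence confusion matrix as sensitivity plus specificity minus one. A minimal illustration with hypothetical counts (the study's actual confusion matrices are not given in this record):

    ```python
    def true_skill_statistic(tp, fn, tn, fp):
        """TSS = sensitivity + specificity - 1, ranging from -1 to +1,
        where tp/fn/tn/fp are true-positive, false-negative,
        true-negative, and false-positive counts."""
        sensitivity = tp / (tp + fn)  # fraction of presences predicted present
        specificity = tn / (tn + fp)  # fraction of absences predicted absent
        return sensitivity + specificity - 1.0

    # Hypothetical model evaluated on 50 presences and 50 absences
    tss = true_skill_statistic(tp=40, fn=10, tn=45, fp=5)  # 0.8 + 0.9 - 1
    ```

    Unlike raw accuracy, TSS is insensitive to the prevalence of presences in the evaluation data, which is why it is a common companion to AUC in SDM studies.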