Sample records for distributed source term

  1. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet, and is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested against numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared with a two-dimensional flat plate case that used a steady mass flow boundary condition to simulate the micro jet. The model was also compared with two three-dimensional flat plate cases that used a steady mass flow boundary condition to simulate the jet: one with a grid generated to capture the circular shape of the jet, and one without a dedicated jet grid. The case without the jet grid mimics the application of the source term. The source term model compared well with both three-dimensional cases. Comparisons of velocity distribution were made upstream and downstream of the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or multiple steady micro jets, and to conduct a preliminary investigation with minimal grid generation and computational time.
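
    A sketch of the idea this abstract describes: rather than gridding the jet, its mass flow and momentum are added as source terms to the discretized conservation equations at the jet location. The snippet below is a hypothetical 1-D illustration (all variable names and values are invented, and the flux update is elided); it is not the OVERFLOW implementation.

        import numpy as np

        # Hypothetical 1-D finite-volume arrays; a real solver would also
        # update these with convective and viscous fluxes each step.
        nx, dt = 100, 1e-4
        rho = np.ones(nx)            # density
        rho_u = np.zeros(nx)         # momentum density
        jet_cell = 50                # cell where the micro jet acts
        mdot = 0.5                   # jet mass source per unit volume (invented)
        v_jet = 2.0                  # jet injection velocity (invented)

        for step in range(1000):
            # ... flux update of rho and rho_u would go here ...
            rho[jet_cell] += dt * mdot            # mass added by the jet
            rho_u[jet_cell] += dt * mdot * v_jet  # momentum added by the jet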

  2. Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions

    DTIC Science & Technology

    2009-03-01

    Engineer's thesis by Georgios Tsivgoulis, March 2009, 111 pages. Surviving fragments of the report documentation page note that the method uses non-line-of-sight information. Subject terms: Wireless Sensor Network, Direction of Arrival, DOA, Random …

  3. A post-implementation evaluation of ceramic water filters distributed to tsunami-affected communities in Sri Lanka.

    PubMed

    Casanova, Lisa M; Walters, Adam; Naghawatte, Ajith; Sobsey, Mark D

    2012-06-01

    Sri Lanka was devastated by the 2004 Indian Ocean tsunami. During recovery, the Red Cross distributed approximately 12,000 free ceramic water filters. This cross-sectional study was an independent post-implementation assessment of 452 households that received filters, to determine the proportion still using filters, household characteristics associated with use, and quality of household drinking water. The proportion of continued users was high (76%). The most common household water sources were taps or shallow wells. The majority (82%) of users used filtered water for drinking only. Mean filter flow rate was 1.12 L/hr (0.80 L/hr for households with taps and 0.71 for those with wells). Water quality varied by source; households using tap water had source water of high microbial quality. Filters improved water quality, reducing Escherichia coli for households (largely well users) with high levels in their source water. Households were satisfied with filters and are potentially long-term users. To promote sustained use, recovery filter distribution efforts should try to identify households at greatest long-term risk, particularly those who have not moved to safer water sources during recovery. They should be joined with long-term commitment to building supply chains and local production capacity to ensure safe water access.

  4. Soundscapes

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. By Michael B. Porter and Laurel J. Henderson. Only fragments of the abstract survive: hindcasts, nowcasts, and forecasts of the time-evolving soundscape; an initial focus on commercial sound sources; and modeling of the soundscape due to noise by running an acoustic model for a grid of source positions over latitude and longitude.

  5. Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments

    DOE PAGES

    Liang, Taiee; Bauer, Johannes M.; Liu, James C.; ...

    2016-12-01

    A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10^17 and 10^22 W cm^-2. This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. Finally, the calculated bremsstrahlung dose yields are compared with radiation measurement data.

  6. Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Taiee; Bauer, Johannes M.; Liu, James C.

    A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10^17 and 10^22 W cm^-2. This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. Finally, the calculated bremsstrahlung dose yields are compared with radiation measurement data.

  7. Systematically biological prioritizing remediation sites based on datasets of biological investigations and heavy metals in soil

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Chih; Lin, Yu-Pin; Anthony, Johnathen

    2015-04-01

    Heavy metal pollution has adverse effects not only on the focal invertebrate species of this study (e.g., reduced pupa weight and increased larval mortality) but also on the higher-trophic-level organisms that feed on them, directly or indirectly, through biomagnification. Despite this, few studies regarding remediation prioritization take species distributions or biological conservation priorities into consideration. This study develops a novel approach for delineating sites that are both contaminated by any of 5 readily bioaccumulated heavy metal soil contaminants and of high ecological importance for the highly mobile, low-trophic-level focal species. The conservation priority of each site was based on the projected distributions of 6 moth species simulated via the presence-only maximum entropy species distribution model, followed by the application of a systematic conservation tool. To increase the number of available samples, we also integrated crowd-sourced data with professionally collected data via a novel optimization procedure based on a simulated annealing algorithm. This integration is important because, while crowd-sourced data can drastically increase the number of samples available to ecologists, their quality and reliability can be called into question, adding another source of uncertainty in projecting species distributions. The optimization method screens crowd-sourced data in terms of the environmental variables that correspond to professionally collected data. The species distribution data were derived from two sources: the EnjoyMoths project in Taiwan (crowd-sourced data) and Global Biodiversity Information Facility (GBIF) field data (professional data). The distributions of heavy metal concentrations were generated via 1000 iterations of a geostatistical co-simulation approach, and the uncertainties in the heavy metal distributions were quantified from the overall consistency between realizations. Finally, Information-Gap Decision Theory (IGDT) was applied to rank the remediation priorities of contaminated sites in terms of both the spatial consensus of multiple heavy metal realizations and the priority of specific conservation areas. Our results show that the crowd-sourced optimization algorithm developed in this study is effective at selecting suitable records from crowd-sourced data. Using this technique, the available samples increased to totals of 96, 162, 72, 62, 69, and 62, i.e., 2.6, 1.6, 2.5, 1.6, 1.2, and 1.8 times the numbers originally available through the GBIF professionally assembled database. Additionally, for all species considered, models based on the combination of both data sources outperformed, in terms of test-AUC values, models based on a single data source. Furthermore, the additional optimization-selected data lowered the overall variability, and therefore uncertainty, of model outputs. Based on the projected species distributions, around 30% of the species hotspot areas were also identified as contaminated. The decision-making tool, IGDT, successfully yielded remediation plans in terms of specific ecological value requirements, false positive tolerance rates for contaminated areas, and expected decision robustness. The proposed approach can be applied both to identify high conservation priority sites contaminated by heavy metals, based on the combination of screened crowd-sourced and professionally collected data, and to make robust remediation decisions.

  8. Multilinear Computing and Multilinear Algebraic Geometry

    DTIC Science & Technology

    2016-08-10

    DISTRIBUTION A: Approved for public release. Subject terms: tensors, multilinearity, algebraic geometry, numerical computations, computational tractability.

  9. Steady-state solution of the semi-empirical diffusion equation for area sources [air pollution studies]

    NASA Technical Reports Server (NTRS)

    Lebedeff, S. A.; Hameed, S.

    1975-01-01

    The problem investigated can be solved exactly in a simple manner if the equations are written in terms of a similarity variable. The exact solution is used to explore two questions of interest in the modelling of urban air pollution: the distribution of surface concentration downwind of an area source, and the distribution of concentration with height.

  10. Analysis of an entrainment model of the jet in a crossflow

    NASA Technical Reports Server (NTRS)

    Chang, H. S.; Werner, J. E.

    1972-01-01

    A theoretical model has been proposed for the problem of a round jet in an incompressible crossflow. The method of matched asymptotic expansions has been applied to this problem. For the solution to the flow problem in the inner region, the re-entrant wake flow model was used, with the re-entrant flow representing the fluid entrained by the jet. Higher order corrections are obtained in terms of this basic solution. The perturbation terms in the outer region were found to be a line distribution of doublets and sources; the line distribution of sources represents the combined effect of the entrainment and the displacement.

  11. The Effects of Weather Patterns on the Spatio-Temporal Distribution of SO2 over East Asia as Seen from Satellite Measurements

    NASA Astrophysics Data System (ADS)

    Dunlap, L.; Li, C.; Dickerson, R. R.; Krotkov, N. A.

    2015-12-01

    Weather systems, particularly mid-latitude wave cyclones, have been known to play an important role in the short-term variation of near-surface air pollution. Ground measurements and model simulations have demonstrated that stagnant air and minimal precipitation associated with high pressure systems are conducive to pollutant accumulation. With the passage of a cold front, built-up pollution is transported downwind of the emission sources or washed out by precipitation. This concept is important when studying long-term changes in the spatio-temporal distribution of pollution, but it has not been studied in detail from space. In this study, we focus on East Asia (especially industrialized eastern China), where numerous large power plants and other point sources as well as area sources emit large amounts of SO2, an important gaseous pollutant and a precursor of aerosols. Using data from the Aura Ozone Monitoring Instrument (OMI), we show that such weather-driven distributions can indeed be discerned from satellite data by utilizing probability distribution functions (PDFs) of SO2 column content. These PDFs are multimodal and give insight into the background pollution level at a given location and the contribution from local and upwind emission sources. From these PDFs it is possible to determine the frequency with which a given region has SO2 loading that exceeds the background amount. By comparing OMI-observed long-term change in this frequency with meteorological data, we can gain insight into the effects of climate change (e.g., the weakening of the Asian monsoon) on regional air quality. Such insight allows for better interpretation of satellite measurements as well as better prediction of future pollution distribution as a changing climate gives way to changing weather patterns.
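
    As a rough illustration of the multimodal-PDF idea above, a two-component mixture fit to column amounts at one location separates a background mode from a polluted mode, and the weight of the polluted component approximates the frequency of above-background loading. The data below are synthetic stand-ins, not OMI retrievals.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic SO2 column amounts (Dobson units) at one grid cell.
        rng = np.random.default_rng(2)
        so2 = np.concatenate([rng.normal(0.2, 0.1, 800),   # background mode
                              rng.normal(1.5, 0.5, 200)])  # polluted mode

        gmm = GaussianMixture(n_components=2, random_state=0).fit(so2.reshape(-1, 1))
        exceed_freq = gmm.weights_[np.argmax(gmm.means_)]
        print(f"frequency of above-background SO2 loading ~ {exceed_freq:.2f}")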

  12. Time-frequency approach to underdetermined blind source separation.

    PubMed

    Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong

    2012-02-01

    This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, in which the negative values of the auto WVD of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be determined exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. Further discussion of the extraction of auto-term TF points is given, and finally numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with existing ones.
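
    For context, the TF representation underlying this method can be computed with the standard discrete Wigner-Ville construction sketched below; this is a generic textbook version, not the authors' separation algorithm.

        import numpy as np

        def wigner_ville(x):
            """Discrete Wigner-Ville distribution of a 1-D complex signal.

            Returns an (n_freq, n_time) array: for each time t, the FFT over
            lag tau of the instantaneous autocorrelation x(t+tau)*conj(x(t-tau)).
            """
            x = np.asarray(x, dtype=complex)
            n = len(x)
            wvd = np.empty((n, n))
            for t in range(n):
                taumax = min(t, n - 1 - t)
                tau = np.arange(-taumax, taumax + 1)
                kernel = np.zeros(n, dtype=complex)
                kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
                wvd[:, t] = np.fft.fft(kernel).real
            return wvd

        # Example: a linear chirp concentrates along its instantaneous frequency.
        t = np.arange(256)
        chirp = np.exp(1j * 2 * np.pi * (0.1 + 0.001 * t) * t)
        W = wigner_ville(chirp)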

  13. Neighborhood Science Stories: Bridging Science Standards and Urban Students' Lives

    ERIC Educational Resources Information Center

    Burke, Christopher

    2007-01-01

    Shelter, distribution of resources, adaptation, and food sources are all key topics in teaching fifth-grade students about ecosystems. These terms and ideas are often presented in value-neutral terms in the standard science curriculum, yet they have radically different connotations in different communities. In this paper students' fictional narrative…

  14. An adaptive grid scheme using the boundary element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munipalli, R.; Anderson, D.A.

    1996-09-01

    A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.

  15. Comparing the contributions of ionospheric outflow and high-altitude production to O+ loss at Mars

    NASA Astrophysics Data System (ADS)

    Liemohn, Michael; Curry, Shannon; Fang, Xiaohua; Johnson, Blake; Fraenz, Markus; Ma, Yingjuan

    2013-04-01

    The Mars total O+ escape rate is highly dependent on both the ionospheric and high-altitude source terms. Because of their different source locations, they appear in velocity space distributions as distinct populations. The Mars Test Particle (MTP) model is used (with background parameters from the BATS-R-US magnetohydrodynamic code) to simulate the transport of ions in the near-Mars space environment. Because it is a collisionless model, the MTP's inner boundary is placed at 300 km altitude for this study. The MHD values at this altitude are used to define an ionospheric outflow source of ions for the MTP. The resulting loss distributions (in both real and velocity space) from this ionospheric source term are compared against those from high-altitude ionization mechanisms, in particular photoionization, charge exchange, and electron impact ionization, each of which has its own (albeit overlapping) source region. In subsequent simulations, the MHD values defining the ionospheric outflow are systematically varied to parametrically explore possible ionospheric outflow scenarios. For the nominal MHD ionospheric outflow settings, this source contributes only 10% to the total O+ loss rate, nearly all via the central tail region. There is very little dependence of this percentage on the initial temperature, but a change in the initial density or bulk velocity directly alters this loss through the central tail. However, a density or bulk velocity increase of a factor of 10 makes the ionospheric outflow loss comparable in magnitude to the loss from the combined high-altitude sources. The spatial and velocity space distributions of escaping O+ are examined and compared for the various source terms, identifying features specific to each ion source mechanism. These results are applied to a specific Mars Express orbit and used to interpret high-altitude observations from the ion mass analyzer onboard MEX.

  16. Elemental composition and size distribution of particulates in Cleveland, Ohio

    NASA Technical Reports Server (NTRS)

    King, R. B.; Fordyce, J. S.; Neustadter, H. E.; Leibecki, H. F.

    1975-01-01

    Measurements were made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) was used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.

  17. Elemental composition and size distribution of particulates in Cleveland, Ohio

    NASA Technical Reports Server (NTRS)

    Leibecki, H. F.; King, R. B.; Fordyce, J. S.; Neustadter, H. E.

    1975-01-01

    Measurements have been made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) was used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.

  18. Impact of routine episodic emissions on the expected frequency distribution of emissions from oil and gas production sources.

    NASA Astrophysics Data System (ADS)

    Smith, N.; Blewitt, D.; Hebert, L. B.

    2015-12-01

    In coordination with oil and gas operators, we developed a high resolution (< 1 min) simulation of temporal variability in well-pad oil and gas emissions over a year. We include routine emissions from condensate tanks, dehydrators, pneumatic devices, fugitive leaks and liquids unloading. We explore the variability in natural gas emissions from these individual well-pad sources, and find that routine short-term episodic emissions such as tank flashing and liquids unloading result in the appearance of a skewed, or 'fat-tail', distribution of emissions from an individual well-pad over time. Additionally, we explore the expected variability in emissions from multiple wells with different raw gas composition, gas/liquids production volumes and control equipment. Differences in well-level composition, production volume and control equipment translate into differences in well-level emissions, leading to a fat-tail distribution of emissions in the absence of operational upsets. Our results have several implications for recent studies focusing on emissions from oil and gas sources. The time scale of emission estimates is important and has policy implications. Fat-tail distributions may not be entirely driven by avoidable mechanical failures, and are expected to occur under routine operational conditions from short-duration emissions (e.g., tank flashing, liquids unloading). An understanding of the expected distribution of emissions for a particular population of wells is necessary to evaluate whether the observed distribution is more skewed than expected. Temporal variability in well-pad emissions makes comparisons to annual average emission inventories difficult and may complicate the interpretation of long-term ambient fenceline monitoring data. Sophisticated change detection algorithms will be necessary to identify when true operational upsets occur versus routine short-term emissions.
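
    A toy Monte Carlo along the lines described above (a steady fugitive baseline plus rare, short, high-rate episodic events) reproduces a skewed, fat-tailed emission distribution; every rate and probability below is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        hours = 8760                               # one year of hourly emissions
        emissions = np.full(hours, 0.1)            # routine fugitive baseline, kg/h
        episodic = rng.random(hours) < 0.005       # rare events, ~0.5% of hours
        # Episodic releases (tank flashing, liquids unloading), lognormal rates.
        emissions[episodic] += rng.lognormal(mean=3.0, sigma=1.0, size=episodic.sum())

        print(f"mean: {emissions.mean():.2f} kg/h")
        print(f"99th percentile: {np.percentile(emissions, 99):.1f} kg/h")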

  19. An Ultradeep Chandra Catalog of X-Ray Point Sources in the Galactic Center Star Cluster

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenlin; Li, Zhiyuan; Morris, Mark R.

    2018-04-01

    We present an updated catalog of X-ray point sources in the inner 500″ (∼20 pc) of the Galactic center (GC), where the nuclear star cluster (NSC) stands, based on a total of ∼4.5 Ms of Chandra observations taken from 1999 September to 2013 April. This ultradeep data set offers unprecedented sensitivity for detecting X-ray sources in the GC, down to an intrinsic 2–10 keV luminosity of 1.0 × 10^31 erg s^-1. A total of 3619 sources are detected in the 2–8 keV band, among which ∼3500 are probable GC sources and ∼1300 are new identifications. The GC sources collectively account for ∼20% of the total 2–8 keV flux from the inner 250″ region where detection sensitivity is the greatest. Taking advantage of this unprecedented sample of faint X-ray sources that primarily traces the old stellar populations in the NSC, we revisit global source properties, including long-term variability, cumulative spectra, luminosity function, and spatial distribution. Based on the equivalent width and relative strength of the iron lines, we suggest that in addition to the arguably predominant population of magnetic cataclysmic variables (CVs), nonmagnetic CVs contribute substantially to the detected sources, especially in the lower-luminosity group. On the other hand, the X-ray sources have a radial distribution closely following the stellar mass distribution in the NSC, but much flatter than that of the known X-ray transients, which are presumably low-mass X-ray binaries (LMXBs) caught in outburst. This, together with the very modest long-term variability of the detected sources, strongly suggests that quiescent LMXBs are a minor (less than a few percent) population.

  20. Ragweed (Ambrosia) pollen source inventory for Austria.

    PubMed

    Karrer, G; Skjøth, C A; Šikoparija, B; Smith, M; Berger, U; Essl, F

    2015-08-01

    This study improves the spatial coverage of top-down Ambrosia pollen source inventories for Europe by expanding the methodology to Austria, a country that is challenging in terms of topography and the distribution of ragweed plants. The inventory combines annual ragweed pollen counts from 19 pollen-monitoring stations in Austria (2004-2013), 657 geographical observations of Ambrosia plants, a Digital Elevation Model (DEM), local knowledge of ragweed ecology, and CORINE land cover information from the source area. The highest mean annual ragweed pollen concentrations were generally recorded in the east of Austria, where the highest densities of possible growth habitats for Ambrosia are situated. Approximately 99% of all observations of Ambrosia populations were below 745 m. The European infection level varies from 0.1% at Freistadt in northern Austria to 12.8% at Rosalia in eastern Austria. More top-down Ambrosia pollen source inventories are required for other parts of Europe.

  1. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  2. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package WavePy intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.

  3. Nutrients in waters on the inner shelf between Cape Charles and Cape Hatteras

    NASA Technical Reports Server (NTRS)

    Wong, G. T. F.; Todd, J. F.

    1981-01-01

    The distribution of nutrients in the shelf waters of the southern tip of the Middle Atlantic Bight was investigated. It is concluded that the outflow of freshwater from the Chesapeake Bay is a potential source of nutrients to the adjacent shelf waters. However, a quantitative estimation of its importance cannot yet be made because (1) there are other sources of nutrients to the study area and these sources cannot yet be quantified and (2) the concentrations of nutrients in the outflow from Chesapeake Bay exhibit significant short-term and long-term temporal variabilities.

  4. Research in atmospheric chemistry and transport

    NASA Technical Reports Server (NTRS)

    Yung, Y. L.

    1982-01-01

    The carbon monoxide cycle was studied by incorporating the known CO sources and sinks in a tracer model which used the winds generated by a general circulation model. The photochemical production and loss terms, which depended on OH radical concentrations, were calculated in an interactive fashion. Comparison of the computed global distribution and seasonal variations of CO with observations was used to yield constraints on the distribution and magnitude of the sources and sinks of CO, and the abundance of OH radicals in the troposphere.

  5. Quantum key distribution with passive decoy state selection

    NASA Astrophysics Data System (ADS)

    Mauerer, Wolfgang; Silberhorn, Christine

    2007-05-01

    We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.

  6. Open-source hardware for medical devices

    PubMed Central

    2016-01-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528

  7. Open-source hardware for medical devices.

    PubMed

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  8. Recent skyshine calculations at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degtyarenko, P.

    1997-12-01

    New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron has been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of skyshine dose maps.

  9. Distribution of tsunami interevent times

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2008-01-01

    The distribution of tsunami interevent times is analyzed using global and site-specific (Hilo, Hawaii) tsunami catalogs. An empirical probability density distribution is determined by binning the observed interevent times during a period in which the observation rate is approximately constant. The empirical distributions for both catalogs exhibit non-Poissonian behavior in which there is an abundance of short interevent times compared to an exponential distribution. Two types of statistical distributions are used to model this clustering behavior: (1) long-term clustering described by a universal scaling law, and (2) Omori law decay of aftershocks and triggered sources. The empirical and theoretical distributions all imply an increased hazard rate after a tsunami, followed by a gradual decrease with time approaching a constant hazard rate. Examination of tsunami sources suggests that many of the short interevent times are caused by triggered earthquakes, though the triggered events are not necessarily on the same fault.
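
    The clustering described here can be checked by fitting a gamma distribution to observed interevent times: a fitted shape parameter below 1 indicates an excess of short waiting times relative to the exponential (Poisson) case. A sketch with synthetic waiting times standing in for a catalog:

        import numpy as np
        from scipy import stats

        # Synthetic interevent times (days); real values would come from a
        # tsunami catalog during a period of constant observation rate.
        rng = np.random.default_rng(0)
        waits = rng.gamma(shape=0.65, scale=120.0, size=500)

        # Fix the location at zero and fit shape and scale.
        shape, _, scale = stats.gamma.fit(waits, floc=0)
        print(f"fitted gamma shape = {shape:.2f} (1.0 would be Poissonian)")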

  10. Analysis of jet-airfoil interaction noise sources by using a microphone array technique

    NASA Astrophysics Data System (ADS)

    Fleury, Vincent; Davy, Renaud

    2016-03-01

    The paper is concerned with the characterization of jet noise sources and jet-airfoil interaction sources by using microphone array data. The measurements were carried out in the anechoic open test section wind tunnel of Onera, Cepra19. The microphone array technique relies on the convected Lighthill's and Ffowcs-Williams and Hawkings' acoustic analogy equation. The cross-spectrum of the source term of the analogy equation is sought; it is defined as the optimal solution to a minimal error equation using the measured microphone cross-spectra as reference. This inverse problem is ill-posed, however, so a penalty term based on a localization operator is added to improve the recovery of jet noise sources. The analysis of isolated jet noise data in subsonic regime shows the contribution of the conventional mixing noise source in the low frequency range, as expected, and of uniformly distributed, uncorrelated noise sources in the jet flow at higher frequencies. In underexpanded supersonic regime, a shock-associated noise source is clearly identified, too. An additional source is detected in the vicinity of the nozzle exit in both supersonic and subsonic regimes. In the presence of the airfoil, the distribution of the noise sources is deeply modified. In particular, a strong noise source is localized on the flap. For Strouhal numbers higher than about 2 (based on the jet mixing velocity and diameter), a significant contribution from the shear layer near the flap is observed, too. Indications of acoustic reflections on the airfoil are also discerned.

  11. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident and, more recently, the Fukushima accident highlighted that the largest source of error in consequence assessment is the estimation of the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved efficient for assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models, and they have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous, and less well distributed within Japan, than the dose rate measurements. The gamma dose rate measurements, by contrast, were numerous, well distributed within Japan, and of high temporal frequency, efficiently documenting the evolution of the contamination. However, dose rate data are not as easy to use as air sampling measurements, and until now they were not used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor; they do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved efficient and reliable when applied to the Fukushima accident. The emissions of the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in the retrieved source term, except for the unit 3 explosion, for which no measurement was available. Comparisons between simulations of atmospheric dispersion and deposition based on the retrieved source term show good agreement with environmental observations. Moreover, an important outcome of this study is that the method proved to be perfectly suited to crisis management and should contribute to improving our response in case of a nuclear accident.

  12. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, the physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model in which the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to the uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6 hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with the method assuming unknown nuclide ratios is given to prove the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
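
    The computational core here is the ill-posed linear inversion y = Mx with a positivity constraint. The sketch below is a deliberately simplified stand-in: Tikhonov regularization takes the place of the paper's prior covariance built from nuclide ratios, and plain nonnegativity takes the place of the truncated Gaussian and Variational Bayes inference.

        import numpy as np
        from scipy.optimize import nnls

        def estimate_source_term(M, y, lam=1e-2):
            """Nonnegative, Tikhonov-regularized least-squares solve of y = M x.

            Minimizes ||M x - y||^2 + lam * ||x||^2 subject to x >= 0 by
            stacking the regularizer under the SRS matrix.
            """
            n = M.shape[1]
            M_aug = np.vstack([M, np.sqrt(lam) * np.eye(n)])
            y_aug = np.concatenate([y, np.zeros(n)])
            x, _ = nnls(M_aug, y_aug)
            return x

        # Tiny synthetic example: 20 observations of a 10-element source term.
        rng = np.random.default_rng(1)
        M = rng.random((20, 10))
        x_true = np.abs(rng.normal(size=10))
        y = M @ x_true + rng.normal(scale=0.01, size=20)
        x_est = estimate_source_term(M, y)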

  13. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on the assessment of the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
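
    The presumed-PDF closure evaluated here averages a flamelet source term over a β distribution matched to the first two moments of mixture fraction. A minimal sketch of that averaging step (the toy source term and values are invented; moment matching requires z_var < z_mean * (1 - z_mean)):

        import numpy as np
        from scipy import stats

        def beta_pdf_average(source_fn, z_mean, z_var, n=400):
            """Average source_fn(Z) over a presumed beta PDF of mixture fraction Z."""
            # Beta shape parameters from the mean and variance of Z.
            k = z_mean * (1.0 - z_mean) / z_var - 1.0
            a, b = z_mean * k, (1.0 - z_mean) * k
            z = np.linspace(1e-6, 1.0 - 1e-6, n)
            pdf = stats.beta.pdf(z, a, b)
            dz = z[1] - z[0]
            pdf /= np.sum(pdf) * dz            # renormalize on the open interval
            return np.sum(source_fn(z) * pdf) * dz

        # Toy source term peaking near a stoichiometric mixture fraction of 0.06.
        omega = lambda z: np.exp(-((z - 0.06) / 0.02) ** 2)
        print(beta_pdf_average(omega, z_mean=0.06, z_var=1e-4))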

  14. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from an exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that is the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicate that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.

  15. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

    Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios and in particular interoperability are severely limited. We describe a distributed and collaborative software analysis platform that allows for a seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible in our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-)automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.

  16. Updating source term and atmospheric dispersion simulations for the dose reconstruction in Fukushima Daiichi Nuclear Power Station Accident

    NASA Astrophysics Data System (ADS)

    Nagai, Haruyasu; Terada, Hiroaki; Tsuduki, Katsunori; Katata, Genki; Ota, Masakazu; Furuno, Akiko; Akari, Shusaku

    2017-09-01

    In order to assess the radiological dose to the public resulting from the Fukushima Daiichi Nuclear Power Station (FDNPS) accident in Japan, especially for the early phase of the accident when no measured data are available for that purpose, the spatial and temporal distribution of radioactive materials in the environment is reconstructed by computer simulations. In this study, the atmospheric dispersion simulation of radioactive materials is improved by refining the source term of radioactive materials discharged into the atmosphere and modifying the atmospheric transport, dispersion and deposition model (ATDM). A database of the spatiotemporal distribution of radioactive materials in the air and on the ground surface is then developed from the output of the simulation. This database is used in other studies for dose assessment by coupling it with the behavioral patterns of evacuees from the FDNPS accident. With the improvement of the ATDM simulation to use a new meteorological model and a sophisticated deposition scheme, the simulations reproduced the 137Cs and 131I deposition patterns well. To better reproduce the dispersion processes, the source term was further refined by optimizing it against the improved ATDM simulation using new monitoring data.

  17. The phonological-distributional coherence hypothesis: cross-linguistic evidence in language acquisition.

    PubMed

    Monaghan, Padraic; Christiansen, Morten H; Chater, Nick

    2007-12-01

    Several phonological and prosodic properties of words have been shown to relate to differences between grammatical categories. Distributional information about grammatical categories is also a rich source in the child's language environment. In this paper we hypothesise that such cues operate in tandem for developing the child's knowledge about grammatical categories. We term this the Phonological-Distributional Coherence Hypothesis (PDCH). We tested the PDCH by analysing phonological and distributional information in distinguishing open from closed class words and nouns from verbs in four languages: English, Dutch, French, and Japanese. We found an interaction between phonological and distributional cues for all four languages indicating that when distributional cues were less reliable, phonological cues were stronger. This provides converging evidence that language is structured such that language learning benefits from the integration of information about category from contextual and sound-based sources, and that the child's language environment is less impoverished than we might suspect.

  18. Development of Accommodation Models for Soldiers in Vehicles: Squad

    DTIC Science & Technology

    2014-09-01

    Distribution Statement A: Approved for public release; distribution is unlimited. Abstract fragment: data from a previous study … body armor and body-borne gear. Subject terms: anthropometry, posture, vehicle occupants, accommodation.

  19. Numerical modeling of heat transfer in the fuel oil storage tank at thermal power plant

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Svetlana A.

    2015-01-01

    Results are presented from mathematical modeling of convection of a viscous incompressible fluid in a rectangular cavity with conducting walls of finite thickness, with a local heat source at the bottom of the cavity, under conditions of convective heat exchange with the environment. A mathematical model is formulated in terms of the dimensionless variables stream function, vorticity, and temperature in the Cartesian coordinate system. The results show the distributions of hydrodynamic parameters and temperature obtained using different boundary conditions at the local heat source.

  20. Sensitivity of stream water age to climatic variability and land use change: implications for water quality

    NASA Astrophysics Data System (ADS)

    Soulsby, Chris; Birkel, Christian; Geris, Josie; Tetzlaff, Doerthe

    2016-04-01

    Advances in the use of hydrological tracers and their integration into rainfall-runoff models are facilitating improved quantification of stream water age distributions. This is of fundamental importance to understanding water quality dynamics over both short and long time scales, particularly as water quality parameters are often associated with water sources of markedly different ages. For example, legacy nitrate pollution may reflect deeper waters that have resided in catchments for decades, whilst more dynamic parameters from anthropogenic sources (e.g. P, pathogens etc.) are mobilised by very young (<1 day) near-surface water sources. It is increasingly recognised that the age distribution of stream water is non-stationary over both the short term (i.e. event dynamics) and the longer term (i.e. in relation to hydroclimatic variability). This provides a crucial context for interpreting water quality time series. Here, we will use longer-term (>5 year), high resolution (daily) isotope time series in modelling studies for different catchments to show how variable stream water age distributions can result from hydroclimatic variability, and the implications for understanding water quality. We will also use examples from catchments undergoing rapid urbanisation to show how the resulting age distributions of stream water change in a predictable way as a result of modified flow paths. The implications for the management of water quality in urban catchments will be discussed.

  1. Source environment feature related phylogenetic distribution pattern of anoxygenic photosynthetic bacteria as revealed by pufM analysis.

    PubMed

    Zeng, Yonghui; Jiao, Nianzhi

    2007-06-01

    Anoxygenic photosynthesis, performed primarily by anoxygenic photosynthetic bacteria (APB), is thought to have arisen on Earth more than 3 billion years ago. The long-established APB are distributed in almost every corner that light can reach. However, the relationship between APB phylogeny and source environments has been largely unexplored. Here we retrieved the pufM sequences and related source information of 89 pufM-containing species from the public database. Phylogenetic analysis revealed that horizontal gene transfer (HGT) most likely occurred within 11 out of a total of 21 pufM subgroups, not only among species within the same class but also among species of different phyla or subphyla. A clear phylogenetic distribution pattern related to source environment features was observed, with species from oxic habitats and those from anoxic habitats clustering into independent subgroups, respectively. HGT among ancient APB and subsequent long-term evolution and adaptation to separated niches may have contributed to the coupling of environment and pufM phylogeny.

  2. Size distribution, directional source contributions and pollution status of PM from Chengdu, China during a long-term sampling campaign.

    PubMed

    Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G

    2017-06-01

    Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu, China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method: the three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the most to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies.
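
    Receptor models such as PMF factor a samples-by-species concentration matrix into nonnegative source contributions and source profiles. As a rough, unweighted stand-in for PMF2 (true PMF also weights residuals by measurement uncertainty), scikit-learn's NMF illustrates the decomposition on invented data:

        import numpy as np
        from sklearn.decomposition import NMF

        # Invented samples-by-species PM concentration matrix; a real study
        # would use measured chemical speciation data.
        rng = np.random.default_rng(4)
        X = rng.random((200, 15))

        # X ~ G @ F: G holds per-sample source contributions, F the source
        # chemical profiles, for an assumed number of source categories.
        model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
        G = model.fit_transform(X)
        F = model.components_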

  3. Consistent description of kinetic equation with triangle anomaly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pu Shi; Gao Jianhua; Wang Qun

    2011-05-01

    We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of the distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in one-charge and two-charge cases by solving the constraining equations.

  4. The Funding of Long-Term Care in Canada: What Do We Know, What Should We Know?

    PubMed

    Grignon, Michel; Spencer, Byron G

    2018-06-01

    Long-term care is a growing component of health care spending but how much is spent or who bears the cost is uncertain, and the measures vary depending on the source used. We drew on regularly published series and ad hoc publications to compile preferred estimates of the share of long-term care spending in total health care spending, the private share of long-term care spending, and the share of residential care within long-term care. For each series, we compared estimates obtainable from published sources (CIHI [Canadian Institute for Health Information] and OECD [Organization for Economic Cooperation and Development]) with our preferred estimates. We conclude that using published series without adjustment would lead to spurious conclusions on the level and evolution of spending on long-term care in Canada as well as on the distribution of costs between private and public funders and between residential and home care.

  5. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
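
    The analysis step of such a method can be illustrated with a toy augmented-state ensemble Kalman filter, where uncertain source parameters are appended to the concentration state so that monitoring data update both together. Everything below (observation operator, error levels, dimensions) is an invented stand-in, not the paper's configuration:

      # Sketch of one EnKF analysis step with an augmented state: the state
      # vector holds concentrations plus uncertain source parameters (e.g.,
      # release rate, plume rise height). H and all numbers are assumptions.
      import numpy as np

      rng = np.random.default_rng(2)
      n_conc, n_par, n_ens, n_obs = 50, 2, 100, 8
      n_state = n_conc + n_par

      ens = rng.normal(1.0, 0.3, (n_state, n_ens))      # forecast ensemble
      H = np.zeros((n_obs, n_state))
      H[np.arange(n_obs), rng.choice(n_conc, n_obs, replace=False)] = 1.0  # monitors
      R = 0.05**2 * np.eye(n_obs)                       # observation error cov.
      y = rng.normal(1.2, 0.05, n_obs)                  # monitoring data

      X = ens - ens.mean(axis=1, keepdims=True)
      P = X @ X.T / (n_ens - 1)                         # sample covariance
      K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain

      # Perturbed-observation update: each member sees noisy data
      for i in range(n_ens):
          yi = y + rng.multivariate_normal(np.zeros(n_obs), R)
          ens[:, i] += K @ (yi - H @ ens[:, i])

      print("posterior source-parameter mean:", ens[n_conc:].mean(axis=1))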

  6. [Spatio-temporal characteristics and source identification of water pollutants in Wenruitang River watershed].

    PubMed

    Ma, Xiao-xue; Wang, La-chun; Liao, Ling-ling

    2015-01-01

    Identifying the temporal and spatial distribution and sources of water pollutants is of great significance for efficient water quality management and pollution control in the Wenruitang River watershed, China. A total of twelve water quality parameters, including temperature, pH, dissolved oxygen (DO), total nitrogen (TN), ammonia nitrogen (NH4+-N), electrical conductivity (EC), turbidity (Turb), nitrite-N (NO2-), nitrate-N (NO3-), phosphate-P (PO4(3-)), total organic carbon (TOC) and silicate (SiO3(2-)), were analyzed from September 2008 to October 2009. Geographic information system (GIS) and principal component analysis (PCA) were used to determine the spatial distribution and to apportion the sources of pollutants. The results demonstrated that TN, NH4+-N and PO4(3-) were the main pollutants during the flow, wet and dry periods, respectively, which was mainly caused by urban point sources and agricultural and rural non-point sources. In spatial terms, the order of pollution was tertiary river > secondary river > primary river, while the water quality was worse in city zones than in the suburban and wetland zones regardless of the river classification. In temporal terms, the order of pollution was dry period > wet period > flow period. Population density, land use type and water transfer affected the water quality in the Wenruitang River.

  7. Using natural archives to track sources and long-term trends of pollution: an introduction

    USGS Publications Warehouse

    Blais, Jules; Rosen, Michael R.; Smol, John

    2015-01-01

    This book explores the myriad ways that environmental archives can be used to study the distribution and long-term trajectories of contaminants. The volume first focuses on reviews that examine the integrity of the historic record, including factors related to hydrology, post-depositional diffusion, and mixing processes. This is followed by a series of chapters dealing with the diverse archives available for long-term studies of environmental pollution.

  8. A model for jet-noise analysis using pressure-gradient correlations on an imaginary cone

    NASA Technical Reports Server (NTRS)

    Norum, T. D.

    1974-01-01

    The technique for determining the near and far acoustic field of a jet through measurements of pressure-gradient correlations on an imaginary conical surface surrounding the jet is discussed. The necessary analytical developments are presented, and their feasibility is checked by using a point source as the sound generator. The distribution of the apparent sources on the cone, equivalent to the point source, is determined in terms of the pressure-gradient correlations.

  9. Characterization and Remediation of Contaminated Sites: Modeling, Measurement and Assessment

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.

    2008-05-01

    The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter, at field sites contaminated with dense non-aqueous phase liquids or DNAPLs in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation was reduced proportionally with the mean (i.e., the coefficient of variation of the flux distribution is constant with time). Similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distribution. The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.

  10. Analysis and Application of Microgrids

    NASA Astrophysics Data System (ADS)

    Yue, Lu

    New trends of generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. This new type of power generation is called Distributed Generation (DG), and the energy sources utilized by Distributed Generation are termed Distributed Energy Sources (DERs). With DGs embedded in them, distribution networks evolve from passive to active networks enabling bidirectional power flows. By further incorporating flexible and intelligent controllers and employing future technologies, active distribution networks will evolve into Microgrids. A Microgrid is a small-scale, low-voltage Combined Heat and Power (CHP) supply network designed to supply electrical and heat loads for a small community. To implement Microgrids further, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid has multiple DERs integrated and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, first, problems such as power system modelling, power flow solving and power system optimization are studied. Then, Distributed Generation and Microgrids are studied and reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method which minimizes the total transmission loss and generation cost of a Microgrid is proposed, together with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested with a 6-bus power system and a 9-bus power system.

  11. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced into the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence than the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.

  12. High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza R.; Nishikawa, Hiroaki

    2014-01-01

    In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, with typically less than five Newton iterations, was shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: 1) reformulation of the source terms in their divergence forms, and 2) correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and the second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. The numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing solutions and gradients at the design order of accuracy, with rapid convergence over each physical time step, typically in fewer than ten Newton iterations.

  13. Effect of source location and listener location on ILD cues in a reverberant room

    NASA Astrophysics Data System (ADS)

    Ihlefeld, Antje; Shinn-Cunningham, Barbara G.

    2004-05-01

    Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean of the ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD is not a reliable cue for determining source laterality in a reverberant room. However, by taking into account both the mean and variance, the distribution of high-frequency short-term ILDs provides some spatial information. This analysis suggests that, in order to use ILDs to judge source direction in reverberant space, listeners must accumulate information about how the short-term ILD varies over time. [Work supported by NIDCD and AFOSR.]
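
    The core computation described (short-term, third-octave-band ILDs) can be sketched as follows; the band centers, window length, and synthetic "ear" signals are illustrative assumptions:

      # Sketch: short-term ILD in third-octave bands from binaural signals.
      import numpy as np
      from scipy.signal import butter, sosfilt

      fs = 44100
      rng = np.random.default_rng(3)
      left = rng.normal(size=fs)             # stand-ins for the two ear signals
      right = 0.5 * left + 0.5 * rng.normal(size=fs)

      def third_octave_band(x, fc, fs):
          lo, hi = fc / 2**(1/6), fc * 2**(1/6)
          sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
          return sosfilt(sos, x)

      win = int(0.02 * fs)                   # 20-ms analysis windows
      for fc in (2000.0, 4000.0, 8000.0):    # high-frequency bands carry most ILD
          L = third_octave_band(left, fc, fs)
          R = third_octave_band(right, fc, fs)
          n = (len(L) // win) * win
          pL = L[:n].reshape(-1, win).var(axis=1)
          pR = R[:n].reshape(-1, win).var(axis=1)
          ild = 10 * np.log10(pL / pR)       # short-term ILD per window, in dB
          print(f"{fc:6.0f} Hz: mean {ild.mean():5.2f} dB, std {ild.std():4.2f} dB")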

  14. Frequent long-distance plant colonization in the changing Arctic.

    PubMed

    Alsos, Inger Greve; Eidesen, Pernille Bronken; Ehrich, Dorothee; Skrede, Inger; Westergaard, Kristine; Jacobsen, Gro Hilde; Landvik, Jon Y; Taberlet, Pierre; Brochmann, Christian

    2007-06-15

    The ability of species to track their ecological niche after climate change is a major source of uncertainty in predicting their future distribution. By analyzing DNA fingerprinting (amplified fragment-length polymorphism) of nine plant species, we show that long-distance colonization of a remote arctic archipelago, Svalbard, has occurred repeatedly and from several source regions. Propagules are likely carried by wind and drifting sea ice. The genetic effect of restricted colonization was strongly correlated with the temperature requirements of the species, indicating that establishment limits distribution more than dispersal. Thus, it may be appropriate to assume unlimited dispersal when predicting long-term range shifts in the Arctic.

  15. Discrimination of particulate matter emission sources using stochastic methods

    NASA Astrophysics Data System (ADS)

    Szczurek, Andrzej; Maciejewska, Monika; Wyłomańska, Agnieszka; Sikora, Grzegorz; Balcerek, Michał; Teuerle, Marek

    2016-12-01

    Particulate matter (PM) is one of the criteria pollutants that have been determined to be harmful to public health and the environment. For this reason the ability to recognize its emission sources is very important. There are a number of measurement methods which make it possible to characterize PM in terms of concentration, particle size distribution, and chemical composition. All this information is useful for establishing a link between the dust found in the air, its emission sources, and its influence on humans and the environment. However, the methods are typically quite sophisticated and not applicable outside laboratories. In this work, we considered a PM emission source discrimination method based on continuous measurements of PM concentration with a relatively cheap instrument and stochastic analysis of the obtained data. The stochastic analysis focuses on the temporal variation of PM concentration and involves two steps: (1) recognition of the category of distribution for the data, i.e. stable or in the domain of attraction of a stable distribution, and (2) finding the best-matching distribution among the Gaussian, stable and normal-inverse Gaussian (NIG) families. We examined six PM emission sources, associated with material processing in an industrial environment, namely machining and welding of aluminum, forged carbon steel and plastic with various tools. As shown by the obtained results, PM emission sources may be distinguished based on the statistical distribution of PM concentration variations. The major factors responsible for the differences detectable with our method were the type of material processing and the tool applied. In cases where different materials were processed with the same tool, the distinction of emission sources was difficult. For successful discrimination it was crucial to consider size-segregated mass fraction concentrations. In our opinion the presented approach is very promising and deserves further study and development.
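
    The second step of the analysis, choosing a best-matching distribution family for the concentration variations, might look like the following sketch, which fits Gaussian and normal-inverse Gaussian (NIG) models by maximum likelihood and compares them by AIC on synthetic heavy-tailed data (the data and the AIC criterion are stand-ins for the paper's actual procedure):

      # Sketch: compare candidate distributions for PM concentration increments.
      # scipy's norminvgauss implements the NIG family named in the abstract.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      increments = stats.t.rvs(df=3, size=2000, random_state=rng)  # heavy-tailed stand-in

      candidates = {
          "gaussian": stats.norm,
          "NIG": stats.norminvgauss,
      }
      for name, dist in candidates.items():
          params = dist.fit(increments)                 # maximum likelihood fit
          loglik = dist.logpdf(increments, *params).sum()
          aic = 2 * len(params) - 2 * loglik
          print(f"{name:9s} AIC = {aic:9.1f}")
      # The lower-AIC family would be taken as the better source signature.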

  16. Methods for assessing long-term mean pathogen count in drinking water and risk management implications.

    PubMed

    Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y

    2012-06-01

    Recently, pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated against nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed, such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work the methods, and data record length, required to assess long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean ±10%, though 50-100 samples produced an estimate within one log (45%) below. A simple correlated first-order model was shown to produce count series with a 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely monitored water quality indicators.
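
    The record-length question can be reproduced in miniature by Monte Carlo. The sketch below draws from a discrete Weibull distribution (constructed by flooring a continuous Weibull, a standard device; the parameters are illustrative, not the paper's fits) and estimates how often the sample mean of n observations lands within 10% of the long-term mean:

      # Sketch: effect of record length on estimating the mean of a highly
      # skewed count distribution. All parameter values are assumptions.
      import numpy as np

      rng = np.random.default_rng(5)
      shape, scale = 0.4, 0.05      # strong skew: mostly zeros, rare high counts

      def discrete_weibull(size):
          # floor of a continuous Weibull variate is discrete Weibull
          return np.floor(scale * rng.weibull(shape, size)).astype(int)

      true_mean = discrete_weibull(2_000_000).mean()   # brute-force reference

      for n in (50, 100, 500, 1000):
          reps = discrete_weibull((5000, n)).mean(axis=1)
          within = np.mean(np.abs(reps - true_mean) <= 0.10 * true_mean)
          print(f"n={n:5d}: P(sample mean within +/-10%) ~ {within:.2f}")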

  17. Design and characterization of electron beam focusing for X-ray generation in novel medical imaging architecture

    PubMed Central

    Bogdan Neculaes, V.; Zou, Yun; Zavodszky, Peter; Inzinna, Louis; Zhang, Xi; Conway, Kenneth; Caiafa, Antonio; Frutschy, Kristopher; Waters, William; De Man, Bruno

    2014-01-01

    A novel electron beam focusing scheme for medical X-ray sources is described in this paper. Most vacuum-based medical X-ray sources today employ a tungsten filament operated in a temperature-limited regime, with electrostatic focusing tabs for limited-range beam optics. This paper presents the electron beam optics designed for the first distributed X-ray source in the world for Computed Tomography (CT) applications. This distributed source includes 32 electron beamlets in a common vacuum chamber, with 32 circular dispenser cathodes operated in a space-charge-limited regime, where the initial circular beam is transformed into an elliptical beam before being collected at the anode. The electron beam optics designed and validated here are at the heart of the first Inverse Geometry CT system, with potential benefits in terms of improved image quality and dramatic X-ray dose reduction for the patient. PMID:24826066

  18. ON THE CONNECTION OF THE APPARENT PROPER MOTION AND THE VLBI STRUCTURE OF COMPACT RADIO SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moor, A.; Frey, S.; Lambert, S. B.

    2011-06-15

    Many of the compact extragalactic radio sources that are used as fiducial points to define the celestial reference frame are known to have proper motions detectable with long-term geodetic/astrometric very long baseline interferometry (VLBI) measurements. These changes can be as high as several hundred microarcseconds per year for certain objects. When imaged with VLBI at milliarcsecond (mas) angular resolution, these sources (radio-loud active galactic nuclei) typically show structures dominated by a compact, often unresolved 'core' and a one-sided 'jet'. The positional instability of compact radio sources is believed to be connected with changes in their brightness distribution structure. For the first time, we test this assumption in a statistical sense on a large sample rather than on only individual objects. We investigate a sample of 62 radio sources for which reliable long-term time series of astrometric positions as well as detailed 8 GHz VLBI brightness distribution models are available. We compare the characteristic direction of their extended jet structure and the direction of their apparent proper motion. We present our data and analysis method, and conclude that there is indeed a correlation between the two characteristic directions. However, there are cases where the ~1-10 mas scale VLBI jet directions are significantly misaligned with respect to the apparent proper motion direction.

  19. Laser induced heat source distribution in bio-tissues

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxia; Fan, Shifu; Zhao, Youquan

    2006-09-01

    During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated and incorporated into the source term of the heat transfer equation. Usually the solution of the light radiative transport equation is given for extreme conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or scattering-dominated media (diffusion approximation). In specific conditions, these solutions introduce different errors. The widely used Monte Carlo simulation (MCS) is more universal and exact, but has difficulty dealing with dynamic parameters and fast simulation, and its area partition pattern is limiting when applying FEM (finite element method) to solve the bio-heat transfer partial differential equation. Laser heat source plots of the above methods differ considerably from MCS. To solve this problem, by analyzing the different optical actions such as reflection, scattering and absorption on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient was replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Secondly, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than previous methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical baseline for related laser medicine experiments.
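
    Under the diffusion approximation, the depth profile of the heat source follows from the reduced scattering and effective attenuation coefficients mentioned above. A minimal 1-D sketch, with generic soft-tissue optical properties assumed for illustration:

      # Sketch: axial heat-source profile in tissue via the diffusion
      # approximation. Optical property values are generic assumptions.
      import numpy as np

      mu_a = 0.3          # absorption coefficient (1/cm)
      mu_s = 100.0        # scattering coefficient (1/cm)
      g = 0.9             # scattering anisotropy
      mu_s_red = mu_s * (1.0 - g)                        # reduced scattering (1/cm)
      mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_red))   # effective attenuation

      E0 = 1.0            # incident irradiance (W/cm^2), assumed
      z = np.linspace(0.0, 1.0, 201)                     # depth (cm)
      fluence = E0 * np.exp(-mu_eff * z)                 # 1-D diffusion-like decay
      heat_source = mu_a * fluence                       # volumetric source (W/cm^3)

      print(f"mu_eff = {mu_eff:.2f} 1/cm; penetration depth = {1/mu_eff:.3f} cm")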

  20. Toward an Integrated Executable Architecture and M&S Based Analysis for Counter Terrorism and Homeland Security

    DTIC Science & Technology

    2006-09-01

    Lavoie, D. Kurts, SYNTHETIC ENVIRONMENTS AT THE ENTREPRISE LEVEL: OVERVIEW OF A GOVERNMENT OF CANADA (GOC), ACADEMIA and INDUSTRY DISTRIBUTED...vehicle (UAV) focused to locate the radiological source, and by comparing the performance of these assets in terms of various capability based...framework to analyze homeland security capabilities • Illustrate how a rapidly configured distributed simulation involving academia, industry and

  1. Analysis of temporal decay of diffuse broadband sound fields in enclosures by decomposition in powers of an absorption parameter

    NASA Astrophysics Data System (ADS)

    Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben

    2005-09-01

    An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.
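
    Schematically, the expansion described above can be written as follows (the notation is ours, not the authors'; it is only meant to show the ordering of the terms in the absorption parameter):

      \begin{align*}
        s(\mathbf{x},t) &= \tfrac{1}{\alpha}\,s_{-1}(\mathbf{x},t) + s_0(\mathbf{x},t)
                           + \alpha\,s_1(\mathbf{x},t) + \cdots,\\
        \overline{p^2}(\mathbf{x}) &\sim
          \underbrace{\tfrac{1}{\alpha}\,\Phi_{-1}}_{\text{uniform, slowly decaying}}
          \;+\; \underbrace{\Phi_0(\mathbf{x})}_{\text{spatial variation}}
          \;+\; \mathcal{O}(\alpha).
      \end{align*}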

  2. Free-space quantum key distribution with a high generation rate potassium titanyl phosphate waveguide photon-pair source

    NASA Astrophysics Data System (ADS)

    Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip R.; Floyd, Bertram; Lind, Alexander J.; Cavin, John D.; Helmick, Spencer R.

    2016-09-01

    A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nm pump diode laser, fiber-coupled to a dual element waveguide within which a pair of 1064-nm photons are up-converted to a single 532-nm photon in the first stage. In the second stage, the 532-nm photon is down-converted to an entangled photon-pair at 800 nm and 1600 nm which are fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long-term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable Earth-space secure communications. Characterization and test results are presented. Details and preliminary results of a laboratory free-space QKD experiment with the B92 protocol are also presented.

  3. Household food waste separation behavior and the importance of convenience.

    PubMed

    Bernstad, Anna

    2014-07-01

    Two different strategies aiming at increasing household source-separation of food waste were assessed through a case study in a Swedish residential area: (a) use of written information, distributed as leaflets amongst households, and (b) installation of equipment for source-segregation of waste, with the aim of making food waste sorting in kitchens more convenient. Weighings of separately collected food waste before and after distribution of the written information suggest that it resulted in neither a significantly increased amount of separately collected food waste nor an increased source-separation ratio. After installation of sorting equipment in households, both the amount of separately collected food waste and the source-separation ratio increased vastly. Long-term monitoring shows that the results were long-lasting. The results emphasize the importance of convenience and of the infrastructure necessary for source-segregation of waste as important factors for household waste recycling, but also highlight the need to address these aspects where waste is generated, i.e. inside the household. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Free-Space Quantum Key Distribution with a High Generation Rate Potassium Titanyl Phosphate Waveguide Photon-Pair Source

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip; Floyd, Bertram M.; Lind, Alexander J.

    2016-01-01

    A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nanometer pump diode laser, fiber-coupled to a dual element waveguide within which a pair of 1064-nanometer photons are up-converted to a single 532-nanometer photon in the first stage. In the second stage, the 532-nanometer photon is down-converted to an entangled photon-pair at 800 nanometers and 1600 nanometers which are fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long-term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable Earth-space secure communications. Characterization and test results are presented. Details and preliminary results of a laboratory free-space QKD experiment with the B92 protocol are also presented.

  5. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n >= 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
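
    The setup is easy to verify by Monte Carlo: draw a uniformly distributed start time, integrate the decaying intensity over the window, and draw Poisson counts. Parameter values below are illustrative:

      # Monte Carlo check: Poisson photocounts from an exponentially decaying
      # pulse with a uniformly distributed sampling start time.
      import numpy as np

      rng = np.random.default_rng(6)
      I0, tau = 50.0, 1.0      # initial intensity (counts/s) and decay time (s)
      T = 0.5                  # counting window (s)
      t_max = 10.0 * tau       # support of the uniform start time

      t0 = rng.uniform(0.0, t_max, 100_000)
      # Integrated intensity over [t0, t0 + T] for I(t) = I0 * exp(-t / tau):
      W = I0 * tau * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))
      counts = rng.poisson(W)  # doubly stochastic (mixed) Poisson counts

      print(f"P(n=0) ~ {np.mean(counts == 0):.3f}")
      print(f"mean ~ {counts.mean():.3f}, variance ~ {counts.var():.3f}")
      # Variance exceeding the mean reflects the randomness of the intensity.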

  6. The Lung Surfactant System in Adult Respiratory Distress Syndrome.

    DTIC Science & Technology

    1979-12-01

    ...fugation. No significant differences in phospholipid distribution or phosphatidylcholine (PC) fatty acid composition can be detected in...the fact that amniotic fluid from uncomplicated term pregnancies could be readily used as the source for normal surfactant. During the course of our

  7. Analysis of Unmanned Systems in Military Logistics

    DTIC Science & Technology

    2016-12-01

    opportunities to employ unmanned systems to support logistic operations. 14. SUBJECT TERMS: unmanned systems, robotics, UAVs, UGVs, USVs, UUVs, military... Industrial Robots at Warehouses / Distribution Centers... 2. Unmanned... Autonomous Robot Gun Turret. Source: Blain (2010)... Figure 4. Robot Sentries for Base Patrol

  8. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential for obtaining the structural characteristics of such objects. To overcome the problem that rigid structures appear to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in a Wigner-Ville Distribution (WVD) for single auto-terms and cross-terms can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation was used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, to analyze the feasibility of the new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
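
    A simplified version of the processing chain, a discrete Wigner-Ville distribution followed by grayscale morphological opening to suppress the oscillatory cross-terms, is sketched below; the signal, sizes and structuring element are illustrative choices rather than the paper's settings:

      # Sketch: simplified discrete WVD of an analytic signal, then grayscale
      # morphological opening to suppress oscillatory cross-terms.
      import numpy as np
      from scipy.signal import hilbert
      from scipy.ndimage import grey_opening

      fs, n = 1000, 512
      t = np.arange(n) / fs
      x = np.cos(2 * np.pi * 100 * t) + np.cos(2 * np.pi * 300 * t)  # two components
      z = hilbert(x)                       # analytic signal

      wvd = np.zeros((n, n))
      for i in range(n):
          m = min(i, n - 1 - i)            # admissible lag range at time i
          lags = np.arange(-m, m + 1)
          kernel = z[i + lags] * np.conj(z[i - lags])
          row = np.zeros(n, dtype=complex)
          row[lags % n] = kernel           # place lags for the FFT over the lag axis
          wvd[i] = np.fft.fft(row).real    # real by Hermitian symmetry in lag

      # Opening along time removes sign-alternating cross-term oscillations
      # while the persistently positive auto-term ridges survive.
      cleaned = grey_opening(wvd, size=(9, 3))
      print("positive energy kept:", cleaned.clip(0).sum() / wvd.clip(0).sum())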

  9. Experimental and numerical study of impact of voltage fluctuate, flicker and power factor wave electric generator to local distribution

    NASA Astrophysics Data System (ADS)

    Hadi, Nik Azran Ab; Rashid, Wan Norhisyam Abd; Hashim, Nik Mohd Zarifie; Mohamad, Najmiah Radiah; Kadmin, Ahmad Fauzan

    2017-10-01

    Electricity is the most widely used form of energy in the world. Engineers and technologists have worked together to develop new low-cost, carbon-free technologies, as carbon emissions are a major concern due to global warming. Renewable energy sources such as hydro, wind and wave are becoming widespread as a means of reducing carbon emissions; on the other hand, this effort requires several novel methods, techniques and technologies compared to coal-based power. The power quality of renewable sources needs in-depth research and continuing study to improve renewable energy technologies. The aim of this project is to investigate the impact of a renewable wave electric generator on its local distribution system. The power farm was designed to connect to the local distribution system, and it was investigated and analyzed to make sure that the energy supplied to customers is clean. MATLAB tools are used for the overall analysis. At the end of the project, a summary identifying various sources of voltage fluctuation is presented in terms of voltage flicker. A suggested analysis of the impact of wave power generation on its local distribution system is also presented for the development of wave generator farms.

  10. Small-scale geochemical cycles and the distribution of uranium in central and north Florida organic deposits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, P.A.

    1993-03-01

    The global geochemical cycle for an element tracks its path from its various sources to its sinks via processes of weathering and transportation. The cycle may then be quantified in a necessarily approximate manner. The geochemical cycle (thus quantified) reveals constraints (known and unknown) on an element's behavior imposed by the various processes which act on it. In the context of a global geochemical cycle, a continent becomes essentially a source term. If, however, an element's behavior is examined in a local or regional context, sources and their related sinks may be identified. This suggests that small-scale geochemical cycles may be superimposed on global geochemical cycles. Definition of such sub-cycles may clarify the distribution of an element in the earth's near-surface environment. In Florida, phosphate minerals of the Hawthorn Group act as a widely distributed source of uranium. Uranium is transported by surface- and ground-waters. Florida is the site of extensive wetlands and peatlands. The organic matter associated with these deposits adsorbs uranium and may act as a local sink depending on its hydrogeologic setting. This work examines the role of organic matter in the distribution of uranium in the surface and shallow subsurface environments of central and north Florida.

  11. Different Water Use Strategies of Juvenile and Adult Caragana intermedia Plantations in the Gonghe Basin, Tibet Plateau

    PubMed Central

    Jia, Zhiqing; Zhu, Yajuan; Liu, Liying

    2012-01-01

    Background In a semi-arid ecosystem, water is one of the most important factors that affect vegetation dynamics, such as shrub plantation. A water use strategy, including the main water source that a plant species utilizes and water use efficiency (WUE), plays an important role in plant survival and growth. The water use strategy of a shrub is one of the key factors in the evaluation of stability and sustainability of a plantation. Methodology/Principal Findings Caragana intermedia is a dominant shrub of sand-binding plantations on sand dunes in the Gonghe Basin in northeastern Tibet Plateau. Understanding the water use strategy of a shrub plantation can be used to evaluate its sustainability and long-term stability. We hypothesized that C. intermedia uses mainly deep soil water and its WUE increases with plantation age. Stable isotopes of hydrogen and oxygen were used to determine the main water source and leaf carbon isotope discrimination was used to estimate long-term WUE. The root system was investigated to determine the depth of the main distribution. The results showed that a 5-year-old C. intermedia plantation used soil water mainly at a depth of 0–30 cm, which was coincident with the distribution of its fine roots. However, 9- or 25-year-old C. intermedia plantations used mainly 0–50 cm soil depth water and the fine root system was distributed primarily at soil depths of 0–50 cm and 0–60 cm, respectively. These sources of soil water are recharged directly by rainfall. Moreover, the long-term WUE of adult plantations was greater than that of juvenile plantations. Conclusions The C. intermedia plantation can change its water use strategy over time as an adaptation to a semi-arid environment, including increasing the depth of soil water used for root growth, and increasing long-term WUE. PMID:23029303

  12. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
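
    The bookkeeping behind such a volumetric source term can be sketched in a few lines: compute the jet's mass and momentum flows and deposit them into the coarse cells covering the hole's footprint so that the cell-integrated sources recover the jet totals. The grid, geometry and jet values below are invented for illustration:

      # Sketch of the volumetric-source idea: deposit a film-cooling jet's mass
      # and momentum into coarse-grid cells near the wall instead of resolving
      # the hole. Geometry, grid, and jet values are assumptions.
      import numpy as np

      d = 1.0e-3                     # hole diameter (m)
      rho, U_jet = 1.2, 30.0         # jet density (kg/m^3) and exit velocity (m/s)
      A_hole = np.pi * d**2 / 4
      mdot = rho * U_jet * A_hole    # jet mass flow (kg/s)
      angle = np.deg2rad(35.0)       # 35-degree round hole, as in the study

      nx, ny, nz = 40, 20, 30        # coarse grid
      dx = dy = dz = d               # spacing on the order of the hole size
      cell_vol = dx * dy * dz

      # Mark the wall-adjacent cell(s) over the hole footprint (here a single
      # coarse cell; ny index 0 is the first off-wall layer).
      mask = np.zeros((nx, ny, nz), dtype=bool)
      mask[10, 0, 15] = True
      n_cells = mask.sum()

      S_mass = np.where(mask, mdot / (n_cells * cell_vol), 0.0)   # kg/(m^3 s)
      S_mom_x = S_mass * U_jet * np.cos(angle)   # streamwise momentum source
      S_mom_y = S_mass * U_jet * np.sin(angle)   # wall-normal momentum source

      print(f"total injected mass flow: {S_mass.sum() * cell_vol:.3e} kg/s (= mdot)")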

  13. Inner Magnetospheric Superthermal Electron Transport: Photoelectron and Plasma Sheet Electron Sources

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Liemohn, M. W.; Kozyra, J. U.; Moore, T. E.

    1998-01-01

    Two time-dependent kinetic models of superthermal electron transport are combined to conduct global calculations of the nonthermal electron distribution function throughout the inner magnetosphere. It is shown that the energy range of validity for this combined model extends down to the superthermal-thermal intersection at a few eV, allowing for the calculation of the entire distribution function and thus an accurate heating rate to the thermal plasma. Because of the linearity of the formulas, the source terms are separated to calculate the distributions from the various populations, namely photoelectrons (PEs) and plasma sheet electrons (PSEs). These distributions are discussed in detail, examining the processes responsible for their formation in the various regions of the inner magnetosphere. It is shown that convection, corotation, and Coulomb collisions are the dominant processes in the formation of the PE distribution function and that PSEs are dominated by the interplay between the drift terms. Of note is that the PEs propagate around the nightside in a narrow channel at the edge of the plasmasphere as Coulomb collisions reduce the fluxes inside of this and convection compresses the flux tubes inward. These distributions are then recombined to show the development of the total superthermal electron distribution function in the inner magnetosphere and their influence on the thermal plasma. PEs usually dominate the dayside heating, with integral energy fluxes to the ionosphere reaching 10^10 eV/sq cm/s in the plasmasphere, while heating from the PSEs typically does not exceed 10^8 eV/sq cm/s. On the nightside, the inner plasmasphere is usually unheated by superthermal electrons. A feature of these combined spectra is that the distribution often has upward slopes with energy, particularly at the crossover from PE to PSE dominance, indicating that instabilities are possible.

  14. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  15. Simulation of a beam rotation system for a spallation source

    NASA Astrophysics Data System (ADS)

    Reiss, Tibor; Reggiani, Davide; Seidel, Mike; Talanov, Vadim; Wohlmuther, Michael

    2015-04-01

    With a nominal beam power of nearly 1 MW on target, the Swiss Spallation Neutron Source (SINQ) ranks among the world's most powerful spallation neutron sources. The proton beam transport to the SINQ target is carried out exclusively by means of linear magnetic elements. In the transport line to SINQ the beam is scattered in two meson production targets and, as a consequence, at the SINQ target entrance the beam shape can be described by Gaussian distributions in the transverse x and y directions with tails cut short by collimators. This leads to a highly nonuniform power distribution inside the SINQ target, giving rise to thermal and mechanical stresses. In view of a future proton beam intensity upgrade, the possibility of homogenizing the beam distribution by means of a fast beam rotation system is currently under investigation. Important aspects which need to be studied are the impact of a rotating proton beam on the resulting neutron spectra and spatial flux distributions, and additional, previously absent, proton losses causing unwanted activation of accelerator components. Hence a new source description method was developed for the radiation transport code MCNPX. This new feature makes direct use of the results from the proton beam optics code TURTLE. Its advantage over existing MCNPX source options is that all phase space information and correlations of each primary beam particle computed with TURTLE are preserved and transferred to MCNPX. Simulations of the different beam distributions together with their consequences in terms of neutron production are presented in this publication. Additionally, a detailed description of the coupling method between TURTLE and MCNPX is provided.

  16. A new traffic model with a lane-changing viscosity term

    NASA Astrophysics Data System (ADS)

    Ko, Hung-Tang; Liu, Xiao-He; Guo, Ming-Min; Wu, Zheng

    2015-09-01

    In this paper, a new continuum traffic flow model is proposed, with a lane-changing source term in the continuity equation and a lane-changing viscosity term in the acceleration equation. Based on previous literature, the source term addresses the impact of the speed difference and density difference between adjacent lanes, which provides better precision for free lane-changing simulation; the viscosity term turns lane-changing behavior into a "force" that may influence the speed distribution. Using a flux-splitting scheme for the model discretization, two cases are investigated numerically. The case under a homogeneous initial condition shows that the numerical results of our model agree well with the analytical ones; the case with a small initial disturbance shows that our model can simulate the evolution of perturbations, including propagation, dissipation, the cluster effect and the stop-and-go phenomenon. Project supported by the National Natural Science Foundation of China (Grant Nos. 11002035 and 11372147) and the Hui-Chun Chin and Tsung-Dao Lee Chinese Undergraduate Research Endowment (Grant No. CURE 14024).
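
    In schematic form (our notation, with assumed closure terms rather than the paper's exact expressions), the model structure described above is:

      \begin{align*}
        \partial_t \rho + \partial_x(\rho v) &= S,
          \qquad S = \alpha\,\Delta\rho_{\text{adj}} + \beta\,\Delta v_{\text{adj}}
          \quad\text{(lane-changing source)},\\
        \partial_t v + v\,\partial_x v &= \frac{V_e(\rho) - v}{\tau}
          + \nu_{\mathrm{lc}}\,\partial_{xx} v
          \quad\text{(lane-changing viscosity)},
      \end{align*}

    where $\Delta\rho_{\text{adj}}$ and $\Delta v_{\text{adj}}$ denote density and speed differences between adjacent lanes, $V_e(\rho)$ is an equilibrium speed-density relation, and $\nu_{\mathrm{lc}}$ is the lane-changing viscosity coefficient.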

  17. POI Summarization by Aesthetics Evaluation From Crowd Source Social Media.

    PubMed

    Qian, Xueming; Li, Cheng; Lan, Ke; Hou, Xingsong; Li, Zhetao; Han, Junwei

    2018-03-01

    Place-of-Interest (POI) summarization by aesthetics evaluation can recommend a set of POI images to the user and it is significant in image retrieval. In this paper, we propose a system that summarizes a collection of POI images regarding both aesthetics and diversity of the distribution of cameras. First, we generate visual albums by a coarse-to-fine POI clustering approach and then generate 3D models for each album by the collected images from social media. Second, based on the 3D to 2D projection relationship, we select candidate photos in terms of the proposed crowd source saliency model. Third, in order to improve the performance of aesthetic measurement model, we propose a crowd-sourced saliency detection approach by exploring the distribution of salient regions in the 3D model. Then, we measure the composition aesthetics of each image and we explore crowd source salient feature to yield saliency map, based on which, we propose an adaptive image adoption approach. Finally, we combine the diversity and the aesthetics to recommend aesthetic pictures. Experimental results show that the proposed POI summarization approach can return images with diverse camera distributions and aesthetics.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.

    Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as ~623 Tg CH4, giving an atmospheric lifetime for methane of ~8.3 years. The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of ~611 Tg CH4, giving an atmospheric lifetime for methane of ~8.5 years. The most significant difference between the two models was the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source for SF1 of ~524 Tg CH4, giving an atmospheric lifetime of ~10.0 years, and for SF2 of ~514 Tg CH4, yielding a lifetime of ~10.2 years.

  19. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltage data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach successfully reveals the presence of the anomaly and inverts its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.
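
    Once the kernel matrix is in hand, the per-time-gate inversion is an ordinary regularized linear least-squares problem. A minimal sketch with a synthetic kernel and a Tikhonov penalty (all values assumed):

      # Sketch of the linear step: recover the secondary source current density
      # s from secondary voltages d via a fixed kernel K and ridge regression.
      import numpy as np

      rng = np.random.default_rng(7)
      n_obs, n_cells = 64, 400
      K = rng.normal(size=(n_obs, n_cells)) / np.sqrt(n_cells)  # stand-in kernel

      s_true = np.zeros(n_cells)
      s_true[180:200] = 1.0                      # compact chargeable anomaly
      d = K @ s_true + rng.normal(0, 0.01, n_obs)

      lam = 0.05                                 # Tikhonov regularization weight
      A = K.T @ K + lam * np.eye(n_cells)
      s_hat = np.linalg.solve(A, K.T @ d)        # one solve per time gate

      print(f"recovered anomaly mean: {s_hat[180:200].mean():.2f}")
      print(f"background mean:        {np.abs(s_hat[:150]).mean():.3f}")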

  20. Flow Instability Tests for a Particle Bed Reactor Nuclear Thermal Rocket Fuel Element

    DTIC Science & Technology

    1993-05-01

    2.0 with GWBASIC or higher (DOS 5.0 was installed on the machine). Since the source code was written in BASIC, it was easy to make modifications... AVAILABILITY STATEMENT / 12b. DISTRIBUTION CODE: Approved for Public Release IAW 190-1, Distribution Unlimited. MICHAEL M. BRICKER, SMSgt, USAF, Chief...Administration. 13. ABSTRACT (Maximum 200 words)... 14. SUBJECT TERMS... 15. NUMBER OF PAGES: 339

  1. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and documents are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top-ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence, including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
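
    At its simplest, scoring under such a model reduces to a log-linear combination of potential-function values. The toy sketch below invents three feature potentials and weights purely for illustration; the paper's actual potentials and inference are considerably richer:

      # Toy log-linear scoring in the spirit of an MRF relevance model:
      # score(passage) = exp(sum_c w_c * phi_c(passage)). Features and weights
      # here are invented stand-ins, not the paper's components.
      import math

      def score(passage_feats, weights):
          return math.exp(sum(weights[c] * v for c, v in passage_feats.items()))

      weights = {"term_match": 1.0, "concept_match": 1.5, "topic_match": 0.8}
      passages = {
          "p1": {"term_match": 2.0, "concept_match": 1.0, "topic_match": 0.2},
          "p2": {"term_match": 1.0, "concept_match": 2.0, "topic_match": 1.0},
      }
      ranked = sorted(passages, key=lambda p: score(passages[p], weights), reverse=True)
      print(ranked)   # passages ordered by joint-potential score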

  2. Mass discharge assessment at a brominated DNAPL site: Effects of known DNAPL source mass removal

    NASA Astrophysics Data System (ADS)

    Johnston, C. D.; Davis, G. B.; Bastow, T. P.; Woodbury, R. J.; Rao, P. S. C.; Annable, M. D.; Rhodes, S.

    2014-08-01

    Management and closure of contaminated sites is increasingly being proposed on the basis of mass flux of dissolved contaminants in groundwater. Better understanding of the links between source mass removal and contaminant mass fluxes in groundwater would allow greater acceptance of this metric in dealing with contaminated sites. Our objectives here were to show how measurements of the distribution of contaminant mass flux and the overall mass discharge emanating from the source under undisturbed groundwater conditions could be related to the processes and extent of source mass depletion. In addition, these estimates of mass discharge were sought in the application of agreed remediation targets set in terms of pumped groundwater quality from offsite wells. Results are reported from field studies conducted over a 5-year period at a brominated DNAPL (tetrabromoethane, TBA; and tribromoethene, TriBE) site located in suburban Perth, Western Australia. Groundwater fluxes (qw; L³/L²/T) and mass fluxes (Jc; M/L²/T) of dissolved brominated compounds were simultaneously estimated by deploying Passive Flux Meters (PFMs) in wells in a heterogeneous layered aquifer. PFMs were deployed in control plane (CP) wells immediately down-gradient of the source zone, before (2006) and after (2011) 69-85% of the source mass was removed, mainly by groundwater pumping from the source zone. The high-resolution (26-cm depth interval) measures of qw and Jc along the source CP allowed investigation of the DNAPL source-zone architecture and the impacts of source mass removal. Comparable estimates of total mass discharge (MD; M/T) across the source zone CP reduced from 104 g day⁻¹ to 24-31 g day⁻¹ (70-77% reductions). Importantly, this mass discharge reduction was consistent with the estimated proportion of source mass remaining at the site (15-31%); that is, a linear relationship between mass discharge and source mass is suggested. The spatial detail of groundwater and mass flux distributions also provided further evidence of the source zone architecture and DNAPL mass depletion processes. This was especially apparent in different mass-depletion rates from distinct parts of the CP. High mass fluxes and groundwater fluxes located near the base of the aquifer dominated in terms of the dissolved mass flux in the profile, although not in terms of concentrations. Reductions observed in Jc and MD were used to better target future remedial efforts. Integration of the observations from the PFM deployments and the source mass depletion provided a basis for establishing flux-based management criteria for the site.
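
    The mass discharge MD reported above is, in essence, local mass fluxes integrated over the control plane. A minimal sketch of that bookkeeping (Python; interval sizes and flux values are invented for illustration):

      import numpy as np

      # Hypothetical PFM profile for one well: mass flux Jc (g/m^2/day)
      # over 26-cm depth intervals, and the control-plane width (m) that
      # the well represents.
      Jc = np.array([0.05, 0.40, 1.80, 0.30, 0.02])
      interval_height = 0.26                      # m
      well_width = 2.0                            # m

      # Mass discharge MD (g/day) = sum of flux times area per interval.
      MD = float(np.sum(Jc * interval_height * well_width))
      print(f"MD = {MD:.2f} g/day")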

  3. Mass discharge assessment at a brominated DNAPL site: Effects of known DNAPL source mass removal.

    PubMed

    Johnston, C D; Davis, G B; Bastow, T P; Woodbury, R J; Rao, P S C; Annable, M D; Rhodes, S

    2014-08-01

    Management and closure of contaminated sites is increasingly being proposed on the basis of mass flux of dissolved contaminants in groundwater. Better understanding of the links between source mass removal and contaminant mass fluxes in groundwater would allow greater acceptance of this metric in dealing with contaminated sites. Our objectives here were to show how measurements of the distribution of contaminant mass flux and the overall mass discharge emanating from the source under undisturbed groundwater conditions could be related to the processes and extent of source mass depletion. In addition, these estimates of mass discharge were sought in the application of agreed remediation targets set in terms of pumped groundwater quality from offsite wells. Results are reported from field studies conducted over a 5-year period at a brominated DNAPL (tetrabromoethane, TBA; and tribromoethene, TriBE) site located in suburban Perth, Western Australia. Groundwater fluxes (qw; L³/L²/T) and mass fluxes (Jc; M/L²/T) of dissolved brominated compounds were simultaneously estimated by deploying Passive Flux Meters (PFMs) in wells in a heterogeneous layered aquifer. PFMs were deployed in control plane (CP) wells immediately down-gradient of the source zone, before (2006) and after (2011) 69-85% of the source mass was removed, mainly by groundwater pumping from the source zone. The high-resolution (26-cm depth interval) measures of qw and Jc along the source CP allowed investigation of the DNAPL source-zone architecture and the impacts of source mass removal. Comparable estimates of total mass discharge (MD; M/T) across the source zone CP reduced from 104 g day⁻¹ to 24-31 g day⁻¹ (70-77% reductions). Importantly, this mass discharge reduction was consistent with the estimated proportion of source mass remaining at the site (15-31%); that is, a linear relationship between mass discharge and source mass is suggested. The spatial detail of groundwater and mass flux distributions also provided further evidence of the source zone architecture and DNAPL mass depletion processes. This was especially apparent in different mass-depletion rates from distinct parts of the CP. High mass fluxes and groundwater fluxes located near the base of the aquifer dominated in terms of the dissolved mass flux in the profile, although not in terms of concentrations. Reductions observed in Jc and MD were used to better target future remedial efforts. Integration of the observations from the PFM deployments and the source mass depletion provided a basis for establishing flux-based management criteria for the site.

  4. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and the model evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques using synthetic data and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  5. Computational studies for a multiple-frequency electron cyclotron resonance ion source (abstract)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alton, G.D.

    1996-03-01

    The number density of electrons, the energy (electron temperature), and the energy distribution are three of the fundamental properties which govern the performance of electron cyclotron resonance (ECR) ion sources in terms of their capability to produce high charge state ions. The maximum electron energy is affected by several processes, including the ability of the plasma to absorb power. In principle, the performance of an ECR ion source can be improved by increasing the physical size of the ECR zone in relation to the total plasma volume. The ECR zones can be increased either in the spatial or the frequency domain in any ECR ion source based on B-minimum plasma confinement principles. The former technique requires the design of a carefully tailored magnetic field geometry so that the central region of the plasma volume is a large, uniformly distributed plasma volume which surrounds the axis of symmetry, as proposed in Ref. Present art forms of the ECR source utilize single-frequency microwave power supplies to maintain the plasma discharge; because the magnetic field distribution continually changes in this source design, the ECR zones are relegated to thin "surfaces" which surround the axis of symmetry. As a consequence of the small ECR zone in relation to the total plasma volume, the probability for stochastic heating of the electrons is quite low, thereby compromising the source performance. This handicap can be overcome by use of broadband, multiple-frequency microwave power, as evidenced by the enhanced performances of the CAPRICE and AECR ion sources when two-frequency microwave power was utilized. We have used particle-in-cell codes to simulate the magnetic field distributions in these sources and to demonstrate the advantages of using multiple, discrete frequencies over single frequencies to power conventional ECR ion sources. (Abstract Truncated)

  6. Dust temperature distributions in star-forming condensations

    NASA Technical Reports Server (NTRS)

    Xie, Taoling; Goldsmith, Paul F.; Snell, Ronald L.; Zhou, Weimin

    1993-01-01

    The FIR spectra of the central IR condensations in the dense cores of molecular clouds AFGL 2591, B335, L1551, Mon R2, and Sgr B2 are reanalyzed here in terms of the distribution of dust mass as a function of temperature. FIR spectra of these objects can be characterized reasonably well by a given functional form. The general shapes of the dust temperature distributions of these objects are similar and closely resemble the theoretical computations of de Muizon and Rouan (1985) for a sample of 'hot centered' clouds with active star formation. Specifically, the model yields a 'cutoff' temperature below which essentially no dust is needed to interpret the dust emission spectra, and most of the dust mass is distributed in a broad temperature range of a few tens of degrees above the cutoff temperature. Mass, luminosity, average temperature, and column density are obtained, and it is found that these physical quantities differ considerably, and meaningfully, from source to source.

  7. Poster - Thur Eve - 06: Comparison of an open source genetic algorithm to the commercially used IPSA for generation of seed distributions in LDR prostate brachytherapy.

    PubMed

    McGeachy, P; Khan, R

    2012-07-01

    In early-stage prostate cancer, low dose rate (LDR) prostate brachytherapy is a favorable treatment modality, in which small radioactive seeds are permanently implanted throughout the prostate. Treatment centres currently rely on a commercial optimization algorithm, IPSA, to generate seed distributions for treatment plans. However, commercial software does not allow the user access to the source code, thus reducing the flexibility for treatment planning and impeding any implementation of new and, perhaps, improved clinical techniques. An open source genetic algorithm (GA) has been encoded in MATLAB to generate seed distributions for a simplified prostate and urethra model. To assess the quality of the seed distributions created by the GA, both the GA and IPSA were used to generate seed distributions for two clinically relevant scenarios, and the quality of the GA distributions relative to the IPSA distributions and to clinically accepted standards was investigated. The first clinically relevant scenario involved generating seed distributions for three different prostate volumes (19.2 cc, 32.4 cc, and 54.7 cc). The second scenario involved generating distributions for three separate seed activities (0.397 mCi, 0.455 mCi, and 0.5 mCi). Both the GA and IPSA met the clinically accepted criteria for the two scenarios, with distributions produced by the GA comparable to IPSA in terms of full coverage of the prostate by the prescribed dose and minimized dose to the urethra, which passes straight through the prostate. Further, the GA offered improved reduction of high dose regions (i.e., hot spots) within the planned target volume.
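
    As a schematic of the approach (the clinical dose objective and the original MATLAB implementation are not reproduced; this generic Python skeleton only illustrates the select-crossover-mutate loop over candidate seed placements):

      import random

      def genetic_algorithm(fitness, n_seeds, n_positions, pop_size=50,
                            generations=200, p_mut=0.02):
          # Evolve binary seed-placement vectors; fitness() must reward
          # target coverage and penalize urethral dose.
          def random_plan():
              plan = [0] * n_positions
              for i in random.sample(range(n_positions), n_seeds):
                  plan[i] = 1
              return plan

          pop = [random_plan() for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              survivors = pop[: pop_size // 2]
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  cut = random.randrange(1, n_positions)
                  child = a[:cut] + b[cut:]          # one-point crossover
                  child = [g ^ 1 if random.random() < p_mut else g
                           for g in child]           # bit-flip mutation
                  children.append(child)
              pop = survivors + children
          return max(pop, key=fitness)

      # Toy fitness: prefer exactly 10 seeds, roughly centred (a stand-in
      # for the dosimetric objectives used clinically).
      def toy_fitness(plan):
          centre = sum(i * g for i, g in enumerate(plan)) / max(sum(plan), 1)
          return -abs(sum(plan) - 10) - abs(centre - len(plan) / 2)

      best = genetic_algorithm(toy_fitness, n_seeds=10, n_positions=40)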

  8. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparing the temporal concentration profiles obtained at the various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  9. Dispersion modeling of polycyclic aromatic hydrocarbons from combustion of biomass and fossil fuels and production of coke in Tianjin, China.

    PubMed

    Tao, Shu; Li, Xinrong; Yang, Yu; Coveney, Raymond M; Lu, Xiaoxia; Chen, Haitao; Shen, Weiran

    2006-08-01

    A USEPA procedure, ISCLT3 (Industrial Source Complex Long-Term), was applied to model the spatial distribution of polycyclic aromatic hydrocarbons (PAHs) emitted from various sources, including coal, petroleum, natural gas, and biomass, into the atmosphere of Tianjin, China. Benzo[a]pyrene equivalent concentrations (BaPeq) were calculated for risk assessment. Model results were provisionally validated for concentrations and profiles based on the observed data at two monitoring stations. The dominant emission sources in the area were domestic coal combustion, coke production, and biomass burning. Mainly because of the difference in the emission heights, the contributions of various sources to the average concentrations at receptors differ from the proportions emitted. The share of domestic coal increased from approximately 43% at the sources to 56% at the receptors, while the contribution of the coking industry decreased from approximately 23% at the sources to 7% at the receptors. The spatial distributions of gaseous and particulate PAHs were similar, with higher concentrations occurring within urban districts because of domestic coal combustion. With relatively smaller contributions, the other minor sources had limited influence on the overall spatial distribution. The calculated average BaPeq value in air was 2.54 ± 2.87 ng/m³ on an annual basis. Although only 2.3% of the area of Tianjin exceeded the national standard of 10 ng/m³, 41% of the entire population lives within this area.
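
    The benzo[a]pyrene-equivalent concentration used for risk assessment is a weighted sum of individual PAH concentrations. A minimal sketch (Python; the toxic equivalency factors shown are illustrative literature-style values, not necessarily those used in the study):

      # Hypothetical measured concentrations (ng/m^3) and illustrative TEFs.
      concentrations = {"benzo[a]pyrene": 2.1, "benz[a]anthracene": 5.3,
                        "chrysene": 7.8, "benzo[b]fluoranthene": 4.2}
      tef = {"benzo[a]pyrene": 1.0, "benz[a]anthracene": 0.1,
             "chrysene": 0.01, "benzo[b]fluoranthene": 0.1}

      bap_eq = sum(concentrations[pah] * tef[pah] for pah in concentrations)
      print(f"BaPeq = {bap_eq:.2f} ng/m^3")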

  10. Operating envelopes of particle sizing instrumentation used for icing research

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1987-01-01

    The Forward Scattering Spectrometer Probe and the Optical Array Probe are analyzed in terms of their ability to make accurate determinations of water droplet size distributions. Sources of counting and sizing errors are explained. The paper describes ways of identifying these errors and how they can affect measurement.

  11. Assessment of Methane Emissions from Oil and Gas Production Pads using Mobile Measurements

    EPA Science Inventory

    Journal Article Abstract --- "A mobile source inspection approach called OTM 33A was used to quantify short-term methane emission rates from 218 oil and gas production pads in Texas, Colorado, and Wyoming from 2010 to 2013. The emission rates were log-normally distributed with ...

  12. Non-Gaussian limit fluctuations in active swimmer suspensions

    NASA Astrophysics Data System (ADS)

    Kurihara, Takashi; Aridome, Msato; Ayade, Heev; Zaid, Irwin; Mizuno, Daisuke

    2017-03-01

    We investigate the hydrodynamic fluctuations in suspensions of swimming microorganisms (Chlamydomonas) by observing probe particles dispersed in the media. Short-term fluctuations of the probe particles were superdiffusive and displayed heavy-tailed non-Gaussian distributions. The analytical theory that explains the observed distribution was derived by summing the power-law-decaying hydrodynamic interactions from spatially distributed field sources (here, swimming microorganisms). The summing procedure, which we refer to as the physical limit operation, is applicable to a variety of physical fluctuations to which the classical central limit theorem does not apply. Extending the analytical formula for comparison to experiments in active swimmer suspensions, we show that the non-Gaussian shape of the observed distribution obeys the analytic theory, concomitantly with independently determined parameters such as the strength of force generation and the concentration of Chlamydomonas. The time evolution of the distributions collapsed to a single master curve, except for their extreme tails, for which our theory presents a qualitative explanation. These investigations and the complete agreement with theoretical predictions revealed the broad applicability of the formula to dispersions of active sources of fluctuations.
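
    The summing procedure the authors describe — power-law-decaying contributions from randomly placed sources — is easy to mimic numerically. The toy Monte Carlo below (Python, with invented parameters) shows how such sums develop heavy, non-Gaussian tails:

      import numpy as np

      rng = np.random.default_rng(1)
      n_samples, n_swimmers, box = 5000, 200, 100.0

      samples = np.empty(n_samples)
      for k in range(n_samples):
          # Random swimmer positions around a probe at the origin; each
          # contributes a velocity decaying as 1/r^2 with random sign.
          pos = rng.uniform(-box, box, size=(n_swimmers, 3))
          r = np.linalg.norm(pos, axis=1)
          sign = rng.choice([-1.0, 1.0], size=n_swimmers)
          samples[k] = np.sum(sign / r**2)

      # Excess kurtosis far above 0 signals heavy, non-Gaussian tails.
      x = samples - samples.mean()
      print((x**4).mean() / x.var()**2 - 3)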

  13. True gender ratios and stereotype rating norms

    PubMed Central

    Garnham, Alan; Doehren, Sam; Gygax, Pascal

    2015-01-01

    We present a study comparing, in English, perceived distributions of men and women in 422 named occupations with actual real-world distributions. The first set of data was obtained from a previous large-scale norming study, whereas the second set was mostly drawn from UK governmental sources. In total, real-world ratios for 290 occupations were obtained for our perceived vs. real-world comparison, of which 205 were deemed to be unproblematic. The means for the two sources were similar and the correlation between them was high, suggesting that people are generally accurate at judging real gender ratios, though there were some notable exceptions. Besides this correlation, some interesting patterns emerged from the two sources, suggesting some response strategies when people complete norming studies. We discuss these patterns in terms of the way real-world data might complement norming studies in determining gender stereotypicality. PMID:26257681

  14. A general circulation model study of atmospheric carbon monoxide

    NASA Technical Reports Server (NTRS)

    Pinto, J. P.; Rind, D.; Russell, G. L.; Lerner, J. A.; Hansen, J. E.; Yung, Y. L.; Hameed, S.

    1983-01-01

    The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low latitude plant source of about 1.3 × 10¹⁵ g/yr, in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 750,000 cm⁻³. Models that calculate globally averaged OH concentrations much lower than this nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources.

  15. Distributions of occupied and vacant butterfly habitats in fragmented landscapes.

    PubMed

    Thomas, C D; Thomas, J A; Warren, M S

    1992-12-01

    We found several rare UK butterflies to be restricted to relatively large and non-isolated habitat patches, while small patches and those that are isolated from population sources remain vacant. These patterns of occurrence are generated by the dynamic processes of local extinction and colonization. Habitat patches act as terrestrial archipelagos in which long-term population persistence, and hence effective long-term conservation, rely on networks of suitable habitats, sufficiently close to allow natural dispersal.

  16. Near-field sound radiation of fan tones from an installed turbofan aero-engine.

    PubMed

    McAlpine, Alan; Gaffney, James; Kingan, Michael J

    2015-09-01

    The development of a distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is reported. The key objective is to examine a canonical problem: how to predict the pressure field due to a distributed source located near an infinite, rigid cylinder. This canonical problem is a simple representation of an installed turbofan, where the distributed source is based on the pressure pattern generated by a spinning duct mode, and the rigid cylinder represents an aircraft fuselage. The radiation of fan tones can be modelled in terms of spinning modes. In this analysis, based on duct modes, theoretical expressions for the near-field acoustic pressures on the cylinder, or at the same locations without the cylinder, have been formulated. Simulations of the near-field acoustic pressures are compared against measurements obtained from a fan rig test. Also, the installation effect is quantified by calculating the difference in the sound pressure levels with and without the adjacent cylindrical fuselage. Results are shown for the blade passing frequency fan tone radiated at a supersonic fan operating condition.

  17. Levels and distributions of organochlorine pesticides in the soil-groundwater system of vegetable planting area in Tianjin City, Northern China.

    PubMed

    Pan, Hong-Wei; Lei, Hong-Jun; He, Xiao-Song; Xi, Bei-Dou; Han, Yu-Ping; Xu, Qi-Gong

    2017-04-01

    To study the influence of long-term pesticide application on the distribution of organochlorine pesticides (OCPs) in the soil-groundwater system, 19 soil samples and 19 groundwater samples were collected from an agricultural area with a long history of pesticide application in Northern China. Results showed that the composition of OCPs changed significantly from soil to groundwater. For example, ∑DDT, ∑HCH, and ∑heptachlor had high levels in the soil and low levels in the groundwater; in contrast, endrin had a low level in the soil and a high level in the groundwater. Further study showed that OCP distribution in the soil was significantly influenced by the residue time, the soil organic carbon level, and the small soil particle content (i.e., <0.0002 mm). Correlation analysis also indicates that the distribution of OCPs in the groundwater was closely related to the levels of OCPs in the soil layer, which may act as a pollution source.

  18. On the Development of Spray Submodels Based on Droplet Size Moments

    NASA Astrophysics Data System (ADS)

    Beck, J. C.; Watkins, A. P.

    2002-11-01

    Hitherto, all polydisperse spray models have been based on discretising the liquid flow field into groups of equally sized droplets. The authors have recently developed a spray model that captures the full polydisperse nature of the spray flow without using droplet size classes (Beck, 2000, Ph.D. thesis, UMIST; Beck and Watkins, 2001, Proc. R. Soc. London A). The parameters used to describe the distribution of droplet sizes are the moments of the droplet size distribution function. Transport equations are written for the two moments which represent the liquid mass and surface area, and two more moments, representing the sum of drop radii and the droplet number, are approximated via a presumed distribution function, which is allowed to vary in space and time. The velocities to be used in the two transport equations are obtained by defining moment-average quantities and constructing further transport equations for the relevant moment-average velocities. An equation for the energy of the liquid phase and standard gas phase equations, including a k-ɛ turbulence model, are also solved. All the equations are solved in an Eulerian framework using the finite-volume approach, and the phases are coupled through source terms. Effects such as interphase drag, droplet breakup, and droplet-droplet collisions are also captured through the use of source terms. The development of the submodels to describe these effects is the subject of this paper. All the source terms for the hydrodynamics of the spray are derived in this paper in terms of the four moments of the droplet size distribution in order to find the net effect on the whole spray flow field. The development of similar submodels to describe heat and mass transfer effects between the phases is the subject of a further paper (Beck and Watkins, 2001, J. Heat Fluid Flow). The model has been applied to a wide variety of different sprays, including high-pressure diesel sprays, wide-angle solid-cone water sprays, hollow-cone sprays, and evaporating sprays. The comparisons of the results with experimental data show that the model performs well. The interphase drag model, along with the model for the turbulent dispersion of the liquid, produces excellent agreement in the spray penetration results, and the moment-average velocity approach gives good radial distributions of droplet size, showing the capability of the model to predict polydisperse behaviour. The submodels for droplet breakup, collisions, and evaporation (see Beck and Watkins, 2001, J. Heat Fluid Flow) also perform well.
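
    To make the moment formulation concrete, the short sketch below (Python, with an invented droplet sample) computes the four moments that the transport equations carry and one derived mean diameter:

      import numpy as np

      r = np.array([5.0, 12.0, 20.0, 33.0, 47.0]) * 1e-6   # radii (m)
      n = np.array([400.0, 250.0, 120.0, 40.0, 8.0])       # counts (made up)

      # Moments of the droplet size distribution: Q_k = sum(n_i * r_i^k).
      # Q0 ~ droplet number, Q1 ~ sum of radii, Q2 ~ surface area (x 4 pi),
      # Q3 ~ liquid volume (x 4/3 pi) -- the quantities the transport
      # equations carry.
      Q = [float(np.sum(n * r**k)) for k in range(4)]
      r32 = Q[3] / Q[2]               # Sauter (volume-to-surface) mean radius
      print(f"r32 = {r32 * 1e6:.1f} um")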

  19. Homogenization of the Brush Problem with a Source Term in L¹

    NASA Astrophysics Data System (ADS)

    Gaudiello, Antonio; Guibé, Olivier; Murat, François

    2017-07-01

    We consider a domain which has the form of a brush in 3D or the form of a comb in 2D, i.e. an open set which is composed of cylindrical vertical teeth distributed over a fixed basis. All the teeth have a similar fixed height; their cross sections can vary from one tooth to another and are not supposed to be smooth; moreover the teeth can be adjacent, i.e. they can share parts of their boundaries. The diameter of every tooth is supposed to be less than or equal to ɛ, and the asymptotic volume fraction of the teeth (as ɛ tends to zero) is supposed to be bounded from below away from zero, but no periodicity is assumed on the distribution of the teeth. In this domain we study the asymptotic behavior (as ɛ tends to zero) of the solution of a second order elliptic equation with a zeroth order term which is bounded from below away from zero, when the homogeneous Neumann boundary condition is satisfied on the whole of the boundary. First, we revisit the problem where the source term belongs to L². This is a classical problem, but our homogenization result takes place in a geometry which is more general than the ones which have been considered before. Moreover we prove a corrector result which is new. Then, we study the case where the source term belongs to L¹. Working in the framework of renormalized solutions and introducing a definition of renormalized solutions for degenerate elliptic equations where only the vertical derivative is involved (such a definition is new), we identify the limit problem and prove a corrector result.

  20. Coherent attacking continuous-variable quantum key distribution with entanglement in the middle

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoyuan; Shi, Ronghua; Zeng, Guihua; Guo, Ying

    2018-06-01

    We suggest an approach to coherent attacks on continuous-variable quantum key distribution (CVQKD) with an untrusted entangled source in the middle. The coherent attack strategy can be performed on the double links of the quantum system, enabling the eavesdropper to steal more information from the proposed scheme by using the entanglement correlation. Numerical simulation results show the improved performance of the attacked CVQKD system in terms of the derived secret key rate, with the controllable parameters maximizing the stolen information.

  1. Parameter Measurement Methods for Interfacing Hydraulic Systems with Microelectronic Instruments and Controllers.

    DTIC Science & Technology

    1983-11-01

    successfully. [Report front matter (accession and distribution codes) omitted.] ...in terms of initial signal power. An active sensor must be excited externally. Such a sensor receives its power from an external source and merely modulates... electrons in the material to gain enough energy to be emitted. The voltage source causes a positive potential to be felt on the collector, thus causing the

  2. Noise-enhanced CVQKD with untrusted source

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqun; Huang, Chunhui

    2017-06-01

    The performance of one-way and two-way continuous variable quantum key distribution (CVQKD) protocols can be increased by adding some noise on the reconciliation side. In this paper, we propose to add noise at the reconciliation end to improve the performance of CVQKD with untrusted source. We derive the key rate of this case and analyze the impact of the additive noise. The simulation results show that the optimal additive noise can improve the performance of the system in terms of maximum transmission distance and tolerable excess noise.

  3. Development of Load Duration Curve System in Data Scarce Watersheds Based on a Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    WANG, J.

    2017-12-01

    In stream water quality control, the total maximum daily load (TMDL) program is very effective. However, the load duration curves (LDC) used in TMDL are difficult to establish in data-scarce watersheds, where no hydrological stations or long consecutive hydrological records are available. Moreover, although point sources and non-point sources of pollutants can be distinguished easily with the aid of LDC, the LDC cannot trace where a pollutant comes from or where it will be transported within the watershed. To find the best management practices (BMPs) for pollutants in a watershed, and to overcome this limitation of LDC, we propose developing LDC based on the distributed hydrological model SWAT for water quality management in data-scarce river basins. In this study, the SWAT model was first established with the scarce hydrological data. Then, long-term daily flows were generated with the established SWAT model and rainfall data from the adjacent weather station. A flow duration curve (FDC) was then developed from the daily flows generated by the SWAT model. Given the goals of water quality management, LDC curves for different pollutants can be obtained from the FDC. With the monitored water quality data and the LDC curves, the water quality problems caused by point or non-point source pollutants in different seasons can be ascertained. Finally, the SWAT model was employed again to trace the spatial distribution and origin of the pollutants, i.e., which agricultural practices and/or other human activities they come from. A case study was conducted in the Jian-jiang river, a tributary of the Yangtze river, in Duyun city, Guizhou province. Results indicate that this method can realize water quality management based on TMDL and identify suitable BMPs for reducing pollutants in a watershed.
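
    The flow duration curve step is straightforward once the calibrated model supplies daily flows. A minimal sketch using the Weibull plotting position (Python; synthetic flows stand in for the SWAT output):

      import numpy as np

      def flow_duration_curve(daily_flows):
          # Return (exceedance %, sorted flows) using the Weibull
          # plotting position rank/(n+1).
          q = np.sort(np.asarray(daily_flows))[::-1]
          exceedance = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)
          return exceedance, q

      # Synthetic stand-in for ten years of model-generated daily flow (m^3/s):
      rng = np.random.default_rng(7)
      flows = rng.lognormal(mean=1.0, sigma=0.8, size=3650)
      p, q = flow_duration_curve(flows)
      # An LDC follows by multiplying q by the applicable water-quality
      # target concentration (with unit conversion) at each exceedance level.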

  4. Nuclear risk analysis of the Ulysses mission

    NASA Astrophysics Data System (ADS)

    Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.

    1991-01-01

    The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The starting point for the analysis described herein is the input of source term probability distributions provided by the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor is determined, and the functional relationship among all the factors is established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDFs) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
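
    A stripped-down version of this Monte Carlo combination, ending in a CCDF, can be sketched as follows (Python; all distributions and parameter values are invented placeholders, not the FSAR inputs):

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Hypothetical factors: release fraction, dispersion, dose conversion.
      source = rng.lognormal(mean=0.0, sigma=1.0, size=n)
      dispersion = rng.lognormal(mean=-2.0, sigma=0.5, size=n)
      dose_per_unit = rng.normal(loc=1.0, scale=0.1, size=n).clip(min=0.0)

      consequence = source * dispersion * dose_per_unit

      # CCDF: probability that the consequence equals or exceeds each x.
      x = np.logspace(-4, 2, 50)
      ccdf = [(consequence >= xi).mean() for xi in x]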

  5. Spatial distribution and migration of nonylphenol in groundwater following long-term wastewater irrigation.

    PubMed

    Wang, Shiyu; Wu, Wenyong; Liu, Fei; Yin, Shiyang; Bao, Zhe; Liu, Honglu

    2015-01-01

    Seen as a solution to water shortages, wastewater reuse for crop irrigation does, however, pose a risk owing to the potential release of organic contaminants into soil and water. The frequency of detection (FOD), concentration, and migration of nonylphenol (NP) isomers in reclaimed water (FODRW), surface water (FODSW), and groundwater (FODGW) were investigated in a long-term wastewater irrigation area in Beijing. The FODRW, FODSW, and FODGW of any or all of 12 NP isomers were 66.7% to 100%, 76.9% to 100%, and 13.3% to 60%, respectively. The mean (± standard deviation) NP concentrations of the reclaimed water, surface water, and groundwater (NPRW, NPSW, NPGW, respectively) were 469.4 ± 73.4 ng L⁻¹, 694.6 ± 248.7 ng L⁻¹, and 244.4 ± 230.8 ng L⁻¹, respectively. The existence of external pollution sources during water transmission and distribution resulted in NPSW exceeding NPRW. NP distribution in groundwater was related to the duration and quantity of wastewater irrigation and the sources of aquifer recharge, and was seen to decrease with increasing aquifer depth. Higher riverside infiltration rates nearby led to higher FODGW values. The migration rate of NP isomers was classified as high, moderate, or low.

  6. A hemispherical Langmuir probe array detector for angular resolved measurements on droplet-based laser-produced plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambino, Nadia, E-mail: gambinon@ethz.ch; Brandstätter, Markus; Rollinger, Bob

    2014-09-15

    In this work, a new diagnostic tool for laser-produced plasmas (LPPs) is presented. The detector is based on an array of six motorized Langmuir probes. It allows the dynamics of an LPP to be measured in terms of charged particle detection, with particular attention to droplet-based LPP sources for EUV lithography. The system design permits temporal resolution of the angular and radial plasma charge distribution and yields a hemispherical mapping of the ions and electrons around the droplet plasma. Understanding these dynamics is fundamental to improving debris mitigation techniques for droplet-based LPP sources. The device has been developed, built, and employed at the Laboratory for Energy Conversion, ETH Zürich. The experimental results have been obtained on the droplet-based LPP source ALPS II. For the first time, 2D mappings of the ion kinetic energy distribution around the droplet plasma have been obtained with an array of multiple Langmuir probes. These measurements show an anisotropic expansion of the ions in terms of kinetic energy and amount of ion charge around the droplet target. First estimates of the plasma density and electron temperature were also obtained from the analysis of the probe current signals.

  7. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

    Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising of these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions when the temporal structure of the data is taken into account. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients.
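
    The sparsity-promoting step can be pictured with a generic ADMM solve of an l1-regularized least-squares problem (a textbook sketch, not the SISSY code; the TV-like gradient term and the automatic thresholding are omitted):

      import numpy as np

      def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
          # min 0.5 ||A x - b||^2 + lam ||x||_1 via ADMM with splitting x = z.
          n = A.shape[1]
          Atb = A.T @ b
          L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once
          x = z = u = np.zeros(n)
          soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
          for _ in range(n_iter):
              x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
              z = soft(x + u, lam / rho)        # sparsity-promoting step
              u = u + x - z
          return z

      # Illustrative use with a random forward operator:
      rng = np.random.default_rng(3)
      A = rng.standard_normal((60, 120))
      x_true = np.zeros(120)
      x_true[[5, 40, 90]] = [2.0, -1.5, 1.0]
      b = A @ x_true + 0.01 * rng.standard_normal(60)
      x_hat = admm_lasso(A, b, lam=0.1)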

  8. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  9. Development a computer codes to couple PWR-GALE output and PC-CREAM input

    NASA Astrophysics Data System (ADS)

    Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.

    2018-02-01

    Radionuclide dispersion analysis is an important part of reactor safety analysis. From this analysis, the doses received by radiation workers and by communities around a nuclear reactor can be obtained. The radionuclide dispersion analysis under normal operating conditions is carried out using the PC-CREAM code, which requires input data such as a source term and a population distribution. These input data are derived from the output of another program, PWR-GALE, and from population distribution data written in a specific format. Compiling inputs for the PC-CREAM program manually requires high accuracy, as it involves large amounts of data in fixed formats, and manual compilation frequently introduces errors. To minimize such errors, a coupling program between PWR-GALE output and PC-CREAM input was created, together with a program for writing population distribution data in the required format. The programming was done in Python, which is cross-platform, object-oriented, and interactive. The result of this work is software that couples the source term data and writes the population distribution data, so that inputs to the PC-CREAM program can be prepared easily and formatting errors avoided. The coupling of the PWR-GALE and PC-CREAM source term data is complete, so that PC-CREAM source term and population distribution inputs can be created easily and in the desired format.
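
    A heavily simplified illustration of such a coupling script (Python; the file names, record layout, and field widths below are hypothetical, as the actual PWR-GALE and PC-CREAM formats are not reproduced here):

      def read_source_terms(path):
          # Parse a hypothetical PWR-GALE output of 'nuclide activity' pairs.
          terms = {}
          with open(path) as fh:
              for line in fh:
                  parts = line.split()
                  if len(parts) == 2:
                      terms[parts[0]] = float(parts[1])
          return terms

      def write_pc_cream_input(terms, population, path):
          # Write fixed-width records in a made-up PC-CREAM-like layout.
          with open(path, "w") as fh:
              for nuclide, activity in sorted(terms.items()):
                  fh.write(f"{nuclide:<8s}{activity:12.4e}\n")
              for sector, count in population:
                  fh.write(f"POP {sector:<6s}{count:10d}\n")

      # Hypothetical round trip (file names invented):
      # terms = read_source_terms("pwr_gale.out")
      # write_pc_cream_input(terms, [("N", 12000), ("NE", 8500)], "pc_cream.inp")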

  10. Fabrication and In Situ Testing of Scalable Nitrate-Selective Electrodes for Distributed Observations

    NASA Astrophysics Data System (ADS)

    Harmon, T. C.; Rat'ko, A.; Dietrich, H.; Park, Y.; Wijsboom, Y. H.; Bendikov, M.

    2008-12-01

    Inorganic nitrogen (nitrate (NO3-) and ammonium (NH4+)) from chemical fertilizer and livestock waste is a major source of pollution in groundwater, surface water, and the air. While some sources of these chemicals, such as waste lagoons, are well-defined, their application as fertilizer has the potential to create distributed, or non-point source, pollution problems. Scalable nitrate sensors (small and inexpensive) would enable us to better assess non-point source pollution processes in agronomic soils, groundwater, and rivers subject to non-point source inputs. This work describes the fabrication and testing of inexpensive PVC-membrane-based ion selective electrodes (ISEs) for monitoring nitrate levels in soil water environments. ISE-based sensors have the advantages of being easy to fabricate and use, but suffer several shortcomings, including limited sensitivity, poor precision, and calibration drift. However, modern materials have begun to yield more robust ISE types in laboratory settings. This work emphasizes the in situ behavior of commercial and fabricated sensors in soils subject to irrigation with dairy manure water. Results are presented in the context of deployment techniques (in situ versus soil lysimeters), temperature compensation, and uncertainty analysis. Observed temporal responses of the nitrate sensors exhibited diurnal cycling, with elevated nitrate levels at night and depressed levels during the day. Conventional samples collected via lysimeters validated this response. It is concluded that while modern ISEs are not yet ready for long-term, unattended deployment, short-term installations (on the order of 2 to 4 days) are viable and may provide valuable insights into nitrogen dynamics in complex soil systems.
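
    Converting an ISE voltage to a nitrate concentration follows a Nernstian calibration with temperature compensation, as alluded to above. A small sketch (Python; the intercept and reading are illustrative, to be fitted per electrode against standards):

      R, F = 8.314, 96485.0          # J/(mol K), C/mol

      def nitrate_from_voltage(E_mV, E0_mV, T_celsius):
          # Invert E = E0 + S log10(C) for a monovalent anion, with the
          # theoretical slope S = -2.303 R T / F (about -59.2 mV/decade
          # at 25 C, negative for NO3-). C inherits the units of the
          # calibration standards.
          T = T_celsius + 273.15
          S = -2.303 * R * T / F * 1000.0       # mV per decade
          return 10 ** ((E_mV - E0_mV) / S)

      # Illustrative reading: intercept fitted from mg/L standards,
      # probe at 18 C in soil water.
      c = nitrate_from_voltage(E_mV=95.0, E0_mV=200.0, T_celsius=18.0)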

  11. The classification of flaring states of blazars

    NASA Astrophysics Data System (ADS)

    Resconi, E.; Franco, D.; Gross, A.; Costamante, L.; Flaccomio, E.

    2009-08-01

    Aims: The time evolution of the electromagnetic emission from blazars, in particular high-frequency peaked sources (HBLs), displays irregular activity that has not yet been understood. In this work we report a methodology capable of characterizing the time behavior of these variable objects. Methods: The maximum likelihood blocks (MLB) method is a model-independent estimator that subdivides the light curve into time blocks whose length and amplitude are compatible with states of constant emission rate of the observed source. The MLB method yields the statistical significance of rate variations and strongly suppresses noise fluctuations in the light curves. We applied the MLB method for the first time to the long-term X-ray light curves (RXTE/ASM) of Mkn 421, Mkn 501, 1ES 1959+650, and 1ES 2155-304, covering more than 10 years of observational data (1996-2007). Using the MLB interpretation of the RXTE/ASM data, the integrated time-flux distribution is determined for each source considered. We identify in these distributions the characteristic level as well as the flaring states of the blazars. Results: All the distributions show a significant component at negative flux values, most probably caused by an uncertainty in the background subtraction and by intrinsic fluctuations of RXTE/ASM. This effect concerns short time observations in particular. To quantify the probability that the intrinsic fluctuations give rise to a false identification of a flare, we study a population of very faint sources and their integrated time-flux distribution. We determine the duty cycle, i.e., the fraction of time a source spent in the flaring state, for Mkn 421, Mkn 501, 1ES 1959+650, and 1ES 2155-304. Moreover, we study the random coincidences between flares and generic sporadic events such as high-energy neutrinos or flares at other wavelengths.

  12. Coast of California Storm and Tidal Waves Study. Southern California Coastal Processes Data Summary,

    DTIC Science & Technology

    1986-02-01

    distribution of tracers injected on the beach. The suspended load was obtained from in situ measurements of the water column in the surf zone (Zampol and...wind waves. 3.2.2 Wave Climate There are relatively few in situ long-term measurements of the deep ocean (i.e. unaffected by the channel islands and...climate parameters and were not intended for that purpose. In the literature reviewed, the principal source of long-term in situ measurements is the

  13. Soundscapes

    DTIC Science & Technology

    2013-09-30

    DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. Soundscapes, Michael B... models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on... APPROACH: The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that have been collected

  14. Soundscapes

    DTIC Science & Technology

    2012-09-30

    DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. Soundscapes, Michael B... models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on... APPROACH: The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that have been collected

  15. Polarization and long-term variability of Sgr A* X-ray echo

    NASA Astrophysics Data System (ADS)

    Churazov, E.; Khabibullin, I.; Ponti, G.; Sunyaev, R.

    2017-06-01

    We use a model of the molecular gas distribution within ~100 pc from the centre of the Milky Way (Kruijssen, Dale & Longmore) to simulate the time evolution and polarization properties of the reflected X-ray emission associated with past outbursts from Sgr A*. While this model is too simple to describe the complexity of the true gas distribution, it illustrates the importance and power of long-term observations of the reflected emission. We show that the variable part of the X-ray emission observed by Chandra and XMM-Newton from prominent molecular clouds is well described by a pure reflection model, providing strong support for the reflection scenario. While the identification of Sgr A* as the primary source of this reflected emission is already a very appealing hypothesis, a decisive test of this model can be provided by future X-ray polarimetric observations, which will allow placing constraints on the location of the primary source. In addition, X-ray polarimeters (e.g., XIPE) have sufficient sensitivity to constrain the line-of-sight positions of molecular complexes, removing a major uncertainty in the model.

  16. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum, giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
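
    The rescaling trick described above can be reproduced compactly in log space. The Python sketch below (not the original C) accumulates the Poisson terms as log-probabilities so that neither the exponential nor the factorials overflow:

      import math

      def cumulative_poisson(n, lam):
          # P(X <= n) for X ~ Poisson(lam). Each term's logarithm is
          # i log(lam) - lam - lgamma(i+1); the sum is taken relative to
          # the largest term (log-sum-exp), playing the same role as the
          # rescaling factor in CUMPOIS.
          log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1)
                       for i in range(n + 1)]
          m = max(log_terms)
          return math.exp(m + math.log(sum(math.exp(t - m)
                                           for t in log_terms)))

      print(cumulative_poisson(5, 3.0))       # ~0.9161
      print(cumulative_poisson(800, 1000.0))  # tiny, but no underflow to 0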

  17. Microwave imaging of a solar limb flare - Comparison of spectra and spatial geometry with hard X-rays

    NASA Technical Reports Server (NTRS)

    Schmahl, E. J.; Kundu, M. R.; Dennis, B. R.

    1985-01-01

    A solar limb flare was mapped using the Very Large Array (VLA) together with hard X-ray (HXR) spectral and spatial observations from the Solar Maximum Mission satellite. Microwave flux records from 2.8 to 19.6 GHz were instrumental in determining the burst spectrum, which has a maximum at 10 GHz. The flux spectrum and area of the burst sources were used to determine the number of electrons producing gyrosynchrotron emission, the magnetic field strength, and the energy distribution of gyrosynchrotron-emitting electrons. Applying the thick-target model to the HXR spectrum, the number of high energy electrons responsible for the X-ray bursts was found to be 10³⁶, and the electron energy distribution was approximately E⁻⁵, significantly different from the parameters derived from the microwave observations. The HXR imaging observations exhibit some similarities in size and structure to the first two burst sources mapped with the VLA. However, during the initial burst, the HXR source was single and lower in the corona than the double 6 cm source. The observations are explained in terms of a single loop with an isotropic high-energy electron distribution which produced the microwaves, and a larger beamed component which produced the HXR at the feet of the loop.

  18. Detection and Estimation of 2-D Distributions of Greenhouse Gas Source Concentrations and Emissions over Complex Urban Environments and Industrial Sites

    NASA Astrophysics Data System (ADS)

    Zaccheo, T. S.; Pernini, T.; Dobler, J. T.; Blume, N.; Braun, M.

    2017-12-01

    This work highlights the use of the greenhouse-gas laser imaging tomography experiment (GreenLITE™) data in conjunction with a sparse tomography approach to identify and quantify both urban and industrial sources of CO2 and CH4. The GreenLITE™ system provides a user-defined set of time-sequenced intersecting chords, or integrated column measurements, at a fixed height through a quasi-horizontal plane of interest. This plane, with unobstructed views along the lines of sight, may range from a complex industrial facility to a small city scale or urban sector. The continuous time-phased absorption measurements are converted to column concentrations and combined with a plume-based model to estimate the 2-D distribution of gas concentration over extended areas ranging from 0.04-25 km². Finally, these 2-D maps of concentration are combined with ancillary meteorological and atmospheric data to identify potential emission sources and provide first-order estimates of their associated fluxes. In this presentation, we will provide a brief overview of the systems and results from both controlled release experiments and a long-term system deployment in Paris, France. These results provide a quantitative assessment of the system's ability to detect and estimate CO2 and CH4 sources, and demonstrate its ability to perform long-term autonomous monitoring and quantification of either persistent or sporadic emissions that may have both health and safety as well as environmental impacts.

  19. Full Waveform Inversion Using Student's t Distribution: a Numerical Study for Elastic Waveform Inversion and Simultaneous-Source Method

    NASA Astrophysics Data System (ADS)

    Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki

    2015-06-01

    Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function can suffer from its weakness and sensitivity to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI to give reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
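
    The difference between the three objective functions is easiest to see side by side. This sketch (Python; the scale sigma and degrees of freedom nu are assumed values) evaluates per-residual misfits and shows the logarithmic growth that down-weights outliers under Student's t:

      import numpy as np

      def misfits(r, sigma=1.0, nu=3.0):
          # Per-residual contributions for l2, l1, and Student's t
          # objectives (constants dropped from the t log-likelihood).
          l2 = 0.5 * (r / sigma) ** 2
          l1 = np.abs(r) / sigma
          t = 0.5 * (nu + 1.0) * np.log1p((r / sigma) ** 2 / nu)
          return l2, l1, t

      r = np.array([0.1, 1.0, 10.0])   # small, typical, and outlier residuals
      for name, m in zip(("l2", "l1", "t"), misfits(r)):
          print(name, np.round(m, 2))
      # The t misfit grows only logarithmically for the outlier, so noisy
      # traces perturb the gradient far less than under the l2 norm.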

  20. A new method of optimal capacitor switching based on minimum spanning tree theory in distribution systems

    NASA Astrophysics Data System (ADS)

    Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.

    2018-03-01

    According to the radial operation characteristics of distribution systems, this paper proposes a new method for optimal capacitor switching based on the minimum spanning tree. Firstly, taking minimal active power loss as the objective function and ignoring the capacity constraints of capacitors and source, the paper uses the Prim algorithm to obtain the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors considered, the capacitors are ranked by breadth-first search. Working through this ranking from high to low, the compensation capacity of each capacitor is calculated from its power supply range. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
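    The Prim step described above can be sketched as follows; the toy feeder graph and branch weights (standing in for line losses) are assumptions for illustration, not the paper's test system.

```python
import heapq

def prim_mst(n_nodes, edges, root=0):
    """Prim's algorithm on an undirected weighted graph.

    edges maps node -> list of (weight, neighbor); weights could stand in
    for line losses so the tree approximates minimum-loss radial paths.
    Returns the spanning tree as (parent, child, weight) branches, from
    which the supply range of each source/capacitor can be read off.
    """
    visited = [False] * n_nodes
    visited[root] = True
    heap = [(w, root, v) for w, v in edges[root]]
    heapq.heapify(heap)
    tree = []
    while heap and len(tree) < n_nodes - 1:
        w, u, v = heapq.heappop(heap)
        if visited[v]:
            continue
        visited[v] = True
        tree.append((u, v, w))
        for w2, x in edges[v]:
            if not visited[x]:
                heapq.heappush(heap, (w2, v, x))
    return tree

# toy 4-bus feeder rooted at the source bus 0
edges = {0: [(1.0, 1), (4.0, 2)],
         1: [(1.0, 0), (2.0, 2), (6.0, 3)],
         2: [(4.0, 0), (2.0, 1), (3.0, 3)],
         3: [(6.0, 1), (3.0, 2)]}
print(prim_mst(4, edges))
```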

  1. The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study

    NASA Astrophysics Data System (ADS)

    Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.

    2017-01-01

    Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
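    A minimal Monte Carlo sketch of the second method is given below, assuming a normal N input, a lognormal EF and a tunable correlation; all numbers are illustrative placeholders, not the study's inventory values. The asymmetric percentile limits it prints echo the -32%/+88% style of result reported above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative values: mean N input with a normal 1-sigma, and a
# lognormal EF with an assumed median and geometric s.d.
N_mean, N_sd = 100.0, 10.0
ef_median, ef_gsd = 0.010, 1.8
rho = 0.4                              # assumed N-input/EF correlation

# correlated standard normals via Cholesky of [[1, rho], [rho, 1]]
z = rng.standard_normal((n, 2))
z[:, 1] = rho * z[:, 0] + np.sqrt(1 - rho ** 2) * z[:, 1]

N_in = N_mean + N_sd * z[:, 0]                      # normal N input
ef = ef_median * np.exp(np.log(ef_gsd) * z[:, 1])   # lognormal EF

emissions = N_in * ef
lo, med, hi = np.percentile(emissions, [2.5, 50, 97.5])
print(f"95% limits: {100 * (lo / med - 1):+.0f}% / {100 * (hi / med - 1):+.0f}%")
```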

  2. Feeding ducks, bacterial chemotaxis, and the Gini index

    NASA Astrophysics Data System (ADS)

    Peaudecerf, François J.; Goldstein, Raymond E.

    2015-08-01

    Classic experiments on the distribution of ducks around separated food sources found consistency with the "ideal free" distribution in which the local population is proportional to the local supply rate. Motivated by this experiment and others, we examine the analogous problem in the microbial world: the distribution of chemotactic bacteria around multiple nearby food sources. In contrast to the optimization of uptake rate that may hold at the level of a single cell in a spatially varying nutrient field, nutrient consumption by a population of chemotactic cells will modify the nutrient field, and the uptake rate will generally vary throughout the population. Through a simple model we study the distribution of resource uptake in the presence of chemotaxis, consumption, and diffusion of both bacteria and nutrients. Borrowing from the field of theoretical economics, we explore how the Gini index can be used as a means to quantify the inequalities of uptake. The redistributive effect of chemotaxis can lead to a phenomenon we term "chemotactic levelling," and the influence of these results on population fitness is briefly considered.
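    As a concrete illustration of the inequality measure, a standard Gini computation is sketched below; the uniform and exponential samples are stand-ins, not the model's uptake distributions.

```python
import numpy as np

def gini(x):
    """Gini index of a nonnegative sample: 0 means perfectly equal uptake;
    values near 1 mean uptake concentrated in a few cells."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # equivalent to the mean-absolute-difference definition:
    # G = sum_i (2i - n - 1) x_i / (n^2 * mean(x)) for sorted x
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * n * x.mean())

uniform = np.full(1000, 1.0)                      # "ideal free"-like equal uptake
skewed = np.random.default_rng(1).exponential(1.0, 1000)
print(gini(uniform), gini(skewed))                # ~0.0 vs ~0.5
```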

  3. A novel approach for characterizing broad-band radio spectral energy distributions

    NASA Astrophysics Data System (ADS)

    Harvey, V. M.; Franzen, T.; Morgan, J.; Seymour, N.

    2018-05-01

    We present a new broad-band radio frequency catalogue across 0.12 GHz ≤ ν ≤ 20 GHz created by combining data from the Murchison Widefield Array Commissioning Survey, the Australia Telescope 20 GHz survey, and the literature. Our catalogue consists of 1285 sources limited by S20 GHz > 40 mJy at 5σ, and contains flux density measurements (or estimates) and uncertainties at 0.074, 0.080, 0.119, 0.150, 0.180, 0.408, 0.843, 1.4, 4.8, 8.6, and 20 GHz. We fit a second-order polynomial in log-log space to the spectral energy distributions of all these sources in order to characterize their broad-band emission. For the 994 sources that are well described by a linear or quadratic model we present a new diagnostic plot arranging sources by the linear and curvature terms. We demonstrate the advantages of such a plot over the traditional radio colour-colour diagram. We also present astrophysical descriptions of the sources found in each segment of this new parameter space and discuss the utility of these plots in the upcoming era of large area, deep, broad-band radio surveys.
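    The fitting step can be sketched in a few lines; the frequencies and flux densities below are invented, and the first- and second-order coefficients stand in for the linear and curvature terms used to arrange sources on the diagnostic plot.

```python
import numpy as np

# Second-order polynomial in log-log space:
#   log10(S) = c0 + alpha * log10(nu) + q * log10(nu)**2
# alpha is the linear (spectral-index) term and q the curvature term.
nu = np.array([0.150, 0.408, 0.843, 1.4, 4.8, 8.6, 20.0])   # GHz
S = np.array([9.8, 6.1, 4.2, 3.1, 1.5, 1.05, 0.62])          # Jy (made up)

# np.polyfit returns coefficients from highest degree down
q, alpha, c0 = np.polyfit(np.log10(nu), np.log10(S), deg=2)
print(f"linear term alpha = {alpha:.2f}, curvature term q = {q:.2f}")
```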

  4. Phosphorus export across an urban to rural gradient in the Chesapeake Bay watershed

    Treesearch

    Shuiwang Duan; Sujay S. Kaushal; Peter Groffman; Lawrence E. Band; Kenneth Belt

    2012-01-01

    Watershed export of phosphorus (P) from anthropogenic sources has contributed to eutrophication in freshwater and coastal ecosystems. We explore impacts of watershed urbanization on the magnitude and export flow distribution of P along an urban-rural gradient in eight watersheds monitored as part of the Baltimore Ecosystem Study Long-Term Ecological Research site....

  5. The Sources of American Inequality, 1896-1948.

    ERIC Educational Resources Information Center

    Williamson, Jeffrey G.

    This paper discusses American long-term experience with changes in the distribution of income since the turn of the century. It supplies quantitative documentation of a pronounced secular swing in inequality. Inequality indicators were on the rise up to 1914, exhibited no trend to 1926 or 1929, and traced out a well known egalitarian leveling up to…

  6. SEMI-ANALYTIC CALCULATION OF THE TEMPERATURE DISTRIBUTION IN A PERFORATED CIRCLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, J.M.; Fowler, J.K.

    The flow of heat in a tube-in-shell fuel element is closely related to the two-dimensional heat flow in a circular region perforated by a number of circular holes. Mathematical expressions for the two-dimensional temperature distribution were obtained in terms of sources and sinks of increasing complexity located within the holes and beyond the outer circle. A computer program, TINS, which solves the temperature problem for an array of one or two rings of holes, with or without a center hole, is also described. (auth)

  7. Noise from Two-Blade Propellers

    NASA Technical Reports Server (NTRS)

    Stowell, E Z; Deming, A F

    1936-01-01

    The two-blade propeller, one of the most powerful sources of sound known, has been studied with the view of obtaining fundamental information concerning the noise emission. In order to eliminate engine noise, the propeller was mounted on an electric motor. A microphone was used to pick up the sound whose characteristics were studied electrically. The distribution of noise throughout the frequency range, as well as the spatial distribution about the propeller, was studied. The results are given in the form of polar diagrams. An appendix of common acoustical terms is included.

  8. Vorticity Transfer in Shock Wave Interactions with Turbulence and Vortices

    NASA Astrophysics Data System (ADS)

    Agui, J. H.; Andreopoulos, J.

    1998-11-01

    Time-dependent, three-dimensional vorticity measurements of shock waves interacting with grid-generated turbulence and concentrated tip vortices were conducted in a large diameter shock tube facility. Two grids of different mesh size and a NACA-0012 semi-span wing acting as a tip vortex generator were used to carry out interactions at different relative Mach numbers. The turbulence interactions produced a clear amplification of the lateral and spanwise vorticity rms, while the longitudinal component remained mostly unaffected. By comparison, the tip vortex/shock wave interactions produced a twofold increase in the rms of longitudinal vorticity. Considerable attention was given to the vorticity source terms. The mean and rms of the vorticity stretching terms dominated the dilatational compression terms by 5 to 7 orders of magnitude in all the interactions. All three signals of the stretching terms manifested very intermittent, large amplitude peak events which indicated the bursting character of the stretching process. Distributions of these signals were characterized by extremely large levels of flatness with varying degrees of skewness. These distribution patterns were found to change only slightly through the turbulence interactions. However, the tip vortex/shock wave interactions brought about significant changes in these distributions, which were associated with the abrupt structural changes of the vortex after the interaction.

  9. Transverse distribution of beam current oscillations of a 14 GHz electron cyclotron resonance ion source.

    PubMed

    Tarvainen, O; Toivanen, V; Komppula, J; Kalvas, T; Koivisto, H

    2014-02-01

    The temporal stability of oxygen ion beams has been studied with the 14 GHz A-ECR at JYFL (University of Jyvaskyla, Department of Physics). A sector Faraday cup was employed to measure the distribution of the beam current oscillations across the beam profile. The spatial and temporal characteristics of two different oscillation "modes" often observed with the JYFL 14 GHz ECRIS are discussed. It was observed that the low frequency oscillations, below 200 Hz, are distributed almost uniformly across the beam. In the high frequency oscillation "mode," with frequencies >300 Hz, the core of the beam, which carries most of the current, oscillates with a smaller amplitude than the peripheral parts of the beam. The results help to explain differences observed between the two oscillation modes in terms of the transport efficiency through the JYFL K-130 cyclotron. The dependence of the oscillation pattern on ion source parameters is a strong indication that the mechanisms driving the fluctuations are plasma effects.

  10. Availability of added sugars in Brazil: distribution, food sources and time trends.

    PubMed

    Levy, Renata Bertazzi; Claro, Rafael Moreira; Bandoni, Daniel Henrique; Mondini, Lenise; Monteiro, Carlos Augusto

    2012-03-01

    To describe the regional and socio-economic distribution of added sugar consumption in Brazil in 2002/03, particularly the food sources of sugar, and trends over the past 15 years. The study used data from Household Budget Surveys since the 1980s on the type and quantity of food and beverages bought by Brazilian families. Different indicators were analyzed: the percentage of sugar calories in total dietary energy, and the caloric shares of table sugar and of sugar added to processed food within total sugar calories. In 2002/03, 16.7% of the total energy available for consumption came from added sugar in all regional and socio-economic strata. The ratio of table sugar to sugar added to processed food decreased with increasing income. Although this proportion fell over the past 15 years, sugar added to processed food doubled, especially in terms of consumption of soft drinks and cookies. Brazilians consume more sugar than the levels recommended by the WHO, and the sources of sugar consumption have changed significantly.

  11. Stockholm Arlanda Airport as a source of per- and polyfluoroalkyl substances to water, sediment and fish.

    PubMed

    Ahrens, Lutz; Norström, Karin; Viktor, Tomas; Cousins, Anna Palm; Josefsson, Sarah

    2015-06-01

    Fire training facilities are potential sources of per- and polyfluoroalkyl substances (PFASs) to the nearby environment due to the usage of PFAS-containing aqueous fire-fighting foams (AFFFs). The multimedia distribution of perfluoroalkyl carboxylates (PFCAs), perfluoroalkyl sulfonates (PFSAs), perfluorooctanesulfonamide (PFOSA) and 6:2 fluorotelomer sulfonate (FTSA) was investigated near a fire training facility at Stockholm Arlanda Airport in Sweden. The whole body burden of PFASs in European perch (Perca fluviatilis) was 334 ± 80 μg absolute and was distributed as follows: gonad > liver ≈ muscle > blood > gill. The bioconcentration factor (BCF) and sediment/water partition coefficient (Kd) increased by 0.6-1.7 and 0.2-0.5 log units, respectively, for each additional CF2 moiety for PFCAs and PFSAs. PFAS concentrations in water showed no significant decreasing trend between 2009 and 2013 (p > 0.05), which indicates that Stockholm Arlanda Airport may be an important source for long-term contamination of the nearby environment with PFASs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Family Income at the Bottom and at the Top: Income Sources and Family Characteristics

    PubMed Central

    Raffalovich, Lawrence E.; Monnat, Shannon M.; Tsao, Hui-shien

    2009-01-01

    Attention has recently been focused on wealth as a source of long-term economic security and on wealth ownership as a crucial aspect of the racial economic divisions in the United States. This literature, however has been concerned primarily with the wealth gap between poor and middle-class families, and between the white and black middle class. In this paper, we investigate the incomes of families at the top and bottom of the family income distribution. We examine the sources of income and the demographic characteristics of these high-income and low-income families using family level data from the 1988-2003 Current Population Surveys. We find that, at the bottom of the distribution, transfer income is the major income source; in particular, income from social security, supplemental security, and public assistance. At the top, employment income is the largest component of family income. Non-white, female, and non-married householders are disproportionately located at the bottom of the family income distribution. These families consist of both young and old adults, with high-school educations or less, in low-level service occupations. Many are disabled, many are retired. Householders at the top of the income distribution are typically male, white, and married. Householders and spouses at the top are typically middle-age, with college educations, employed in professional service and managerial occupations. We find that wealth is not an important source of income for families at the highest percentiles. The highest income families during this period in the U.S. were not a “property elite”: their income is mostly from employment. We speculate, however, that they will join the “property elite” later in the life-course as they retire and receive income from their investments. PMID:20161570

  13. Family Income at the Bottom and at the Top: Income Sources and Family Characteristics.

    PubMed

    Raffalovich, Lawrence E; Monnat, Shannon M; Tsao, Hui-Shien

    2009-12-01

    Attention has recently been focused on wealth as a source of long-term economic security and on wealth ownership as a crucial aspect of the racial economic divisions in the United States. This literature, however has been concerned primarily with the wealth gap between poor and middle-class families, and between the white and black middle class. In this paper, we investigate the incomes of families at the top and bottom of the family income distribution. We examine the sources of income and the demographic characteristics of these high-income and low-income families using family level data from the 1988-2003 Current Population Surveys. We find that, at the bottom of the distribution, transfer income is the major income source; in particular, income from social security, supplemental security, and public assistance. At the top, employment income is the largest component of family income. Non-white, female, and non-married householders are disproportionately located at the bottom of the family income distribution. These families consist of both young and old adults, with high-school educations or less, in low-level service occupations. Many are disabled, many are retired. Householders at the top of the income distribution are typically male, white, and married. Householders and spouses at the top are typically middle-age, with college educations, employed in professional service and managerial occupations. We find that wealth is not an important source of income for families at the highest percentiles. The highest income families during this period in the U.S. were not a "property elite": their income is mostly from employment. We speculate, however, that they will join the "property elite" later in the life-course as they retire and receive income from their investments.

  14. Maximization Network Throughput Based on Improved Genetic Algorithm and Network Coding for Optical Multicast Networks

    NASA Astrophysics Data System (ADS)

    Wei, Chengying; Xiong, Cuilian; Liu, Huanlin

    2017-12-01

    Maximal multicast stream algorithms based on network coding (NC) can improve throughput in wavelength-division multiplexing (WDM) networks, but the result still falls well short of the theoretical maximum throughput. Moreover, existing multicast stream algorithms do not determine the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize optical multicast throughput with NC and to determine the multicast stream distribution through hybrid chromosome construction, for multicast with a single source and multiple destinations. The proposed hybrid chromosomes combine binary chromosomes, which represent the optical multicast routing, with integer chromosomes, which indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination receives the maximum number of decodable multicast streams. Simulation results show that the proposed method is far superior to typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
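    A minimal sketch of the hybrid chromosome idea is given below; the gene sizes and the stand-in fitness are assumptions for illustration, since a real fitness must run the network-coding decodability check on the routed subgraph.

```python
import numpy as np

rng = np.random.default_rng(42)
N_LINKS, N_DESTS, MAX_STREAMS = 12, 3, 4

def random_individual():
    """Hybrid chromosome: a binary part selecting links for the multicast
    routing, and an integer part assigning a stream count per destination."""
    routing = rng.integers(0, 2, N_LINKS)                 # binary genes
    streams = rng.integers(1, MAX_STREAMS + 1, N_DESTS)   # integer genes
    return routing, streams

def fitness(individual):
    """Placeholder fitness: reward streams delivered to every destination,
    lightly penalize link usage. A real implementation would verify NC
    decodability on the subgraph selected by the binary genes."""
    routing, streams = individual
    return streams.sum() - 0.1 * routing.sum()

population = [random_individual() for _ in range(20)]
best = max(population, key=fitness)
print("best fitness:", fitness(best))
```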

  15. Polybrominated diphenyl ethers in residential and agricultural soils from an electronic waste polluted region in South China: distribution, compositional profile, and sources.

    PubMed

    Zhang, Shaohui; Xu, Xijin; Wu, Yousheng; Ge, Jingjing; Li, Weiqiu; Huo, Xia

    2014-05-01

    A detailed investigation was conducted to understand the concentration, distribution, profile and possible sources of polybrominated diphenyl ethers (PBDEs) in residential and agricultural soils from Guiyu, Shantou, China, one of the largest electronic waste (e-waste) recycling and dismantling areas in the world. Ten PBDEs were analyzed in 46 surface soil samples in terms of individual and total concentrations, together with soil organic matter concentrations. Much higher concentrations of total PBDEs were found in the residential areas (more than 2000 ng g⁻¹), exhibiting a clear urban source, while in the agricultural areas concentrations were lower than 1500 ng g⁻¹. PBDE-209 was the most dominant congener among the study sites, indicating the prevalence of commercial deca-PBDE. However, signature congeners from commercial octa-PBDE were also found. The total PBDE concentrations were significantly correlated with each individual PBDE. Principal component analysis indicated that PBDEs were mainly distributed in three groups according to the number of bromine atoms on the phenyl rings and their potential source. This study showed that informal e-waste recycling has already introduced PBDEs as pollutants into surrounding areas, which warrants an urgent investigation into the transport of PBDEs in the soil-plant system of agricultural areas. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMS can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter and depends on many properties of the crust, e.g., modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution has been proposed. We applied this modified centroid method to aeromagnetic data of the central Indian region, selecting 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values range from as shallow as the middle crust in the southwestern Deccan trap to probably deeper than the Moho in the Chhatisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places it is shallower than the Moho. The DBMS indicates the complex nature of the Indian crust.
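    The centroid calculation can be sketched as follows, hedged as the standard spectral-slope formulation with an assumed fractal correction exponent beta; the wavenumber windows and synthetic slab spectrum are illustrative choices, not the paper's processing parameters.

```python
import numpy as np

def dbms_centroid(k, A, beta=0.0, k_lo=0.02, k_hi=0.2):
    """Centroid estimate of the depth to the bottom of magnetic sources.

    k: radial wavenumber (rad/km); A: radially averaged amplitude spectrum.
    beta: fractal correction exponent (beta = 0 recovers the conventional
    random uncorrelated source assumption; the modification uses beta > 0).
    """
    A = A * k ** (beta / 2.0)       # undo an assumed k^-beta source scaling
    lo, hi = k < k_lo, k > k_hi
    # top depth Zt from the high-wavenumber slope of ln A(k)
    zt = -np.polyfit(k[hi], np.log(A[hi]), 1)[0]
    # centroid depth Z0 from the low-wavenumber slope of ln(A(k)/k)
    z0 = -np.polyfit(k[lo], np.log(A[lo] / k[lo]), 1)[0]
    return 2.0 * z0 - zt            # Zb = 2*Z0 - Zt

# synthetic spectrum for a magnetic slab between 2 km and 30 km depth;
# the finite low-k window biases the recovered base slightly shallow
k = np.linspace(0.005, 0.5, 400)
A = np.exp(-2.0 * k) - np.exp(-30.0 * k)
print(f"DBMS ~ {dbms_centroid(k, A):.0f} km")
```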

  17. Glacial lakes in Austria - Distribution and formation since the Little Ice Age

    NASA Astrophysics Data System (ADS)

    Buckel, J.; Otto, J. C.; Prasicek, G.; Keuschnig, M.

    2018-05-01

    Glacial lakes constitute a substantial part of the legacy of vanishing mountain glaciation and act as water storage, sediment traps and sources of both natural hazards and leisure activities. For these reasons, they receive growing attention from scientists and society. However, while the evolution of glacial lakes has been studied intensively over timescales tied to remote sensing-based approaches, the longer-term perspective has been omitted due to a lack of suitable data sources. We mapped and analyzed the spatial distribution of glacial lakes in the Austrian Alps. We trace the development of the number and area of glacial lakes in the Austrian Alps since the Little Ice Age (LIA) based on a unique combination of a lake inventory and an extensive record of glacier retreat. We find that bedrock-dammed lakes are the dominant lake type in the inventory. Bedrock- and moraine-dammed lakes populate the highest landscape domains, located in cirques and hanging valleys. We observe lakes embedded in glacial deposits at lower locations, on average below 2000 m a.s.l. In general, the distribution of glacial lakes over elevation reflects glacier erosional and depositional dynamics rather than the distribution of total area. The rate of formation of new glacial lakes (number, area) has continuously accelerated over time, with present rates showing an eight-fold increase since the LIA. At the same time the total glacier area decreased by two-thirds. This development coincides with a long-term trend of rising temperatures and a significant stepping up of this trend within the last 20 years in the Austrian Alps.

  18. Characterizing open and non-uniform vertical heat sources: towards the identification of real vertical cracks in vibrothermography experiments

    NASA Astrophysics Data System (ADS)

    Castelo, A.; Mendioroz, A.; Celorrio, R.; Salazar, A.; López de Uralde, P.; Gorosmendi, I.; Gorostegui-Colinas, E.

    2017-05-01

    Lock-in vibrothermography is used to characterize vertical kissing and open cracks in metals. In this technique the crack heats up during ultrasound excitation, due mainly to friction between the defect's faces. We have solved the inverse problem of determining the heat source distribution produced at cracks under amplitude-modulated ultrasound excitation, which is ill-posed; as a consequence, the minimization of the residual is unstable. We have stabilized the algorithm by introducing a penalty term based on the Total Variation functional. In the inversion, we combine amplitude and phase surface temperature data obtained at several modulation frequencies. Inversions of synthetic data with added noise indicate that compact heat sources are characterized accurately and that their upper contours can be retrieved for shallow heat sources. The overall shape of open and homogeneous semicircular strip-shaped heat sources representing open half-penny cracks can also be retrieved, but the reconstruction of the deeper end of the heat source loses contrast. Angle-, radius- and depth-dependent inhomogeneous heat flux distributions within these semicircular strips can also be qualitatively characterized. Reconstructions of experimental data taken on samples containing calibrated heat sources confirm the predictions from reconstructions of synthetic data. We also present inversions of experimental data obtained from a real welded Inconel 718 specimen. The results are in good qualitative agreement with the results of liquid penetrant testing.
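    The role of the penalty term can be illustrated with a minimal objective of the kind described, hedged as a generic Total Variation-regularized least-squares misfit rather than the authors' exact functional; the toy blurring operator and box-shaped source are assumptions.

```python
import numpy as np

def tv_objective(q, A, d, lam):
    """Penalized misfit for a discretized heat-source profile q: the data
    misfit ||A q - d||^2 plus a Total Variation penalty lam * sum |dq|.
    The TV term stabilizes the ill-posed inversion while still allowing
    sharp edges in the reconstructed source, unlike a smoothness penalty."""
    r = A @ q - d
    return r @ r + lam * np.sum(np.abs(np.diff(q)))

# toy forward operator (Gaussian blurring) and data for a box-shaped source
n = 50
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
q_true = np.zeros(n)
q_true[20:30] = 1.0
d = A @ q_true
print(tv_objective(q_true, A, d, lam=0.1))   # zero misfit + TV of the box
```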

  19. Non-Point Source Pollutant Load Variation in Rapid Urbanization Areas by Remote Sensing, GIS and the L-THIA Model: A Case in Bao'an District, Shenzhen, China.

    PubMed

    Li, Tianhong; Bai, Fengjiao; Han, Peng; Zhang, Yuanyan

    2016-11-01

    Urban sprawl is a major driving force that alters local and regional hydrology and increases non-point source pollution. Using the Bao'an District in Shenzhen, China, a typical rapid urbanization area, as the study area and land-use change maps from 1988 to 2014 that were obtained by remote sensing, the contributions of different land-use types to NPS pollutant production were assessed with a localized long-term hydrologic impact assessment (L-THIA) model. The results show that the non-point source pollution load changed significantly both in terms of magnitude and spatial distribution. The loads of chemical oxygen demand, total suspended substances, total nitrogen and total phosphorus were affected by the interactions between event mean concentration and the magnitude of changes in land-use acreages and the spatial distribution. From 1988 to 2014, the loads of chemical oxygen demand, suspended substances and total phosphorus showed clearly increasing trends with rates of 132.48 %, 32.52 % and 38.76 %, respectively, while the load of total nitrogen decreased by 71.52 %. The immigrant population ratio was selected as an indicator to represent the level of rapid urbanization and industrialization in the study area, and a comparison analysis of the indicator with the four non-point source loads demonstrated that the chemical oxygen demand, total phosphorus and total nitrogen loads are linearly related to the immigrant population ratio. The results provide useful information for environmental improvement and city management in the study area.
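    The core L-THIA calculation pairs curve-number runoff with land-use event mean concentrations (EMCs); a minimal sketch is given below, with the CN, EMC and rainfall values as placeholders rather than the study's calibrated inputs.

```python
# SCS curve-number runoff (the hydrologic core of L-THIA) combined with an
# event mean concentration (EMC) to turn runoff volume into a pollutant load.
def runoff_mm(P_mm, CN):
    """Daily runoff depth from the SCS curve-number relation (metric form)."""
    S = 25400.0 / CN - 254.0         # potential retention (mm)
    Ia = 0.2 * S                     # initial abstraction
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm + 0.8 * S)

def annual_load_kg(daily_rain_mm, CN, area_ha, emc_mg_per_L):
    """Annual NPS load: summed runoff volume times the land-use EMC."""
    runoff_m = sum(runoff_mm(p, CN) for p in daily_rain_mm) / 1000.0
    volume_L = runoff_m * area_ha * 10_000 * 1000.0   # m * m^2 -> L
    return volume_L * emc_mg_per_L / 1e6              # mg -> kg

rain = [0, 12, 35, 0, 8, 60, 0, 22]                   # toy daily record (mm)
print(annual_load_kg(rain, CN=85, area_ha=120, emc_mg_per_L=55))
```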

  20. Non-Point Source Pollutant Load Variation in Rapid Urbanization Areas by Remote Sensing, GIS and the L-THIA Model: A Case in Bao'an District, Shenzhen, China

    NASA Astrophysics Data System (ADS)

    Li, Tianhong; Bai, Fengjiao; Han, Peng; Zhang, Yuanyan

    2016-11-01

    Urban sprawl is a major driving force that alters local and regional hydrology and increases non-point source pollution. Using the Bao'an District in Shenzhen, China, a typical rapid urbanization area, as the study area and land-use change maps from 1988 to 2014 that were obtained by remote sensing, the contributions of different land-use types to NPS pollutant production were assessed with a localized long-term hydrologic impact assessment (L-THIA) model. The results show that the non-point source pollution load changed significantly both in terms of magnitude and spatial distribution. The loads of chemical oxygen demand, total suspended substances, total nitrogen and total phosphorus were affected by the interactions between event mean concentration and the magnitude of changes in land-use acreages and the spatial distribution. From 1988 to 2014, the loads of chemical oxygen demand, suspended substances and total phosphorus showed clearly increasing trends with rates of 132.48 %, 32.52 % and 38.76 %, respectively, while the load of total nitrogen decreased by 71.52 %. The immigrant population ratio was selected as an indicator to represent the level of rapid urbanization and industrialization in the study area, and a comparison analysis of the indicator with the four non-point source loads demonstrated that the chemical oxygen demand, total phosphorus and total nitrogen loads are linearly related to the immigrant population ratio. The results provide useful information for environmental improvement and city management in the study area.

  1. Amplitude and Recurrence Time of LP activity at Mt. Etna, Italy

    NASA Astrophysics Data System (ADS)

    Cauchie, Léna; Saccorotti, Gilberto; Bean, Christopher

    2013-04-01

    Long-Period (LP) activity is observed on many volcanoes worldwide and is thought to be associated with resonant oscillations of subsurface, fluid-filled cracks and conduits. Nonetheless, the actual source mechanism that originates the resonance is still unclear. Different models have been proposed so far, including (i) fluid flow instabilities, such as periodic degassing, and (ii) brittle failure in viscous magmas. Since LP activity usually precedes and accompanies volcanic eruptions, understanding these sources is crucial for hazard assessment and eruption early warning. This work aims at improving the understanding of the LP source mechanism through a statistical analysis of detailed LP catalogues. The behaviour of LP activity is compared with the empirical laws governing earthquake recurrence (e.g., the Gutenberg-Richter [GR] and Gamma-law distributions), in order to understand what relationships, if any, exist between these two apparently different earthquake classes. In particular, about 13,000 events were detected on Mount Etna in August 2005 through a STA/LTA method. During this period, the volcano did not present particular signs of unrest. LP activity was sustained in time over the whole period of analysis. From the analysis of the directional properties, it turns out that the events of this first catalogue propagate from two distinct sources. Furthermore, the events exhibit a high degree of waveform similarity, which provides a criterion for classification and source separation. The events were then grouped into families of comparable waveforms, resulting also in a separation of their source locations. We then used template signals of each family for matched-filtering of the continuous data streams, in order to detect small-amplitude events previously missed by the STA/LTA triggering method. This procedure allowed for a significant enrichment of the catalogues. The retrieved amplitude distributions, similar for both families, differ significantly from the Gutenberg-Richter law, and the inter-event time distributions do not follow a typical Gamma law. In order to compare these results with a catalogue for which the source mechanism is well established, we applied the same analysis procedure to a dataset from Stromboli Volcano, where LP activity is closely related to VLP (Very-Long-Period) pulses, in turn associated with the summit explosions. Again, catalogues of thousands of LP events were obtained over one month of seismic records (July 2011). Our results indicate a similar behaviour in terms of both amplitude and inter-event time distributions with respect to what was observed at Mt. Etna. This suggests that Etna's LP data are likely related to a degassing process occurring at depth. Nonetheless, further studies are needed in order to quantify the time recurrence and amplitude distribution of brittle failure in viscous, stressed magmas. Hopefully, these steps will lead to an improved understanding of LP activity in different volcanic contexts, in turn clarifying its significance in terms of eruption forecasting.
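    For reference, the Gutenberg-Richter comparison invoked above can be made concrete with the standard maximum-likelihood b-value estimate (Aki 1965); the synthetic catalogue below is illustrative, not the Etna data.

```python
import numpy as np

def aki_b_value(magnitudes, m_c):
    """Maximum-likelihood b-value of the Gutenberg-Richter law
    log10 N(>=M) = a - b*M, for events above the completeness magnitude m_c."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# GR with b = 1 implies exponentially distributed magnitude excesses
# above m_c with rate b*ln(10); draw a synthetic catalogue accordingly
rng = np.random.default_rng(3)
mags = 1.0 + rng.exponential(1.0 / np.log(10), 10_000)
print(f"recovered b ~ {aki_b_value(mags, m_c=1.0):.2f}")   # ~1.0
```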

  2. Mathematical models of Neospora caninum infection in dairy cattle: transmission and options for control.

    PubMed

    French, N P; Clancy, D; Davison, H C; Trees, A J

    1999-10-01

    The transmission and control of Neospora caninum infection in dairy cattle were examined using deterministic and stochastic models. Parameter estimates were derived from recent studies conducted in the UK and from the published literature. Three routes of transmission were considered: maternal vertical transmission with a high probability (0.95), horizontal transmission from infected cattle within the herd, and horizontal transmission from an independent external source. Putative infection via pooled colostrum was used as an example of within-herd horizontal transmission, and the recent finding that the dog is a definitive host of N. caninum supported the inclusion of an external independent source of infection. The predicted amount of horizontal transmission required to maintain infection at levels commonly observed in field studies in the UK and elsewhere was consistent with that observed in studies of post-natal seroconversion (0.85-9.0 per 100 cow-years). A stochastic version of the model was used to simulate the spread of infection in herds of 100 cattle, with a mean infection prevalence similar to that observed in UK studies (around 20%). The distributions of infected and uninfected cattle corresponded closely to Normal distributions, with S.D.s of 6.3 and 7.0, respectively. Control measures were considered by altering birth, death and horizontal transmission parameters. A policy of annual culling of infected cattle very rapidly reduced the prevalence of infection, and was shown to be the most effective method of control in the short term. Not breeding replacements from infected cattle was also effective in the short term, particularly in herds with a higher turnover of cattle. However, the long-term effectiveness of these measures depended on the amount and source of horizontal infection. If the level of within-herd transmission was above a critical threshold, then a combination of reducing within-herd transmission and blocking external sources of transmission was required to permanently eliminate infection.
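    A toy deterministic version of such a model, with the three transmission routes named above, is sketched below; all rate values are invented for illustration and are not the paper's parameter estimates.

```python
import numpy as np
from scipy.integrate import odeint

def neospora(y, t, b, d, p, beta, eps):
    """Toy two-compartment model: vertical transmission with probability p,
    within-herd horizontal transmission at rate beta*I, and an independent
    external source at constant rate eps. With b = d the herd size is stable."""
    S, I = y
    births_S = b * (S + (1.0 - p) * I)   # uninfected calves
    births_I = b * p * I                 # congenitally infected calves
    infection = (beta * I + eps) * S
    dS = births_S - infection - d * S
    dI = births_I + infection - d * I
    return [dS, dI]

t = np.linspace(0.0, 50.0, 501)          # years
params = (0.25, 0.25, 0.95, 0.002, 0.01)  # b, d, p, beta, eps (illustrative)
sol = odeint(neospora, [80.0, 20.0], t, args=params)
print(f"prevalence after 50 y: {sol[-1, 1] / sol[-1].sum():.1%}")
```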

  3. Turbidite carbon distribution by Ramped PyrOx, Astoria Canyon

    NASA Astrophysics Data System (ADS)

    Childress, L. B.; Galy, V.; McNichol, A. P.

    2017-12-01

    The magnitude and nature of carbon preserved in marine sediments can be affected by long-term processes such as climate change and tectonic transport; preservation of carbon can also be affected by short-term, episodic disturbances such as storm events, landslides, and earthquakes. In margins with active canyons, these systems can be efficient burial networks for carbon. The downslope displacement and reorganization of sediment and associated organic carbon (OC) during turbidite formation alters oxygen diffusion and the potential for aerobic oxidation, thereby modifying the redox geochemistry of the sediment package. Generally termed a 'burn-down', reactions at the subsurface oxidation front are linked to a loss of OC preservation within turbidite sequences. Still debated is the source of the OC residual within 'burn-down' events, primarily whether the preserved material is dominated by terrestrial or marine components. To better understand the significance of canyon systems and turbidite deposits in the transport, preservation, and 'burn-down' of organic carbon, samples from these systems can be studied using the Ramped PyrOx (RPO) technique. Whereas bulk radiocarbon measurements are unsuitable within turbidite deposits, RPO is well suited for characterizing the distribution of carbon sources within a turbidite interval. To complement the RPO analyses, OC and N content, stable carbon isotope composition, gamma ray attenuation bulk density, computerized tomography, and magnetic susceptibility were determined. The turbidite systems of the Cascadia Subduction Zone have been extensively studied in relation to the Holocene paleoseismic record. Gravity cores collected in 2011 aboard the R/V Wecoma capture turbidite deposits in Astoria Canyon and demonstrate characteristics of 'burn-down' intervals. RPO data from within a 15 cm turbidite interval indicate minimal variation in reactivity structure, stable carbon isotope values and radiocarbon age, suggesting a shared source of sediment input. Such similarities imply minimal source-selective OC alteration and are consistent with a singular event (e.g., a flood) associated with the late Holocene warm interval influence on the Columbia River Basin.

  4. On the structure of pressure fluctuations in simulated turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Kim, John

    1989-01-01

    Pressure fluctuations in a turbulent channel flow are investigated by analyzing a database obtained from a direct numerical simulation. Detailed statistics associated with the pressure fluctuations are presented. Characteristics associated with the rapid (linear) and slow (nonlinear) pressure are discussed. It is found that the slow pressure fluctuations are larger than the rapid pressure fluctuations throughout the channel except very near the wall, where they are about the same magnitude. This is contrary to the common belief that the nonlinear source terms are negligible compared to the linear source terms. Probability density distributions, power spectra, and two-point correlations are examined to reveal the characteristics of the pressure fluctuations. The global dependence of the pressure fluctuations and pressure-strain correlations is also examined by evaluating the integral associated with Green's function representations of them. In the wall region where the pressure-strain terms are large, most contributions to the pressure-strain terms are from the wall region (i.e., local), whereas away from the wall where the pressure-strain terms are small, contributions are global. Structures of instantaneous pressure and pressure gradients at the wall and the corresponding vorticity field are examined.
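    For reference, the rapid/slow split invoked above follows from the pressure Poisson equation; in the standard textbook form for a channel flow with mean velocity U(y) (hedged as a generic statement, not the paper's exact notation):

```latex
\frac{1}{\rho}\,\nabla^2 p' =
\underbrace{-2\,\frac{dU}{dy}\,\frac{\partial v'}{\partial x}}_{\text{rapid (linear)}}
\;-\;
\underbrace{\frac{\partial^2}{\partial x_i\,\partial x_j}\left(u_i' u_j' - \overline{u_i' u_j'}\right)}_{\text{slow (nonlinear)}}
```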

  5. Aerosol Microphysics and Radiation Integration

    DTIC Science & Technology

    2007-09-30

    http://www.nrlmry.navy.mil/flambe/ LONG-TERM GOALS: This project works toward the development and support of real-time global prognostic aerosol ... Burning Emissions (FLAMBE) project were transitioned to the Fleet Numerical Meteorology and Oceanography Center (FNMOC), Monterey, in FY07. Meteorological guidance ... Hyer, E. J. and J. S. Reid (2006), Evaluating the impact of improvements to the FLAMBE smoke source model on forecasts of aerosol distribution

  6. Plasma Radiation Source Development Program

    DTIC Science & Technology

    2006-03-01

    shell mass distributions perform better than thin shells. The dual plenum, double shell load has unique diagnostic features that enhance our ... as implosion time increases. SUBJECT TERMS: Z-pinch, x-ray diagnostics, Rayleigh-Taylor instability, pulsed power, x-ray spectroscopy, supersonic ... feature permits some very useful diagnostics that shed light on critical details of the implosion process. See Section 3 for details. We have

  7. Forensic fingerprinting of oil-spill hydrocarbons in a methanogenic environment-Mandan, ND and Bemidji, MN

    USGS Publications Warehouse

    Hostettler, F.D.; Wang, Y.; Huang, Y.; Cao, W.; Bekins, B.A.; Rostad, C.E.; Kulpa, C.F.; Laursen, Andrew E.

    2007-01-01

    In recent decades forensic fingerprinting of oil-spill hydrocarbons has emerged as an important tool for correlating oils and for evaluating their source and character. Two long-term hydrocarbon spills, an off-road diesel spill (Mandan, ND) and a crude oil spill (Bemidji, MN) experiencing methanogenic biodegradation, were previously shown to be undergoing an unexpected progression of homologous n-alkane and n-alkylated cyclohexane loss. Both exhibited degradative losses proceeding from the high-molecular-weight end of the distributions, along with transitory concentration increases of lower-molecular-weight homologs. Particularly in the case of the diesel fuel spill, these methanogenic degradative patterns can result in series distributions that mimic lower cut refinery fuels or admixture with lower cut fuels. Forensic fingerprinting in this long-term spill must therefore rely on more recalcitrant series, such as polycyclic aromatic hydrocarbon or drimane sesquiterpane profiles, to prove whether the spilled oil is single-sourced or whether there is verifiable admixture with other extraneous refinery fuels. Degradation processes impacting n-alkanes and n-alkylated ring compounds, which make these compounds unsuitable for fingerprinting, nevertheless are of interest in understanding methanogenic biodegradation. Copyright © Taylor & Francis Group, LLC.

  8. Analysis of the Reactor Physics of Low-Enrichment Fuel for the INL Advanced Test Reactor in support of RERTR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mark DeHart; William Skerjanc; Sean Morrell

    2012-06-01

    Analysis of the performance of the ATR with a LEU fuel design shows promise in terms of a core design that will yield the same neutron sources in target locations. A proposed integral cladding burnable absorber design appears to meet power profile requirements that will satisfy power distributions for safety limits. Evaluation of this fuel design is ongoing; the current work is the initial evaluation of the core performance of this fuel design with increasing burnup. Results show that LEU fuel may have a longer lifetime than HEU fuel; however, such limits may be set by the mechanical performance of the fuel rather than by available reactivity. Changes seen in the radial fuel power distribution with burnup in LEU fuel will require further study to ascertain the impact on neutron fluxes in target locations. Source terms for discharged fuel have also been studied. By its very nature, LEU fuel produces much more plutonium than is present in HEU fuel at discharge. However, the plutonium inventory appears to have little effect on the radiotoxicity or decay heat of the fuel.

  9. Detecting and analyzing soil phosphorus loss associated with critical source areas using a remote sensing approach.

    PubMed

    Lou, Hezhen; Yang, Shengtian; Zhao, Changsen; Shi, Liuhua; Wu, Linna; Wang, Yue; Wang, Zhiwei

    2016-12-15

    The detection of critical source areas (CSAs) is a key step in managing soil phosphorus (P) loss and preventing the long-term eutrophication of water bodies at regional scale. Most related studies, however, focus on a local scale, which prevents a clear understanding of the spatial distribution of CSAs for soil P loss at regional scale. Moreover, continual, long-term variation in CSAs has scarcely been reported. It is impossible to identify the factors driving the variation in CSAs, or to collect the land surface information essential for CSA detection, using conventional methodologies at regional scale. This study proposes a new regional-scale approach, based on three satellite sensors (ASTER, TM/ETM and MODIS), that was implemented successfully to detect CSAs at regional scale over 15 years (2000-2014). The approach incorporated five factors (precipitation, slope, soil erosion, land use, soil total phosphorus) that drive soil P loss from CSAs. Results show that the average area of critical phosphorus source areas (CPSAs) was 15,056 km² over the 15-year period, occupying 13.8% of the total area, with a range varying from 1.2% to 23.0%, in a representative, intensive agricultural area of China. In contrast to previous studies, we found that the locations of CSAs with P loss are spatially variable and are more dispersed in their distribution over the long term. We also found that precipitation acts as a key driving factor in the variation of CSAs at regional scale. The regional-scale method can provide scientific guidance for managing soil phosphorus loss and preventing the long-term eutrophication of water bodies at regional scale, and shows great potential for exploring factors that drive the variation in CSAs at global scale. Copyright © 2016 Elsevier B.V. All rights reserved.
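    A minimal sketch of the multi-factor overlay implied above is given below; the equal weights, random stand-in rasters and quantile threshold are assumptions for illustration, not the study's calibrated detection rule.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (100, 100)

# Stand-in rasters for the five driving factors (real inputs would come
# from ASTER/TM/MODIS products); each normalized to [0, 1].
names = ("precipitation", "slope", "soil_erosion", "land_use_risk", "soil_TP")
factors = {name: rng.random(shape) for name in names}
weights = {name: 0.2 for name in names}       # equal weights, an assumption

risk = sum(w * factors[n] for n, w in weights.items())
csa_mask = risk > np.quantile(risk, 1.0 - 0.138)   # flag top ~13.8% as CSAs
print("CSA fraction:", csa_mask.mean())
```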

  10. Sediment delivery to the Gulf of Alaska: source mechanisms along a glaciated transform margin

    USGS Publications Warehouse

    Dobson, M.R.; O'Leary, D.; Veart, M.

    1998-01-01

    Sediment delivery to the Gulf of Alaska occurs via four areally extensive deep-water fans, sourced from grounded tidewater glaciers. During periods of climatic cooling, glaciers cross a narrow shelf and discharge sediment down the continental slope. Because the coastal terrain is dominated by fjords and a narrow, high-relief Pacific watershed, deposition is dominated by channelized point-source fan accumulations, the volumes of which are primarily a function of climate. The sediment distribution is modified by a long-term tectonic translation of the Pacific plate to the north along the transform margin. As a result, the deep-water fans are gradually moved away from the climatically controlled point sources. Sets of abandoned channels record the effect of translation during the Plio-Pleistocene.

  11. Managing multicentre clinical trials with open source.

    PubMed

    Raptis, Dimitri Aristotle; Mettler, Tobias; Fischer, Michael Alexander; Patak, Michael; Lesurtel, Mickael; Eshmuminov, Dilmurodjon; de Rougemont, Olivier; Graf, Rolf; Clavien, Pierre-Alain; Breitenstein, Stefan

    2014-03-01

    Multicentre clinical trials are challenged by high administrative burden, data management pitfalls and costs. This leads to reduced enthusiasm and commitment of the physicians involved, and thus to a reluctance to conduct multicentre clinical trials. The purpose of this study was to develop a web-based open source platform to support a multi-centre clinical trial. Using Drupal, open source software distributed under the terms of the General Public License, we developed a web-based, multi-centre clinical trial management system following the design science research approach. The system was evaluated by user testing, has well supported several completed and ongoing clinical trials, and is available for free download. Open source clinical trial management systems are capable of supporting multi-centre clinical trials by enhancing efficiency, quality of data management and collaboration.

  12. Quantum connectivity optimization algorithms for entanglement source deployment in a quantum multi-hop network

    NASA Astrophysics Data System (ADS)

    Zou, Zhen-Zhen; Yu, Xu-Tao; Zhang, Zai-Chen

    2018-04-01

    The entanglement source deployment problem, which has a significant influence on quantum connectivity, is first studied in a quantum multi-hop network. Two optimization algorithms are introduced for limited entanglement sources in this paper. A deployment algorithm based on node position (DNP) improves connectivity by guaranteeing that all overlapping areas of the distribution ranges of the entanglement sources contain nodes. In addition, a deployment algorithm based on an improved genetic algorithm (DIGA) is implemented by dividing the region into grids. From the simulation results, DNP and DIGA improve quantum connectivity by 213.73% and 248.83% compared to random deployment, respectively, and the latter performs better in terms of connectivity. However, DNP is more flexible and adaptive to change, as it stops running when all nodes are covered.

  13. Understanding the electrical behavior of the action potential in terms of elementary electrical sources.

    PubMed

    Rodriguez-Falces, Javier

    2015-03-01

    A concept of major importance in human electrophysiology studies is the process by which activation of an excitable cell results in a rapid rise and fall of the electrical membrane potential, the so-called action potential. Hodgkin and Huxley proposed a model to explain the ionic mechanisms underlying the formation of action potentials. However, this model is unsuitably complex for teaching purposes. In addition, the Hodgkin and Huxley approach describes the shape of the action potential only in terms of ionic currents, i.e., it is unable to explain the electrical significance of the action potential or describe the electrical field arising from this source using basic concepts of electromagnetic theory. The goal of the present report was to propose a new model to describe the electrical behaviour of the action potential in terms of elementary electrical sources (in particular, dipoles). The efficacy of this model was tested through a closed-book written exam. The proposed model increased the ability of students to appreciate the distributed character of the action potential and also to recognize that this source spreads out along the fiber as a function of space. In addition, the new approach allowed students to realize that the amplitude and sign of the extracellular electrical potential arising from the action potential are determined by the spatial derivative of this intracellular source. The proposed model, which incorporates intuitive graphical representations, has improved students' understanding of the electrical potentials generated by bioelectrical sources and has heightened their interest in bioelectricity. Copyright © 2015 The American Physiological Society.
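    The relationship described above between the extracellular potential and the spatial derivatives of the intracellular source can be sketched numerically; the Gaussian-shaped action potential and all constants below are illustrative, not physiological fits.

```python
import numpy as np

# Line-source sketch: the transmembrane current density is taken
# proportional to the second spatial derivative of the intracellular
# potential Vm(x) (equivalently, a distribution of dipoles given by its
# first derivative), then summed with 1/r weighting at an electrode.
x = np.linspace(-20.0, 20.0, 2001)                 # fiber axis (mm)
dx = x[1] - x[0]
vm = 100.0 * np.exp(-((x / 2.0) ** 2))             # toy intracellular AP (mV)

i_m = np.gradient(np.gradient(vm, dx), dx)         # ~ transmembrane current
electrode_x, electrode_r = 0.0, 1.5                # electrode position (mm)
r = np.hypot(x - electrode_x, electrode_r)
phi_e = np.sum(i_m / r) * dx                       # volume-conductor sum

print(f"extracellular potential (arb. units): {phi_e:.3f}")
```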

  14. A Cost to Benefit Analysis of a Next Generation Electric Power Distribution System

    NASA Astrophysics Data System (ADS)

    Raman, Apurva

    This thesis provides a cost-benefit analysis of the proposed next generation of distribution systems, the Future Renewable Electric Energy Distribution Management (FREEDM) system. With the increasing penetration of renewable energy sources onto the grid, it becomes necessary to have an infrastructure that allows for easy integration of these resources, coupled with features like enhanced reliability of the system and fast protection from faults. The Solid State Transformer (SST) and the Fault Isolation Device (FID) make up the core of the FREEDM system and carry large investment costs. Some key features of the FREEDM system include improved power flow control, compact design and unity power factor operation. Customers may observe a reduction in their electricity bill by a certain fraction for using renewable sources of generation. There is also a possibility of large subsidies given to encourage the use of renewable energy. This thesis is an attempt to quantify the benefits offered by the FREEDM system in monetary terms and to calculate the time in years required to gain a return on the investments made. The elevated cost of FIDs needs to be justified by the advantages they offer. The effect of different rates of interest on the payback period is also studied, and the calculated payback periods are assessed for viability. A comparison is made between the active power losses on a distribution feeder that uses distribution-level magnetic transformers and one that uses SSTs. The reduction in annual active power losses in the case of the feeder using SSTs is translated into annual cost savings when compared to the conventional case with magnetic transformers. Since the FREEDM system encourages operation at unity power factor, the need for installing capacitor banks to improve the power factor is eliminated, and this is reflected in cost savings. The FREEDM system also offers enhanced reliability when compared to a conventional system. The payback periods observed support the concept of introducing the FREEDM system.
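    The payback calculation at different interest rates can be sketched as a discounted cash-flow loop; the investment and savings figures below are placeholders, not the thesis's cost data.

```python
def payback_years(investment, annual_savings, rate):
    """Discounted payback: years until cumulative discounted savings
    cover the up-front investment (all figures are placeholders)."""
    cumulative, year = 0.0, 0
    while cumulative < investment:
        year += 1
        cumulative += annual_savings / (1.0 + rate) ** year
        if year > 200:
            return float("inf")      # never pays back at this rate
    return year

# e.g. SST/FID capital vs annual loss-reduction and capacitor-bank savings
for r in (0.03, 0.07, 0.10):
    print(r, payback_years(investment=5e6, annual_savings=4e5, rate=r))
```

    Note how the highest rate makes the investment unrecoverable: once the interest rate exceeds the savings-to-investment ratio, the discounted savings can never sum to the capital cost, which is exactly the sensitivity the thesis studies.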

  15. Size distribution and coating thickness of black carbon from the Canadian oil sands operations

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan; Li, Shao-Meng; Gordon, Mark; Liu, Peter

    2018-02-01

    Black carbon (BC) plays an important role in the Earth's climate system. However, parameterizations of BC size and mixing state have not been well addressed in aerosol-climate models, introducing substantial uncertainties into the estimation of radiative forcing by BC. In this study, we focused on BC emissions from the oil sands (OS) surface mining activities in northern Alberta, based on an aircraft campaign conducted over the Athabasca OS region in 2013. A total of 14 flights were made over the OS source area, in which the aircraft was typically flown in a four- or five-sided polygon pattern along flight tracks encircling an OS facility. Another 3 flights were performed downwind of the OS source area, each of which involved at least three intercepting locations where the well-mixed OS plume was measured along flight tracks perpendicular to the wind direction. Comparable size distributions were observed for refractory black carbon (rBC) over and downwind of the OS facilities, with rBC mass median diameters (MMDs) between ˜ 135 and 145 nm that were characteristic of fresh urban emissions. This MMD range corresponded to rBC number median diameters (NMDs) of ˜ 60-70 nm, approximately 100 % higher than the NMD settings in some aerosol-climate models. The typical in- and out-of-plume segments of a flight, which had different rBC concentrations and photochemical ages, showed consistent rBC size distributions in terms of MMD, NMD and the corresponding distribution widths. Moreover, rBC size distributions remained unchanged at different downwind distances from the source area, suggesting that atmospheric aging would not necessarily change rBC size distribution. However, aging indeed influenced rBC mixing state. Coating thickness for rBC cores in the diameter range of 130-160 nm was nearly doubled (from ˜ 20 to 40 nm) within 3 h when the OS plume was transported over a distance of 90 km from the source area.
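    The MMD-to-NMD pairing quoted above is consistent with the Hatch-Choate relation for a lognormal size distribution; the sketch below assumes an illustrative geometric standard deviation rather than a fitted value.

```python
import numpy as np

# Hatch-Choate relation for a lognormal distribution: the mass median
# diameter (MMD) and number median diameter (NMD) are linked through the
# geometric standard deviation sigma_g by MMD = NMD * exp(3 * ln(sigma_g)**2).
def nmd_from_mmd(mmd_nm, sigma_g):
    return mmd_nm * np.exp(-3.0 * np.log(sigma_g) ** 2)

# sigma_g = 1.65 is an assumed value chosen to land in the reported range
for mmd in (135.0, 145.0):
    print(f"MMD {mmd:.0f} nm -> NMD {nmd_from_mmd(mmd, sigma_g=1.65):.0f} nm")
```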

  16. Medical tourism's impacts on health worker migration in the Caribbean: five examples and their implications for global justice.

    PubMed

    Snyder, Jeremy; Crooks, Valorie A; Johnston, Rory; Adams, Krystyna; Whitmore, Rebecca

    2015-01-01

    Medical tourism is a practice where individuals cross international borders in order to access medical care. This practice can impact the global distribution of health workers by potentially reducing the emigration of health workers from destination countries for medical tourists and affecting the internal distribution of these workers. Little has been said, however, about the impacts of medical tourism on the immigration of health workers to medical tourism destinations. We discuss five patterns of medical tourism-driven health worker migration to medical tourism destinations: 1) long-term international migration; 2) long-term diasporic migration; 3) long-term migration and 'black sheep'; 4) short-term migration via time share; and 5) short-term migration via patient-provider dyad. These patterns of health worker migration have repercussions for global justice that include potential negative impacts on the following: 1) health worker training; 2) health worker distributions; 3) local provision of care; and 4) local economies. In order to address these potential negative impacts, policy makers in destination countries should work to ensure that changes in health worker training and licensure aimed at promoting the medical tourism sector are also supportive of the health needs of the domestic population. Policy makers in both source and destination countries should be aware of the effects of medical tourism on health worker flows both into and out of medical tourism destinations and work to ensure that the potential harms of these worker flows to both groups are mitigated.

  18. Do forests represent a long-term source of contaminated particulate matter in the Fukushima Prefecture?

    PubMed

    Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier

    2016-12-01

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium-contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences only resulted in a 6% (SD 3%) mean difference in relative source contributions. Further, the three different modelling approaches only resulted in a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter, indicating that they represent a long-term potential source of radiocesium-contaminated material in fallout-impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). The reservoir of radiocesium in forested landscapes in the Fukushima region represents a potential long-term source of contaminated particulate matter that will require diligent management for the foreseeable future.
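
    The distribution modelling described above un-mixes each sediment sample into source proportions from its carbon and nitrogen tracer signature. A minimal sketch of that idea as constrained non-negative least squares, with entirely hypothetical tracer values standing in for the measured source soils:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical tracer signatures (%C, %N, d13C, d15N) for the three sources;
# real values would come from the measured source soils.
sources = np.array([
    [3.5, 0.30, -27.0, 2.0],   # forest
    [1.8, 0.15, -25.0, 5.0],   # cultivated
    [0.4, 0.05, -24.0, 1.0],   # subsoil
]).T                           # rows = tracers, columns = sources

sample = np.array([1.9, 0.16, -25.3, 3.4])   # one deposited-sediment sample

# Non-negative least squares with a heavily weighted sum-to-one constraint.
A = np.vstack([sources, 100.0 * np.ones(3)])
b = np.append(sample, 100.0)
props, _ = nnls(A, b)
print(dict(zip(["forest", "cultivated", "subsoil"], props.round(2))))
```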

  19. Temperature distribution in a stellar atmosphere: diagnostic basis

    NASA Technical Reports Server (NTRS)

    Jefferies, J. T.; Morrison, N. D.

    1973-01-01

    A stellar chromosphere is considered a region where the temperature increases outward and where the temperature structure of the gas controls the shape of the spectral lines. It is shown that lines which have collision-dominated source-sink terms, like the Ca(+) and Mg(+) H and K lines, can be used to obtain the distribution of temperature with height from observed line profiles. Intrinsic emission lines and geometrical emission lines are found in spectral regions where the continuum is depressed. In visual regions, where the continuum is not depressed, emission cores in absorption lines are attributed to reflections of intrinsic emission lines.

  20. Phase 1 of the near term hybrid passenger vehicle development program, appendix A. Mission analysis and performance specification studies. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Traversi, M.; Barbarek, L. A. C.

    1979-01-01

    A handy reference for JPL minimum requirements and guidelines is presented, as well as information on the use of the fundamental information source represented by the Nationwide Personal Transportation Survey. Data on U.S. demographic statistics and highway speeds are included, along with methodology for normal parameter evaluation, synthesis of daily distance distributions, and projection of car ownership distributions. The synthesis of tentative mission quantification results, of intermediate mission quantification results, and of mission quantification parameters is considered, and 1985 in-place fleet fuel economy data are included.

  1. Drinking water quality standards and standard tests: Worldwide. (Latest citations from the Food Science and Technology Abstracts database). Published Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-06-01

    The bibliography contains citations concerning standards and standard tests for water quality in drinking water sources, reservoirs, and distribution systems. Standards from domestic and international sources are presented. Glossaries and vocabularies that concern water quality analysis, testing, and evaluation are included. Standard test methods for individual elements, selected chemicals, sensory properties, radioactivity, and other chemical and physical properties are described. Discussions for proposed standards on new pollutant materials are briefly considered. (Contains a minimum of 203 citations and includes a subject term index and title list.)

  2. Photovoltaics as a terrestrial energy source. Volume 1: An introduction

    NASA Technical Reports Server (NTRS)

    Smith, J. L.

    1980-01-01

    Photovoltaic (PV) systems are examined in terms of their potential for terrestrial application and future development. Photovoltaic technology, existing and potential photovoltaic applications, and the National Photovoltaics Program are reviewed. The competitive environment for this electrical source, affected by the presence or absence of utility-supplied power, is evaluated in terms of system prices. The roles of technological breakthroughs, directed research and technology development, learning curves, and commercial demonstrations in the National Program are discussed. The potential for photovoltaics to displace oil consumption is examined, as are the potential benefits of employing PV in either central-station or non-utility owned, small, distributed systems.

  3. Quality evaluation of carbonaceous industrial by-products and its effect on properties of autoclave aerated concrete

    NASA Astrophysics Data System (ADS)

    Fomina, E. V.; Lesovik, V. S.; Fomin, A. E.; Kozhukhova, N. I.; Lebedev, M. S.

    2018-03-01

    Argillite is a carbonaceous industrial by-product that is a potential resource for the environmentally friendly, resource-saving construction industry. In this research, the chemical and mineral composition as well as the particle size distribution of argillite were studied and used to develop autoclave aerated concrete with argillite as a partial substitute for quartz sand. The effect of argillite as a mineral admixture in autoclave aerated concrete was investigated in terms of compressive and tensile strength, density, heat conductivity, etc. The obtained results demonstrated the efficiency of argillite as an energy-saving material in autoclave construction composites.

  4. Long-Term Stability of Radio Sources in VLBI Analysis

    NASA Technical Reports Server (NTRS)

    Engelhardt, Gerald; Thorandt, Volkmar

    2010-01-01

    Positional stability of radio sources is an important requirement for modeling of only one source position for the complete length of VLBI data of presently more than 20 years. The stability of radio sources can be verified by analyzing time series of radio source coordinates. One approach is a statistical test for normal distribution of residuals to the weighted mean for each radio source component of the time series. Systematic phenomena in the time series can thus be detected. Nevertheless, an inspection of rate estimation and weighted root-mean-square (WRMS) variations about the mean is also necessary. On the basis of the time series computed by the BKG group in the frame of the ICRF2 working group, 226 stable radio sources with an axis stability of 10 μas could be identified. They include 100 ICRF2 axes-defining sources which are determined independently of the method applied in the ICRF2 working group. 29 stable radio sources with a source structure index of less than 3.0 can also be used to augment the 295 ICRF2 defining sources.
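
    Per coordinate component, the screening described above reduces to a weighted mean, a WRMS about that mean, and a normality test on the residuals. A minimal sketch under those definitions (the Shapiro-Wilk test is one possible choice of normality test; the abstract does not name the specific statistic used):

```python
import numpy as np
from scipy import stats

def stability_stats(coord, sigma):
    """Weighted mean, WRMS about the mean, and a normality test of the
    residuals for one coordinate component of a source's time series."""
    w = 1.0 / sigma ** 2
    mean = np.sum(w * coord) / np.sum(w)
    resid = coord - mean
    wrms = np.sqrt(np.sum(w * resid ** 2) / np.sum(w))
    _, pval = stats.shapiro(resid)   # p > 0.05: no evidence against normality
    return mean, wrms, pval

rng = np.random.default_rng(1)
ra_offset = 0.25 + rng.normal(0.0, 0.1, 120)    # hypothetical offsets (mas)
mean, wrms, p = stability_stats(ra_offset, np.full(120, 0.1))
print(f"mean = {mean:.3f} mas, WRMS = {wrms:.3f} mas, normality p = {p:.2f}")
```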

  5. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    NASA Astrophysics Data System (ADS)

    Zoeller, G.

    2017-12-01

    Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous and the problem of missing or misinterpreted events leads to additional problems. Taking these shortcomings into account, long-term recurrence intervals are usually unstable as long as no additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.
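
    A minimal sketch of the central computation: fix the aperiodicity from instrumental seismicity and fit only the mean recurrence interval of a Brownian Passage Time distribution to a short paleo-sequence. The interval values and the aperiodicity below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time density with mean mu and aperiodicity alpha."""
    return np.sqrt(mu / (2.0 * np.pi * alpha ** 2 * t ** 3)) * \
           np.exp(-((t - mu) ** 2) / (2.0 * mu * alpha ** 2 * t))

intervals = np.array([180.0, 240.0, 150.0, 310.0, 205.0])  # paleo intervals, yr
alpha = 0.5                    # fixed from instrumental seismicity (assumed)

nll = lambda mu: -np.sum(np.log(bpt_pdf(intervals, mu, alpha)))
res = minimize_scalar(nll, bounds=(50.0, 1000.0), method="bounded")
print(f"ML mean recurrence interval: {res.x:.0f} yr")
```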

  6. Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake

    USGS Publications Warehouse

    Hayes, Gavin P.

    2011-01-01

    On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.

  7. Geochemistry of dissolved trace elements and heavy metals in the Dan River Drainage (China): distribution, sources, and water quality assessment.

    PubMed

    Meng, Qingpeng; Zhang, Jing; Zhang, Zhaoyu; Wu, Tairan

    2016-04-01

    Dissolved trace elements and heavy metals in the Dan River drainage basin, which is the drinking water source area of the South-to-North Water Transfer Project (China), affect large numbers of people and should therefore be carefully monitored. To investigate the distribution, sources, and quality of river water, this study, integrating catchment geology and multivariate statistical techniques, was carried out on 99 river water samples collected across the Dan River drainage in 2013. The distribution of trace metal concentrations in the Dan River drainage was similar to that in the Danjiangkou Reservoir, indicating that the reservoir was significantly affected by the Dan River drainage. Moreover, our results suggested that As, Sb, Cd, Mn, and Ni were the major pollutants. We revealed extremely high concentrations of As and Sb in the Laoguan River, Cd in the Qingyou River, Mn, Ni, and Cd in the Yinhua River, As and Sb in the Laojun River, and Sb in the Dan River. According to the water quality index, water in the Dan River drainage was suitable for drinking; however, an exposure risk assessment model suggests that As and Sb in the Laojun and Laoguan rivers could pose a high risk to humans in terms of adverse health and potential non-carcinogenic effects.
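
    Non-carcinogenic exposure screening of the kind mentioned above is conventionally a hazard quotient: chronic daily intake divided by the reference dose. A sketch with standard default exposure parameters (the paper's exact parameter choices are not given in the abstract):

```python
def hazard_quotient(c_mg_per_l, rfd_mg_per_kg_day,
                    ir_l_per_day=2.0, ef_days=365, ed_years=30,
                    bw_kg=70.0, at_days=365 * 30):
    """Chronic daily intake via drinking water divided by the reference dose."""
    cdi = c_mg_per_l * ir_l_per_day * ef_days * ed_years / (bw_kg * at_days)
    return cdi / rfd_mg_per_kg_day

# e.g. 10 ug/L As against the As oral reference dose of 3e-4 mg/kg/day
print(f"HQ = {hazard_quotient(0.010, 3.0e-4):.2f}")   # ~0.95, just below 1
```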

  8. Long-term spatial and temporal microbial community dynamics in a large-scale drinking water distribution system with multiple disinfectant regimes.

    PubMed

    Potgieter, Sarah; Pinto, Ameet; Sigudu, Makhosazana; du Preez, Hein; Ncube, Esper; Venter, Stephanus

    2018-08-01

    Long-term spatial-temporal investigations of microbial dynamics in full-scale drinking water distribution systems are scarce. These investigations can reveal the process, infrastructure, and environmental factors that influence the microbial community, offering opportunities to re-think microbial management in drinking water systems. Often, these insights are missed or are unreliable in short-term studies, which are impacted by stochastic variabilities inherent to large full-scale systems. In this two-year study, we investigated the spatial and temporal dynamics of the microbial community in a large, full-scale South African drinking water distribution system that uses three successive disinfection strategies (i.e. chlorination, chloramination and hypochlorination). Monthly bulk water samples were collected from the outlet of the treatment plant and from 17 points in the distribution system spanning nearly 150 km, and the bacterial community composition was characterised by Illumina MiSeq sequencing of the V4 hypervariable region of the 16S rRNA gene. As in previous studies, Alpha- and Betaproteobacteria dominated the drinking water bacterial communities, with an increase in Betaproteobacteria post-chloramination. In contrast with previous reports, the observed richness, diversity, and evenness of the bacterial communities were higher in the winter months as opposed to the summer months in this study. In addition to temperature effects, the seasonal variations were also likely to be influenced by changes in average water age in the distribution system and corresponding changes in disinfectant residual concentrations. Spatial dynamics of the bacterial communities indicated distance decay, with bacterial communities becoming increasingly dissimilar with increasing distance between sampling locations. These spatial effects dampened the temporal changes in the bulk water community and were the dominant factor when considering the entire distribution system. However, temporal variations were consistently stronger as compared to spatial changes at individual sampling locations and demonstrated seasonality. This study emphasises the need for long-term studies to comprehensively understand the temporal patterns that would otherwise be missed in short-term investigations. Furthermore, systematic long-term investigations are particularly critical towards determining the impact of changes in source water quality, environmental conditions, and process operations on the changes in microbial community composition in the drinking water distribution system.
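
    Richness, diversity, and evenness comparisons of the kind reported above are typically computed per sample from the amplicon count table. A minimal sketch using the Shannon index and Pielou evenness (one common choice; the study may use additional metrics):

```python
import numpy as np

def shannon_and_evenness(counts):
    """Shannon index H' and Pielou evenness J' from one sample's counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    h = -np.sum(p * np.log(p))
    return h, h / np.log(p.size)

h, j = shannon_and_evenness(np.array([120, 80, 40, 20, 5, 1]))
print(f"H' = {h:.2f}, J' = {j:.2f}")
```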

  9. Factors affecting continued use of ceramic water purifiers distributed to tsunami-affected communities in Sri Lanka.

    PubMed

    Casanova, Lisa M; Walters, Adam; Naghawatte, Ajith; Sobsey, Mark D

    2012-11-01

    There is little information about continued use of point-of-use technologies after disaster relief efforts. After the 2004 tsunami, the Red Cross distributed ceramic water filters in Sri Lanka. This study determined factors associated with filter disuse and evaluated the quality of household drinking water. A cross-sectional survey of water sources and treatment, filter use and household characteristics was administered by in-person oral interview, and household water quality was tested. Multivariable logistic regression was used to model the probability of filter non-use. At the time of survey, 24% of households (107/452) did not use filters; the most common reason given was breakage (42%). The most common household water sources were taps and wells. Wells were used by 45% of filter users and 28% of non-users. Of households with taps, 75% had source water Escherichia coli in the lowest World Health Organisation risk category (<1/100 ml), compared with only 30% of households using wells. Tap households were approximately four times more likely to discontinue filter use than well households. After 2 years, 24% of households were non-users. The main factors were breakage and household water source; households with taps were more likely to stop use than households with wells. Tap water users also had higher-quality source water, suggesting that disuse is not necessarily negative and monitoring of water quality can aid decision-making about continued use. To promote continued use, disaster recovery filter distribution efforts must be joined with capacity building for long-term water monitoring, supply chains and local production.
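
    A minimal sketch of the multivariable logistic regression step, reduced to the single tap-versus-well predictor, with synthetic data chosen to echo the roughly four-fold odds of disuse reported for tap households (all numbers invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
tap = rng.integers(0, 2, size=452)            # 1 = tap household, 0 = well
p_disuse = np.where(tap == 1, 0.32, 0.11)     # assumed disuse probabilities
nonuse = rng.binomial(1, p_disuse)            # 1 = stopped using the filter

fit = sm.Logit(nonuse, sm.add_constant(tap.astype(float))).fit(disp=0)
print("odds ratio, tap vs well:", round(float(np.exp(fit.params[1])), 1))
```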

  10. Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Fure, A. D.; Jawitz, J. W.

    2006-12-01

    Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
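
    The power-function analog mentioned above ties source concentration to remaining mass, C/C0 = (M/M0)^Gamma, so the long-term dissolution history follows from a one-line mass balance. A sketch with illustrative values (not the study's):

```python
# Power-function source depletion: C/C0 = (M/M0)**gamma, coupled to the mass
# balance dM/dt = -Q*C. All values are illustrative, not the study's.
M0, C0 = 1000.0, 50.0    # initial mass (kg), initial source concentration (g/m3)
Q, gamma = 10.0, 1.5     # water flux through the source (m3/d), exponent
dt, M, t = 1.0, M0, 0.0
while M > 0.01 * M0:
    M -= dt * Q * C0 * (M / M0) ** gamma / 1000.0   # g -> kg
    t += dt
print(f"99% of the mass dissolved after ~{t:,.0f} days")
```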

  11. Long-distance quantum key distribution with imperfect devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo Piparo, Nicoló; Razavi, Mohsen

    2014-12-04

    Quantum key distribution over probabilistic quantum repeaters is addressed. We compare, under practical assumptions, two such schemes in terms of their secure key generation rate per memory, R_QKD. The two schemes under investigation are the one proposed by Duan et al. in [Nat. 414, 413 (2001)] and that of Sangouard et al. proposed in [Phys. Rev. A 76, 050301 (2007)]. We consider various sources of imperfections in the latter protocol, such as a nonzero double-photon probability for the source, dark count per pulse, channel loss and inefficiencies in photodetectors and memories, to find the rate for different nesting levels. We determine the maximum value of the double-photon probability beyond which it is not possible to share a secret key anymore. We find the crossover distance for up to three nesting levels. We finally compare the two protocols.

  12. Distribution and Geochemistry of Rare-Earth Elements in Rivers of Southern and Eastern Primorye (Far East of Russia)

    NASA Astrophysics Data System (ADS)

    Chudaev, O. V.; Bragin, I. V.; Kharitonova, N. A.; Chelnokov, G. A.

    2016-03-01

    The distribution and geochemistry of rare earth elements (REE) in anthropogenic, technogenic and natural surface waters of southern and eastern Primorye, Far East of Russia, are presented in this study. The obtained results indicated that most of the REE (up to 70%) were transported as suspended matter, with the ratio between dissolved and suspended forms varying from the source to the mouth of the rivers. It is shown that all REE (except Ce) in the sources of the rivers are predominantly present in dissolved form; however, the content of light and heavy REE is different. Short-term enrichment of light rare earth elements (LREE) caused by REE-rich runoff from waste dumps and mining is neutralized by the increase in river flow rate. Rivers in urban areas are characterized by a high content of LREE in dissolved form and a very low content in suspended form.

  13. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    NASA Astrophysics Data System (ADS)

    Karamehmedović, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-06-01

    We consider the multi-frequency inverse source problem for the scalar Helmholtz equation in the plane. The goal is to reconstruct the source term in the equation from measurements of the solution on a surface outside the support of the source. We study the problem in a certain finite dimensional setting: from measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier–Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method is implemented numerically and our theoretical findings are supported by numerical experiments.
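
    The frequency-selection criterion above rests on the positive zeros of the Bessel functions of the first kind, which are directly available numerically. A small sketch that merely tabulates them for inspection (the mapping to the paper's exact criterion is only indicative):

```python
from scipy.special import jn_zeros

# First positive zeros j_{n,m} of J_n, around which the identifiability of
# the Fourier-Bessel coefficients of angular order n is organized.
for n in range(4):
    print(f"J_{n}: first zeros at", jn_zeros(n, 3).round(3))
```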

  14. A numerical study on dual-phase-lag model of bio-heat transfer during hyperthermia treatment.

    PubMed

    Kumar, P; Kumar, Dinesh; Rai, K N

    2015-01-01

    The success of hyperthermia in the treatment of cancer depends on the precise prediction and control of temperature, so understanding the temperature distribution within living biological tissues is essential for hyperthermia treatment planning. In this paper, the dual-phase-lag model of bio-heat transfer has been studied using a Gaussian distribution source term under the most generalized boundary condition during hyperthermia treatment. An approximate analytical solution of the present problem is obtained by the finite element wavelet Galerkin method, which uses the Legendre wavelet as a basis function. Multi-resolution analysis of the Legendre wavelet localizes small-scale variations of the solution and allows fast switching of functional bases. The whole analysis is presented in dimensionless form. The dual-phase-lag model is compared with the Pennes and thermal wave models of bio-heat transfer, and large differences are found in the temperature at the hyperthermia position and in the time to achieve the hyperthermia temperature as the value of τT increases. Particular cases in which the surface is subjected to boundary conditions of the first, second and third kind are discussed in detail. The use of the dual-phase-lag model together with the finite element wavelet Galerkin solution method supports precise prediction of temperature, and the Gaussian distribution source term helps in controlling temperature during hyperthermia treatment, which makes this study more useful for clinical applications.
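
    For orientation, a minimal explicit finite-difference solution of the classical Pennes limit (both phase lags set to zero) with a Gaussian heating source; this is far simpler than the paper's finite element wavelet Galerkin treatment of the dual-phase-lag model, and all tissue parameters are generic values:

```python
import numpy as np

L, nx = 0.05, 101                          # 5 cm of tissue
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
k, rho_c = 0.5, 4.0e6                      # W/m/K and J/m3/K (generic tissue)
w_b, Ta = 2000.0, 37.0                     # perfusion sink (W/m3/K), arterial T
Q = 2.0e5 * np.exp(-((x - L / 2) ** 2) / (2 * 0.003 ** 2))  # Gaussian source
T = np.full(nx, 37.0)

dt = 0.5                                   # s, below the explicit stability limit
for _ in range(int(600 / dt)):             # 10 minutes of heating
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    T += dt * (k * lap - w_b * (T - Ta) + Q) / rho_c
    T[0] = T[-1] = 37.0                    # body-core boundaries
print(f"peak tissue temperature after 10 min: {T.max():.1f} deg C")
```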

  15. Predicting vertically-nonsequential wetting patterns with a source-responsive model

    USGS Publications Warehouse

    Nimmo, John R.; Mitchell, Lara

    2013-01-01

    Water infiltrating into soil of natural structure often causes wetting patterns that do not develop in an orderly sequence. Because traditional unsaturated flow models represent a water advance that proceeds sequentially, they fail to predict irregular development of water distribution. In the source-responsive model, a diffuse domain (D) represents flow within soil matrix material following traditional formulations, and a source-responsive domain (S), characterized in terms of the capacity for preferential flow and its degree of activation, represents preferential flow as it responds to changing water-source conditions. In this paper we assume water undergoing rapid source-responsive transport at any particular time is of negligibly small volume; it becomes sensible at the time and depth where domain transfer occurs. A first-order transfer term represents abstraction from the S to the D domain which renders the water sensible. In tests with lab and field data, for some cases the model shows good quantitative agreement, and in all cases it captures the characteristic patterns of wetting that proceed nonsequentially in the vertical direction. In these tests we determined the values of the essential characterizing functions by inverse modeling. These functions relate directly to observable soil characteristics, rendering them amenable to evaluation and improvement through hydropedologic development.
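
    A minimal sketch of the domain-transfer bookkeeping: influx enters the preferential (S) domain and becomes sensible in the diffuse (D) domain through a first-order transfer term. The rate constant and influx pulse are invented, and the toy version carries a small S-domain storage that the paper itself treats as negligible:

```python
import numpy as np

dt, k = 0.01, 1.5                    # time step (h), transfer coefficient (1/h)
t = np.arange(0.0, 6.0, dt)
q_s = np.where(t < 1.0, 10.0, 0.0)   # S-domain influx during a 1 h pulse (mm/h)
s, theta_d = 0.0, np.zeros_like(t)   # S storage; water made sensible in D (mm)
for i in range(1, t.size):
    s += dt * (q_s[i] - k * s)                 # S domain responds to the source
    theta_d[i] = theta_d[i - 1] + dt * k * s   # first-order transfer S -> D
print(f"water transferred to the matrix: {theta_d[-1]:.1f} mm of 10 mm applied")
```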

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeze, R.A.

    Many emerging remediation technologies are designed to remove contaminant mass from source zones at DNAPL sites in response to regulatory requirements. There is often concern in the regulated community as to whether mass removal actually reduces risk, or whether the small risk reductions achieved warrant the large costs incurred. This paper sets out a framework for quantifying the degree to which risk is reduced as mass is removed from shallow, saturated, low-permeability, dual-porosity, DNAPL source zones. Risk is defined in terms of meeting an alternate concentration level (ACL) at a compliance well in an aquifer underlying the source zone. The ACL is back-calculated from a carcinogenic health-risk characterization at a downstream water-supply well. Source-zone mass-removal efficiencies are heavily dependent on the distribution of mass between media (fractures, matrix) and phases (dissolved, sorbed, free product). Due to the uncertainties in currently-available technology performance data, the scope of the paper is limited to developing a framework for generic technologies rather than making risk-reduction calculations for specific technologies. Despite the qualitative nature of the exercise, results imply that very high mass-removal efficiencies are required to achieve significant long-term risk reduction with technology applications of finite duration. 17 refs., 7 figs., 6 tabs.

  17. A simple theoretical model for ⁶³Ni betavoltaic battery.

    PubMed

    Zuo, Guoping; Zhou, Jianliang; Ke, Guotu

    2013-12-01

    A numerical simulation of the energy deposition distribution in semiconductors is performed for ⁶³Ni beta particles. Results show that the energy deposition distribution exhibits an approximate exponential decay law. A simple theoretical model is developed for a ⁶³Ni betavoltaic battery based on the distribution characteristics. The correctness of the model is validated by two literature experiments. Results show that the theoretical short-circuit current agrees well with the experimental results, and the open-circuit voltage deviates from the experimental results owing to the influence of PN junction defects and the simplification of the source. The theoretical model can be applied to ⁶³Ni and ¹⁴⁷Pm betavoltaic batteries.
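
    A sketch of the simple model's logic: exponential energy deposition with depth, weighted by an assumed collection-probability profile, integrates to a short-circuit current density. All parameter values are rough placeholders, not the paper's calibrated ones:

```python
import numpy as np

q_e = 1.602e-19                    # C
w_pair = 3.6 * 1.602e-19           # J per electron-hole pair in Si
P0, lam = 1.0e-2, 3.0e-6           # incident beta power (W/m2), decay length (m)
x = np.linspace(0.0, 20e-6, 2001)
dx = x[1] - x[0]
g = (P0 / lam) * np.exp(-x / lam) / w_pair   # pair generation rate (1/m3/s)
eta = np.exp(-x / 5.0e-6)                    # assumed collection probability
j_sc = q_e * np.sum(g * eta) * dx            # A/m2
print(f"estimated short-circuit current: {j_sc * 100:.2f} uA/cm2")
```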

  18. Data-optimized source modeling with the Backwards Liouville Test–Kinetic method

    DOE PAGES

    Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, performed at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  19. 78 FR 33691 - Distribution of Source Material to Exempt Persons and to General Licensees and Revision of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... Distribution of Source Material to Exempt Persons and to General Licensees and Revision of General License and..., Distribution of Source Material to Exempt Persons and to General Licensees and Revision of General License and Exemptions (Distribution of Source Material Rule). The Distribution of Source Material Rule amended the NRC's...

  20. [Dilemmas of health financing].

    PubMed

    Herrera Zárate, M; González Torres, R

    1989-01-01

    The economic crisis has had a profound effect on the finances of health services in Mexico. The expenditure on health has decreased, both in absolute terms and in relation to the gross national product. Funding problems have been aggravated by inequities in budget distribution: social security institutions have been favored; the geographical distribution of resources is concentrated in the central areas of the country and in the more developed states, and curative health care has prevailed over preventive medicine. Administrative inefficiency hinders even more the appropriate utilization of resources. Diversification of funding sources has been proposed, through external debt, local funding, and specific health taxes. But these proposals are questionable. The high cost of the debt service has reduced international credits as a source of financing. Resource concentration at the federal level and the different compromises related to the economic solidarity pact have also diminished the potential of local state financing. On the other hand, a special health tax is not viable within the current fiscal framework. The alternatives are better budget planning, a change in the institutional and regional distribution of resources, and improvement in the administrative mechanisms of funding.

  1. Acceleration of auroral electrons in parallel electric fields

    NASA Technical Reports Server (NTRS)

    Kaufmann, R. L.; Walker, D. N.; Arnoldy, R. L.

    1976-01-01

    Rocket observations of auroral electrons are compared with the predictions of a number of theoretical acceleration mechanisms that involve an electric field parallel to the earth's magnetic field. The theoretical models are discussed in terms of required plasma sources, the location of the acceleration region, and properties of necessary wave-particle scattering mechanisms. We have been unable to find any steady state scatter-free electric field configuration that predicts electron flux distributions in agreement with the observations. The addition of a fluctuating electric field or wave-particle scattering several thousand kilometers above the rocket can modify the theoretical flux distributions so that they agree with measurements. The presence of very narrow energy peaks in the flux contours implies a characteristic temperature of several tens of electron volts or less for the source of field-aligned auroral electrons and a temperature of several hundred electron volts or less for the relatively isotropic 'monoenergetic' auroral electrons. The temperature of the field-aligned electrons is more representative of the magnetosheath or possibly the ionosphere as a source region than of the plasma sheet.

  2. Diamond-based single-photon emitters

    NASA Astrophysics Data System (ADS)

    Aharonovich, I.; Castelletto, S.; Simpson, D. A.; Su, C.-H.; Greentree, A. D.; Prawer, S.

    2011-07-01

    The exploitation of emerging quantum technologies requires efficient fabrication of key building blocks. Sources of single photons are extremely important across many applications as they can serve as vectors for quantum information—thereby allowing long-range (perhaps even global-scale) quantum states to be made and manipulated for tasks such as quantum communication or distributed quantum computation. At the single-emitter level, quantum sources also afford new possibilities in terms of nanoscopy and bio-marking. Color centers in diamond are prominent candidates to generate and manipulate quantum states of light, as they are a photostable solid-state source of single photons at room temperature. In this review, we discuss the state of the art of diamond-based single-photon emitters and highlight their fabrication methodologies. We present the experimental techniques used to characterize the quantum emitters and discuss their photophysical properties. We outline a number of applications including quantum key distribution, bio-marking and sub-diffraction imaging, where diamond-based single emitters are playing a crucial role. We conclude with a discussion of the main challenges and perspectives for employing diamond emitters in quantum information processing.

  3. Open source software integrated into data services of Japanese planetary explorations

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.

    2015-12-01

    Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data with a simple method such as HTTP directory listing for long-term preservation, while DARTS tries to provide rich web applications for ease of access with modern web technologies based on open source software. This presentation showcases the availability of open source software through our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS). As a WMS server, the open source software MapServer is adopted. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for the SELENE data. The main purpose of this application is public outreach. NASA World Wind Java SDK is used for development. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations. It uses Highcharts to draw graphs on web browsers. Flow is a tool to simulate the field of view of an instrument onboard a spacecraft. This tool itself is open source software developed by JAXA/ISAS, and the license is the BSD 3-Clause License. The SPICE Toolkit is essential to compile Flow. The SPICE Toolkit is also open source software developed by NASA/JPL, and the website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool to integrate DARTS services.

  4. Bait distribution among multiple colonies of Pharaoh ants (hymenoptera: Formicidae).

    PubMed

    Oi, D H; Vail, K M; Williams, D F

    2000-08-01

    Pharaoh ant, Monomorium pharaonis (L.), infestations often consist of several colonies located at different nest sites. To achieve control, it is desirable to suppress or eliminate the populations of a majority of these colonies. We compared the trophallactic distribution and efficacy of two ant baits, with different modes of action, among groups of four colonies of Pharaoh ants. Baits contained either the metabolic-inhibiting active ingredient hydramethylnon or the insect growth regulator (IGR) pyriproxyfen. Within 3 wk, the hydramethylnon bait reduced worker and brood populations by at least 80%, and queen reductions ranged between 73 and 100%, when nests were in proximity (within 132 cm) to the bait source. However, these nest sites were reoccupied by ants from other colonies located further from the bait source. The pyriproxyfen bait was distributed more thoroughly to all nest locations, with worker populations gradually declining by 73% at all nest sites after 8 wk. Average queen reductions ranged from 31 to 49% for all nest sites throughout the study. Even though some queens survived, brood reductions were rapid in the pyriproxyfen treatment, with reductions of 95% at all locations by week 3. Unlike the metabolic inhibitor, the IGR did not kill adult worker ants quickly; thus, more surviving worker ants were available to distribute the bait to all colonies located at different nest sites. As a result, from a single bait source, the slow-acting bait toxicant provided gradual but long-term control, whereas the fast-acting bait toxicant provided rapid, localized control for a shorter duration.

  5. Local spectrum analysis of field propagation in an anisotropic medium. Part I. Time-harmonic fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    The phase-space beam summation is a general analytical framework for local analysis and modeling of radiation from extended source distributions. In this formulation, the field is expressed as a superposition of beam propagators that emanate from all points in the source domain and in all directions. In this Part I of a two-part investigation, the theory is extended to include propagation in anisotropic medium characterized by a generic wave-number profile for time-harmonic fields; in a companion paper [J. Opt. Soc. Am. A 22, 1208 (2005)], the theory is extended to time-dependent fields. The propagation characteristics of the beam propagators in a homogeneous anisotropic medium are considered. With use of Gaussian windows for the local processing of either ordinary or extraordinary electromagnetic field distributions, the field is represented by a phase-space spectral distribution in which the propagating elements are Gaussian beams that are formulated by using Gaussian plane-wave spectral distributions over the extended source plane. By applying saddle-point asymptotics, we extract the Gaussian beam phenomenology in the anisotropic environment. The resulting field is parameterized in terms of the spatial evolution of the beam curvature, beam width, etc., which are mapped to local geometrical properties of the generic wave-number profile. The general results are applied to the special case of uniaxial crystal, and it is found that the asymptotics for the Gaussian beam propagators, as well as the physical phenomenology attached, perform remarkably well.

  6. Observations of a free-energy source for intense electrostatic waves. [in upper atmosphere near upper hybrid resonance frequency

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.; Frank, L. A.; Gurnett, D. A.; Burek, B. G.; Ashour-Abdalla, M.

    1980-01-01

    Significant progress has been made in understanding intense electrostatic waves near the upper hybrid resonance frequency in terms of the theory of multiharmonic cyclotron emission using a classical loss-cone distribution function as a model. Recent observations by Hawkeye 1 and GEOS 1 have verified the existence of loss-cone distributions in association with the intense electrostatic wave events; however, other observations by Hawkeye and ISEE have indicated that loss cones are not always observable during the wave events, and in fact other forms of free energy may also be responsible for the instability. Now, for the first time, a positively sloped feature in the perpendicular distribution function has been uniquely identified with intense electrostatic wave activity. Correspondingly, we suggest that the theory is flexible under substantial modifications of the model distribution function.

  7. Early outbreak detection by linking health advice line calls to water distribution areas retrospectively demonstrated in a large waterborne outbreak of cryptosporidiosis in Sweden.

    PubMed

    Bjelkmar, Pär; Hansen, Anette; Schönning, Caroline; Bergström, Jakob; Löfdahl, Margareta; Lebbad, Marianne; Wallensten, Anders; Allestam, Görel; Stenmark, Stephan; Lindh, Johan

    2017-04-18

    In the winter and spring of 2011 a large outbreak of cryptosporidiosis occurred in Skellefteå municipality, Sweden. This study summarizes the outbreak investigation in terms of outbreak size, duration, clinical characteristics, possible source(s) and the potential for earlier detection using calls to a health advice line. The investigation included two epidemiological questionnaires and microbial analysis of samples from patients, water and other environmental sources. In addition, a retrospective study based on phone calls to a health advice line was performed by comparing patterns of phone calls between different water distribution areas. Our analyses showed that approximately 18,500 individuals were affected by a waterborne outbreak of cryptosporidiosis in Skellefteå in 2011. This makes it the second largest outbreak of cryptosporidiosis in Europe to date. Cryptosporidium hominis oocysts of subtype IbA10G2 were found in patient and sewage samples, but not in raw water or in drinking water, and the initial contamination source could not be determined. The outbreak went unnoticed by the authorities for several months. The analysis of the calls to the health advice line provided strong indications, early in the outbreak, that it was linked to a particular water treatment plant. We conclude that an earlier detection of the outbreak by linking calls to a health advice line to water distribution areas could have limited the outbreak substantially.
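
    A minimal sketch of the early-detection idea: compare weekly call counts per water distribution area against a Poisson baseline fitted to pre-outbreak weeks and flag exceedances (all counts and thresholds invented for illustration):

```python
import numpy as np
from scipy.stats import poisson

# Weekly gastrointestinal-complaint calls per distribution area (invented).
areas = {"plant_A": [3, 4, 2, 5, 21, 34], "plant_B": [4, 3, 5, 4, 6, 5]}
baseline_weeks = 4
for area, counts in areas.items():
    lam = np.mean(counts[:baseline_weeks])       # Poisson baseline rate
    threshold = poisson.ppf(0.99, lam)           # 99% prediction bound
    alarms = [week for week, c in enumerate(counts) if c > threshold]
    print(f"{area}: alarm weeks {alarms}")
```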

  8. Transfer function analysis of thermospheric perturbations

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Harris, I.; Varosi, F.; Herrero, F. A.; Spencer, N. W.

    1986-01-01

    Applying perturbation theory, a spectral model in terms of vector spherical harmonics (Legendre polynomials) is used to describe the short-term thermospheric perturbations originating in the auroral regions. The source may be Joule heating, particle precipitation or ExB ion drift-momentum coupling. A multiconstituent atmosphere is considered, allowing for the collisional momentum exchange between species including Ar, O2, N2, O, He and H. The coupled equations of energy, mass and momentum conservation are solved simultaneously for the major species N2 and O. Applying homogeneous boundary conditions, the integration is carried out from the Earth's surface up to 700 km. In the analysis, the spherical harmonics are treated as eigenfunctions, assuming that the Earth's rotation (and prevailing circulation) do not significantly affect perturbations with periods which are typically much less than one day. Under these simplifying assumptions, and given a particular source distribution in the vertical, a two-dimensional transfer function is constructed to describe the three-dimensional response of the atmosphere. In the order of increasing horizontal wave numbers (order of polynomials), this transfer function reveals five components. To compile the transfer function, the numerical computations are very time consuming (about 100 hours on a VAX for one particular vertical source distribution). However, given the transfer function, the atmospheric response in space and time (using Fourier integral representation) can be constructed in a few seconds of processor time. This model is applied in a case study of wind and temperature measurements on the Dynamics Explorer B, which show features characteristic of a ringlike excitation source in the auroral oval. The data can be interpreted as gravity waves which are focused (and amplified) in the polar region and then are reflected to propagate toward lower latitudes.
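
    The practical payoff claimed above is that, with the transfer function precomputed, the response to any source time history is a cheap frequency-domain multiplication. A toy illustration with a stand-in damped-resonance transfer function:

```python
import numpy as np

n, dt = 4096, 60.0                        # one-minute sampling
omega = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
# Toy damped resonance standing in for the precomputed transfer function.
H = 1.0 / (1.0 - (omega / 0.005) ** 2 + 1j * 0.2 * omega / 0.005)

t = np.arange(n) * dt
source = np.exp(-(((t - 3 * 3600.0) / 900.0) ** 2))   # auroral heating pulse
response = np.fft.irfft(np.fft.rfft(source) * H, n)
print(f"peak response at t = {t[np.argmax(response)] / 3600.0:.2f} h")
```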

  10. Time-evolution of grain size distributions in random nucleation and growth crystallization processes

    NASA Astrophysics Data System (ADS)

    Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.

    2010-02-01

    We study the time dependence of the grain size distribution N(r,t) during crystallization of a d-dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d. We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.
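
    A minimal numerical counterpart: upwind advection of the grain size distribution in size space with time-decaying effective nucleation and growth rates, the class of RNG rates the paper connects to lognormal-like final distributions (all rates and decay times invented):

```python
import numpy as np

nr, dr, dt = 400, 1.0, 0.02          # size grid (nm) and time step
N = np.zeros(nr)                     # grain size distribution N(r)
tau_n, tau_g = 1.0, 2.0              # decay times of the effective rates
for step in range(int(10.0 / dt)):
    time = step * dt
    v = 5.0 * np.exp(-time / tau_g)  # effective growth rate (nm per unit time)
    I = 100.0 * np.exp(-time / tau_n)            # effective nucleation rate
    N[1:] -= dt * v * (N[1:] - N[:-1]) / dr      # upwind advection in size
    N[0] += dt * (I - v * N[0]) / dr             # nuclei enter the smallest bin
r = np.arange(nr) * dr
print(f"mean grain size at full crystallization: {(r * N).sum() / N.sum():.1f} nm")
```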

  11. Assessment of Groundwater Susceptibility to Non-Point Source Contaminants Using Three-Dimensional Transient Indexes.

    PubMed

    Zhang, Yong; Weissmann, Gary S; Fogg, Graham E; Lu, Bingqing; Sun, HongGuang; Zheng, Chunmiao

    2018-06-05

    Groundwater susceptibility to non-point source contamination is typically quantified by stable indexes, while groundwater quality evolution (or deterioration globally) can be a long-term process that may last for decades and exhibit strong temporal variations. This study proposes a three-dimensional (3-d), transient index map built upon physical models to characterize the complete temporal evolution of deep aquifer susceptibility. For illustration purposes, the previously developed backward travel time probability density (BTTPD) approach is extended to assess the 3-d deep groundwater susceptibility to non-point source contamination within a sequence stratigraphic framework observed in the Kings River fluvial fan (KRFF) aquifer. The BTTPD, which represents complete age distributions underlying a single groundwater sample in a regional-scale aquifer, is used as a quantitative, transient measure of aquifer susceptibility. The resultant 3-d imaging of susceptibility using the simulated BTTPDs in KRFF reveals the strong influence of regional-scale heterogeneity on susceptibility. The regional-scale incised-valley fill deposits increase the susceptibility of aquifers by enhancing rapid downward solute movement and displaying relatively narrow and young age distributions. In contrast, the regional-scale sequence-boundary paleosols within the open-fan deposits "protect" deep aquifers by slowing downward solute movement and displaying a relatively broad and old age distribution. Further comparison of the simulated susceptibility index maps to known contaminant distributions shows that these maps are generally consistent with the high concentration and quick evolution of 1,2-dibromo-3-chloropropane (DBCP) in groundwater around the incised-valley fill since the 1970s. This application demonstrates that the BTTPDs can be used as quantitative and transient measures of deep aquifer susceptibility to non-point source contamination.
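
    One way to read a BTTPD as a transient susceptibility index is as the fraction of a well's capture that is younger than the time elapsed since non-point source loading began. A sketch with a lognormal age distribution standing in for the model-derived BTTPD:

```python
from scipy.stats import lognorm

age = lognorm(s=1.0, scale=25.0)     # hypothetical BTTPD: median age 25 yr
for years in (10, 30, 60):
    print(f"{years:>3} yr after loading begins: index = {age.cdf(years):.2f}")
```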

  12. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    NASA Astrophysics Data System (ADS)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
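
    A minimal sketch of the STA/LTA detection stage named above, with typical (not the study's) window lengths and trigger threshold:

```python
import numpy as np

# Classic short-term-average / long-term-average trigger on squared amplitude.
def sta_lta(x, fs, sta_s=2.0, lta_s=30.0):
    e = x ** 2
    nsta, nlta = int(sta_s * fs), int(lta_s * fs)
    ratio = np.zeros(e.size)
    for i in range(nlta, e.size):
        ratio[i] = e[i - nsta:i].mean() / e[i - nlta:i].mean()
    return ratio

fs = 20.0
t = np.arange(0.0, 120.0, 1.0 / fs)
x = np.random.default_rng(2).normal(0.0, 1.0, t.size)
x[t > 60] += 5.0 * np.sin(2 * np.pi * 1.5 * t[t > 60])   # synthetic arrival
print("detection triggered:", bool((sta_lta(x, fs) > 4.0).any()))
```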

  13. A time reversal algorithm in acoustic media with Dirac measure approximations

    NASA Astrophysics Data System (ADS)

    Bretin, Élie; Lucas, Carine; Privat, Yannick

    2018-04-01

    This article is devoted to the study of a photoacoustic tomography model, where one is led to consider the solution of the acoustic wave equation with a source term written as a separated-variables function in time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from the measure of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows explicit reconstruction formulas to be derived, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elasticity wave systems.
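
    The reconstruction ends with a deconvolution in time; a minimal sketch of that step alone, as Tikhonov-regularized division in the Fourier domain, with a smoothed delta-prime pulse standing in for the temporal component. Note the zero-frequency content is unrecoverable because such a kernel integrates to zero:

```python
import numpy as np

def deconvolve(measured, kernel, eps=1e-3):
    """Tikhonov-regularized deconvolution via FFT division."""
    F_m, F_k = np.fft.rfft(measured), np.fft.rfft(kernel)
    return np.fft.irfft(F_m * np.conj(F_k) / (np.abs(F_k) ** 2 + eps),
                        n=measured.size)

n, dt = 1024, 1e-3
t = np.arange(n) * dt
g = -np.gradient(np.exp(-(((t - 0.05) / 0.005) ** 2)), dt)  # smoothed delta'
h = np.exp(-(((t - 0.3) / 0.02) ** 2))                      # unknown waveform
measured = np.convolve(g, h)[:n] * dt                       # sensor record
recovered = deconvolve(measured, g * dt)
err = np.linalg.norm(recovered - h) / np.linalg.norm(h)
print(f"relative recovery error: {err:.2f}")   # dominated by the lost DC part
```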

  14. Darwin Core: An Evolving Community-Developed Biodiversity Data Standard

    PubMed Central

    Wieczorek, John; Bloom, David; Guralnick, Robert; Blum, Stan; Döring, Markus; Giovanni, Renato; Robertson, Tim; Vieglais, David

    2012-01-01

    Biodiversity data derive from myriad sources stored in various formats on many distinct hardware and software platforms. An essential step towards understanding global patterns of biodiversity is to provide a standardized view of these heterogeneous data sources to improve interoperability. Fundamental to this advance are definitions of common terms. This paper describes the evolution and development of Darwin Core, a data standard for publishing and integrating biodiversity information. We focus on the categories of terms that define the standard, differences between simple and relational Darwin Core, how the standard has been implemented, and the community processes that are essential for maintenance and growth of the standard. We present case-study extensions of the Darwin Core into new research communities, including metagenomics and genetic resources. We close by showing how Darwin Core records are integrated to create new knowledge products documenting species distributions and changes due to environmental perturbations. PMID:22238640
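
    To make "simple Darwin Core" concrete: a single flat occurrence row whose column names are Darwin Core terms. The term names below are genuine Darwin Core terms; the values are invented:

```python
import csv, io

record = {
    "occurrenceID": "urn:example:occ:0001",
    "basisOfRecord": "HumanObservation",
    "scientificName": "Monomorium pharaonis",
    "eventDate": "2012-01-15",
    "decimalLatitude": "6.9271",
    "decimalLongitude": "79.8612",
    "countryCode": "LK",
}
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(record))
writer.writeheader()
writer.writerow(record)
print(buf.getvalue())
```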

  15. Computational study of radiation doses at UNLV accelerator facility

    NASA Astrophysics Data System (ADS)

    Hodges, Matthew; Barzilov, Alexander; Chen, Yi-Tung; Lowe, Daniel

    2017-09-01

    A Varian K15 electron linear accelerator (linac) has been considered for installation at University of Nevada, Las Vegas (UNLV). Before experiments can be performed, it is necessary to evaluate the photon and neutron spectra as generated by the linac, as well as the resulting dose rates within the accelerator facility. A computational study using MCNPX was performed to characterize the source terms for the bremsstrahlung converter. The 15 MeV electron beam available in the linac is above the photoneutron threshold energy for several materials in the linac assembly, and as a result, neutrons must be accounted for. The angular and energy distributions for bremsstrahlung flux generated by the interaction of the 15 MeV electron beam with the linac target were determined. This source term was used in conjunction with the K15 collimators to determine the dose rates within the facility.

  16. Estimates of long-term mean-annual nutrient loads considered for use in SPARROW models of the Midcontinental region of Canada and the United States, 2002 base year

    USGS Publications Warehouse

    Saad, David A.; Benoy, Glenn A.; Robertson, Dale M.

    2018-05-11

    Streamflow and nutrient concentration data needed to compute nitrogen and phosphorus loads were compiled from Federal, State, Provincial, and local agency databases and also from selected university databases. The nitrogen and phosphorus loads are necessary inputs to Spatially Referenced Regressions on Watershed Attributes (SPARROW) models. SPARROW models are a way to estimate the distribution, sources, and transport of nutrients in streams throughout the Midcontinental region of Canada and the United States. After screening the data, approximately 1,500 sites sampled by 34 agencies were identified as having suitable data for calculating the long-term mean-annual nutrient loads required for SPARROW model calibration. These final sites represent a wide range in watershed sizes, types of nutrient sources, and land-use and watershed characteristics in the Midcontinental region of Canada and the United States.

  17. Water and soil biotic relations in Mercury distribution

    NASA Technical Reports Server (NTRS)

    Siegel, S. M.; Siegel, B. Z.; Puerner, N.; Speitel, T.; Thorarinsson, F.

    1975-01-01

    The distribution of Hg is considered both in terms of its availability in soil fractions and the relationship between Hg in plant samples and Hg in ambient soils or other supportive media. The plants were grouped by habitat into epipedic-epiphytic (mosses, lichens) and endopedic-aquatic-marine (Basidiomycetes and algae) samples; nonvascular and vascular forms were also distinguished. Sources included Alaska, Hawaii, New England and Iceland. Brief consideration was also given to Hg distribution in a plant-animal-soil community. Data were expressed in terms of plant Hg content and plant substratum concentration ratio. Average Hg contents and concentration ratios, and modal ranges for the ratios were determined. The results showed similar average Hg contents in all groups (126 to 199 ppb) but a low value (84 ppb) in the lichens; terrestrial forms had ratios of 3.5 to 7.6 whereas the marine algae yielded a figure of 78.7. A secondary mode in the range 0 to 0.1 appeared only in the Alaska-New England Group, over 500 km distant from active thermal sites. Evidence for both exclusion and concentration behavior was obtained.

  18. Survey of ion plating sources

    NASA Technical Reports Server (NTRS)

    Spalvins, T.

    1979-01-01

    Ion plating is a plasma deposition technique in which ions of the gas and the evaporant play a decisive role in the formation of a coating in terms of adherence, coherence, and morphological growth. The range of materials that can be ion plated is predominantly determined by the selection of the evaporation source. Based on the type of evaporation source, gaseous media, and mode of transport, the following are discussed: resistance, electron beam, sputtering, reactive, and ion beam evaporation. Ionization efficiencies and ion energies in the glow discharge determine the percentage of atoms that are ionized under typical ion plating conditions. The plating flux consists of a small number of energetic ions and a large number of energetic neutrals. The energy distribution ranges from thermal energies up to the maximum energy of the discharge. The various reaction mechanisms that contribute to the exceptionally strong adherence (formation of a graded substrate/coating interface) are not fully understood; however, the controlling factors are evaluated. The influence of process variables on the nucleation and growth characteristics is illustrated in terms of morphological changes that affect the mechanical and tribological properties of the coating.

  19. Taste and odor occurrence in Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina

    USGS Publications Warehouse

    Journey, Celeste; Arrington, Jane M.

    2009-01-01

    The U.S. Geological Survey and Spartanburg Water are working cooperatively on an ongoing study of Lake Bowen and Reservoir #1 to identify environmental factors that enhance or influence the production of geosmin in the source-water reservoirs. Spartanburg Water is using information from this study to develop management strategies to reduce (short-term solution) and prevent (long-term solution) geosmin occurrence. The Spartanburg Water utility treats and distributes drinking water to the Spartanburg area of South Carolina. The drinking water sources for the area are Lake William C. Bowen (Lake Bowen) and Municipal Reservoir #1 (Reservoir #1), located north of Spartanburg. These reservoirs, which were formed by the impoundment of the South Pacolet River, were assessed in 2006 by the South Carolina Department of Health and Environmental Control (SCDHEC) as being fully supportive of all uses based on established criteria. Nonetheless, Spartanburg Water had noted periodic taste and odor problems due to the presence of geosmin, a naturally occurring compound, in the source water. Geosmin is not harmful, but its presence in drinking water is aesthetically unpleasant.

  20. A Designer Fluid For Aluminum Phase Change Devices. Performance Enhancement in Copper Heat Pipes. Volume 3

    DTIC Science & Technology

    2016-11-17

    ...out dynamics of a designer fluid were investigated experimentally in a flat grooved heat pipe. Generated coatings were observed during heat pipe... experimental temperature distributions matched well. Uncertainties in the closure properties were the major source of error.

  1. Forecasting Future Sea Ice Conditions in the MIZ: A Lagrangian Approach

    DTIC Science & Technology

    2013-09-30

    www.mcgill.ca/meteo/people/tremblay LONG-TERM GOALS: 1. Determine the source regions for sea ice in the seasonally ice-covered zones (SIZs)... distribution of sea ice cover and transport pathways. 2. Improve our understanding of the strengths and/or limitations of GCM predictions of future... ocean currents, RGPS sea ice deformation, reanalysis surface wind, surface radiative fluxes, etc. Processing the large datasets involved is a tedious...

  2. Bulgaria in European Security and Defense Policy

    DTIC Science & Technology

    2013-03-01

    ...simple message that countries like Israel, the U.S., and others are the "eternal enemy". Despite arms control and efforts to counter the proliferation... or alter its distribution; this will reduce the fish food source and trigger migration from mid-latitudes in the northern waters. Forests provide... have long-term effects on climate, soil, and their storage. Global food production, including genetically modified food, will continue to grow...

  3. Photoacoustic Imaging.

    DTIC Science & Technology

    1983-12-01

    ...recrystallization is currently an active area of research. Much effort has been made to grow large-grain polysilicon with grain sizes of 100 microns from fine-grain... polysilicon using laser recrystallization. The recrystallization process is inherently traumatic, producing large changes in temperature in short... Using the temperature distribution above as the source term in the acoustic field equation, the general solution to this equation is given by...

  4. CSI-EPT in Presence of RF-Shield for MR-Coils.

    PubMed

    Arduino, Alessandro; Zilberti, Luca; Chiampi, Mario; Bottauscio, Oriano

    2017-07-01

    Contrast source inversion electric properties tomography (CSI-EPT) is a recently developed technique for electric properties tomography that recovers the electric properties distribution from measurements performed by magnetic resonance imaging scanners. This method is an optimal control approach based on the contrast source inversion technique, which distinguishes itself from other electric properties tomography techniques by its capability also to recover the local specific absorption rate distribution, essential for online dosimetry. Up to now, CSI-EPT has only been described in terms of integral equations, limiting its applicability to a homogeneous unbounded background. In order to extend the method to the presence of a shield in the domain, as in the recurring case of shielded radio frequency coils, a more general formulation of CSI-EPT, based on a functional viewpoint, is introduced here. Two different implementations of CSI-EPT are proposed for a 2-D transverse magnetic model problem, one dealing with an unbounded domain and one considering the presence of a perfectly conductive shield. The two implementations are applied to the same virtual measurements obtained by numerically simulating a shielded radio frequency coil. The results are compared in terms of both electric properties recovery and local specific absorption rate estimation, in order to investigate the need for accurate modeling of the underlying physical problem.

  5. Synoptic controls on precipitation pathways and snow delivery to high-accumulation ice core sites in the Ross Sea region, Antarctica

    NASA Astrophysics Data System (ADS)

    Sinclair, K. E.; Bertler, N. A. N.; Trompetter, W. J.

    2010-11-01

    Dominant storm tracks to two ice core sites on the western margin of the Ross Sea, Antarctica (Skinner Saddle (SKS) and Evans Piedmont Glacier), are investigated to establish key synoptic controls on snow accumulation. This is critical in terms of understanding the seasonality, source regions, and transport pathways of precipitation delivered to these sites. In situ snow depth and meteorological observations are used to identify major accumulation events in 2007-2008, which differ considerably between sites in terms of their magnitude and seasonal distribution. While snowfall at Evans Piedmont Glacier occurs almost exclusively during summer and spring, Skinner Saddle receives precipitation year round with a lull during the months of April and May. Cluster analysis of daily back trajectories reveals that the highest-accumulation days at both sites result from fast-moving air masses, associated with synoptic-scale low-pressure systems. There is evidence that short-duration pulses of snowfall at SKS also originate from mesocyclone development over the Ross Ice Shelf and local moisture sources. Changes in the frequency and seasonal distribution of these mechanisms of precipitation delivery will have a marked impact on annual accumulation over time and will therefore need careful consideration during the interpretation of stable isotope and geochemical records from these ice cores.

  6. Analysis of streamflow distribution of non-point source nitrogen export from long-term urban-rural catchments to guide watershed management in the Chesapeake Bay watershed

    NASA Astrophysics Data System (ADS)

    Duncan, J. M.; Band, L. E.; Groffman, P.

    2017-12-01

    Discharge, land use, and watershed management practices (stream restoration and stormwater control measures) have been found to be important determinants of nitrogen (N) export to receiving waters. We used long-term water quality stations from the Baltimore Ecosystem Study Long-Term Ecological Research (BES LTER) site to quantify nitrogen export across streamflow conditions at the small-watershed scale. We calculated nitrate and total nitrogen fluxes using a methodology that allows for changes over time: weighted regressions on time, discharge, and seasonality. Here we tested the hypotheses that (a) while the largest N stream fluxes occur during storm events, there is not a clear relationship between N flux and discharge, and (b) N export patterns are aseasonal in developed watersheds, where sources are larger and retention capacity is lower. The goal is to scale understanding from small watersheds to larger ones. Developing a better understanding of hydrologic controls on nitrogen export is essential for successful adaptive watershed management at societally meaningful spatial scales.
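    The regression referred to here is widely known as WRTDS. A minimal sketch of its functional form follows, assuming decimal-year sample times and concurrent discharge; the real method re-fits this model at every estimation point with weights based on distance in time, discharge, and season, which this unweighted illustration omits. All names are ours.

        import numpy as np

        def wrtds_design(t_years, q):
            """Design matrix for ln C = b0 + b1*t + b2*ln Q
                                 + b3*sin(2*pi*t) + b4*cos(2*pi*t)."""
            t = np.asarray(t_years, float)
            lnq = np.log(np.asarray(q, float))
            return np.column_stack([np.ones_like(t), t, lnq,
                                    np.sin(2 * np.pi * t),
                                    np.cos(2 * np.pi * t)])

        def fit_flux_model(t_years, q, conc):
            """Unweighted least-squares fit of the WRTDS functional form."""
            X = wrtds_design(t_years, q)
            beta, *_ = np.linalg.lstsq(X, np.log(conc), rcond=None)
            return beta  # flux then follows from exp(ln C) * Q (with a
                         # retransformation bias correction)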

  7. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    NASA Astrophysics Data System (ADS)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

    The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with local search metaheuristics. We also consider current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
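    For illustration of the CVRP formulation (not DINGO's actual solver), a compact sketch of the classic Clarke-Wright savings heuristic follows: single-customer routes are merged end-to-end in order of decreasing savings while a capacity limit is respected. Function and variable names are ours.

        def clarke_wright(dist, demand, capacity):
            """Greedy Clarke-Wright savings heuristic for the CVRP.
            dist: (n+1)x(n+1) distance matrix with the depot at index 0;
            demand: length n+1 with demand[0] == 0. Returns routes as
            lists of customer indices (depot implicit at both ends)."""
            n = len(demand) - 1
            routes = {i: [i] for i in range(1, n + 1)}   # one route per customer
            load = {i: demand[i] for i in range(1, n + 1)}
            where = {i: i for i in range(1, n + 1)}      # customer -> route key
            savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                              for i in range(1, n + 1)
                              for j in range(i + 1, n + 1)), reverse=True)
            for s, i, j in savings:
                ri, rj = where[i], where[j]
                if ri == rj or s <= 0:
                    continue
                # merge only end-to-end: i must end ri, j must start rj
                # (a fuller version also tries the reversed orientations)
                if routes[ri][-1] != i or routes[rj][0] != j:
                    continue
                if load[ri] + load[rj] > capacity:
                    continue
                routes[ri].extend(routes[rj])
                load[ri] += load[rj]
                for c in routes[rj]:
                    where[c] = ri
                del routes[rj], load[rj]
            return list(routes.values())

    A production pipeline, as the paper describes, would hand such an initial solution to a local search stage for improvement.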

  8. Synoptic, Global Mhd Model For The Solar Corona

    NASA Astrophysics Data System (ADS)

    Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.

    2007-05-01

    The common techniques for mimicking solar corona heating and solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations, derived from the WKB approximation for Alfvén wave turbulence; 2) an empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to get a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. At the surface the gravity is known and the kinetic energy is negligible. Therefore, we can obtain the surface distribution of γ as a function of the final speed originating from each point. By interpolating γ to a spherically uniform value on the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady-state MHD solution for the solar corona. We present model results for different Carrington rotations.
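    Schematically, and in our notation rather than the authors', the Bernoulli integral traced along a field line for a polytropic plasma is

        \frac{u^2}{2} + \frac{\gamma}{\gamma - 1}\,\frac{p}{\rho} - \frac{G M_\odot}{r} = \mathrm{const.}

    Far from the Sun the bulk-motion term u^2/2 dominates and is fixed by the WSA asymptotic speed, while at the surface u is negligible and the gravitational term is known, so the footpoint value of γ can be solved for.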

  9. Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories

    NASA Astrophysics Data System (ADS)

    Vallisneri, Michele; Galley, Chad R.

    2012-06-01

    The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. Although the term ‘sensitivity’ is used loosely to refer to the detector’s noise spectral density, the two quantities are not the same: the sensitivity includes also the frequency- and orientation-dependent response of the detector to gravitational waves and takes into account the duration of observation. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry. This recipe includes the effects of spacecraft motion and of seasonal variations in the partially subtracted confusion foreground from Galactic binaries, and it can be used to generate a sampling distribution of sensitivities for a given source population. In effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the ‘classic LISA’ configuration. We confirm that the (standard) inverse-rms average sensitivity for the isotropic population remains the same whether or not the LISA orbits are included in the computation. However, detector motion tightens the distribution of sensitivities, so for 50% of sources the sensitivity is within 30% of its average. For the Galactic-disk population, the average and the distribution of the sensitivity for a moving detector turn out to be similar to the isotropic case.
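    For reference, the matched-filter convention that underlies such sensitivity curves is, in standard notation,

        \rho^2 = 4 \int_0^\infty \frac{|\tilde{h}(f)|^2}{S_n(f)}\, df,

    where \tilde{h}(f) is the Fourier transform of the signal after the detector response and S_n(f) is the one-sided noise power spectral density; a sensitivity curve then reports the amplitude a monochromatic source at frequency f needs for \rho to reach the chosen threshold once the time-dependent response and the observation duration are folded in. This is a generic statement of the convention, not a derivation specific to this paper.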

  10. Final design of thermal diagnostic system in SPIDER ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brombin, M., E-mail: matteo.brombin@igi.cnr.it; Dalla Palma, M.; Pasqualotto, R.

    The prototype radio frequency source of the ITER heating neutral beams will first be tested in the SPIDER test facility to optimize H− production, cesium dynamics, and overall plasma characteristics. Several diagnostics will allow full characterisation of the beam, in terms of uniformity and divergence, and of the source, besides supporting safe and controlled operation. In particular, thermal measurements will be used for beam monitoring and system protection. SPIDER will be instrumented with mineral-insulated cable thermocouples on the grids, on other components of the beam source, and on the rear side of the beam dump water-cooled elements. This paper deals with the final design and the technical specification of the thermal sensor diagnostic for SPIDER. In particular, the layout of the diagnostic, the distribution of the sensors in the different components, the cable routing, and the conditioning and acquisition cubicles are described.

  11. Final design of thermal diagnostic system in SPIDER ion source

    NASA Astrophysics Data System (ADS)

    Brombin, M.; Dalla Palma, M.; Pasqualotto, R.; Pomaro, N.

    2016-11-01

    The prototype radio frequency source of the ITER heating neutral beams will first be tested in the SPIDER test facility to optimize H− production, cesium dynamics, and overall plasma characteristics. Several diagnostics will allow full characterisation of the beam, in terms of uniformity and divergence, and of the source, besides supporting safe and controlled operation. In particular, thermal measurements will be used for beam monitoring and system protection. SPIDER will be instrumented with mineral-insulated cable thermocouples on the grids, on other components of the beam source, and on the rear side of the beam dump water-cooled elements. This paper deals with the final design and the technical specification of the thermal sensor diagnostic for SPIDER. In particular, the layout of the diagnostic, the distribution of the sensors in the different components, the cable routing, and the conditioning and acquisition cubicles are described.

  12. Indirect (source-free) integration method. I. Wave-forms from geodesic generic orbits of EMRIs

    NASA Astrophysics Data System (ADS)

    Ritter, Patxi; Aoudia, Sofiane; Spallicci, Alessandro D. A. M.; Cordier, Stéphane

    2016-12-01

    The Regge-Wheeler-Zerilli (RWZ) wave equation describes Schwarzschild-Droste black hole perturbations. The source term contains a Dirac distribution and its derivative. We have previously designed a method of integration in the time domain. It consists of a finite difference scheme where analytic expressions, dealing with the wave-function discontinuity through the jump conditions, replace the direct integration of the source and the potential. Herein, we successfully apply the same method to the geodesic generic orbits of EMRI (Extreme Mass Ratio Inspiral) sources, at second order. An EMRI is a Compact Star (CS) captured by a Super-Massive Black Hole (SMBH). These are considered the best probes for testing gravitation in the strong regime. The gravitational wave-forms, the radiated energy and angular momentum at infinity are computed and extensively compared with other methods, for different orbits (circular, elliptic, parabolic, including zoom-whirl).
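    In schematic form (our notation, suppressing details the paper handles carefully), the sourced RWZ equation for a point particle reads

        \left[ -\frac{\partial^2}{\partial t^2} + \frac{\partial^2}{\partial r_*^2} - V_\ell(r) \right] \Psi_{\ell m}(t, r)
            = G_{\ell m}(t)\,\delta\!\big(r_* - r_*^{p}(t)\big) + F_{\ell m}(t)\,\delta'\!\big(r_* - r_*^{p}(t)\big),

    where r_* is the tortoise coordinate, V_\ell the Regge-Wheeler (odd-parity) or Zerilli (even-parity) potential, and r_*^{p}(t) the particle's radial worldline; the \delta and \delta' terms are the Dirac distribution and its derivative that the jump conditions treat analytically.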

  13. Long-term benthic monitoring studies in the freshwater portion of the Potomac River: 1983 to 1985, cumulative report. Volume 1. Text

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaughnessy, A.T.; Holland, A.F.

    1989-12-01

    The report summarizes data from the first three years of a long-term monitoring program to establish baseline conditions in benthic communities on the upper Potomac River. Major sources of variation were considered in an effort to characterize the effect of two power plants on distribution and abundance of the benthos. Distinct changes occurred in benthic communities in the vicinity of power plant discharges. These included decreased abundances of dominant species and reduced occurrences of rare species. Impacts associated with power plants were most severe during summer months and during low flow years.

  14. Long-term benthic monitoring studies in the freshwater portion of the Potomac River: 1983 to 1985, cumulative report. Volume 2. Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaughnessy, A.T.; Holland, A.F.

    1989-12-01

    The report summarizes data from the first three years of a long-term monitoring program to establish baseline conditions in benthic communities on the upper Potomac River. Major sources of variation were considered in an effort to characterize the effect of two power plants on distribution and abundance of the benthos. Distinct changes occurred in benthic communities in the vicinity of power plant discharges. These included decreased abundances of dominant species and reduced occurrences of rare species. Impacts associated with power plants were most severe during summer months and during low flow years.

  15. Sources, distribution, bioavailability, toxicity, and risk assessment of heavy metal(loid)s in complementary medicines.

    PubMed

    Bolan, Shiv; Kunhikrishnan, Anitha; Seshadri, Balaji; Choppala, Girish; Naidu, Ravi; Bolan, Nanthi S; Ok, Yong Sik; Zhang, Ming; Li, Chun-Guang; Li, Feng; Noller, Barry; Kirkham, Mary Beth

    2017-11-01

    The last few decades have seen the rise of alternative medical approaches including the use of herbal supplements, natural products, and traditional medicines, which are collectively known as 'complementary medicines'. However, there are increasing concerns about the safety and health benefits of these medicines. One of the main hazards with the use of complementary medicines is the presence of heavy metal(loid)s such as arsenic (As), cadmium (Cd), lead (Pb), and mercury (Hg). This review deals with the characteristics of complementary medicines in terms of heavy metal(loid) sources, distribution, bioavailability, toxicity, and human risk assessment. The heavy metal(loid)s in these medicines are derived from uptake by medicinal plants, cross-contamination during processing, and therapeutic input of metal(loid)s. This paper discusses the distribution of heavy metal(loid)s in these medicines, in terms of their nature, concentration, and speciation. The importance of determining bioavailability towards human health risk assessment was emphasized by the need to estimate daily intake of heavy metal(loid)s in complementary medicines. The review ends with selected case studies of heavy metal(loid) toxicity from complementary medicines with specific reference to As, Cd, Pb, and Hg. The future research opportunities mentioned in the conclusion of the review will help researchers to explore new avenues, methodologies, and approaches to the issue of heavy metal(loid)s in complementary medicines, thereby generating new regulations and proposing a fresh approach towards the safe use of these medicines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A numerical method for shock driven multiphase flow with evaporating particles

    NASA Astrophysics Data System (ADS)

    Dahal, Jeevan; McFarland, Jacob A.

    2017-09-01

    A numerical method for predicting the interaction of active, phase-changing particles in a shock-driven flow is presented in this paper. The Particle-in-Cell (PIC) technique was used to couple particles in a Lagrangian coordinate system with a fluid in an Eulerian coordinate system. The Piecewise Parabolic Method (PPM) hydrodynamics solver was used for solving the conservation equations and was modified with mass, momentum, and energy source terms from the particle phase. The method was implemented in the open-source hydrodynamics software FLASH, developed at the University of Chicago. A simple validation of the methods is accomplished by comparing velocity and temperature histories from a single-particle simulation with the analytical solution. Furthermore, simple single-particle parcel simulations were run at two different sizes to study the effect of particle size on vorticity deposition in a shock-driven multiphase instability. Large particles were found to have lower enstrophy production at early times and higher enstrophy dissipation at late times due to the advection of the particle vorticity source term through the carrier gas. A 2D shock-driven instability of a circular perturbation is studied in simulations and compared to previous experimental data as further validation of the numerical methods. The effect of the particle size distribution and particle evaporation is examined further for this case. The results show that larger particles reduce the vorticity deposition, while particle evaporation increases it. It is also shown that for a distribution of particle sizes the vorticity deposition is decreased compared to the single-particle-size case at the mean diameter.
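    As a rough sketch of the two-way momentum coupling idea (not the FLASH/PPM implementation), each Lagrangian particle relaxes toward the local gas velocity over a response time tau_p, and the opposite momentum change is accumulated in its cell as a source term for the Eulerian solver. The 1-D geometry, nearest-cell deposition, and all names are our simplifications.

        import numpy as np

        def couple_momentum(xp, vp, mp, u_gas, dx, dt, tau_p):
            """One coupling step: drag each particle toward the gas and
            return the equal-and-opposite per-cell momentum source."""
            src = np.zeros_like(u_gas)          # momentum per unit time, per cell
            relax = 1.0 - np.exp(-dt / tau_p)   # exact for constant gas velocity
            for k in range(len(xp)):
                i = int(xp[k] / dx)             # nearest-cell deposition
                dv = (u_gas[i] - vp[k]) * relax
                vp[k] += dv                     # drag accelerates the particle...
                src[i] -= mp[k] * dv / dt       # ...and pushes back on the gas
            return vp, src

    The flow solver would then add src (divided by the cell gas mass) to the momentum equation, alongside analogous mass and energy sources for evaporating particles.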

  17. Overview of the gaps in the health care legislation in Georgia: short-, medium-, and long-term priorities.

    PubMed

    Kiknadze, Nino; Beletsky, Leo

    2013-12-12

    After gaining independence following the dissolution of the Soviet Union, Georgia has aspired to become the region's leader in progressive legal reform. Particularly in the realm of health care regulation, Georgia has proceeded with extensive legislative reforms intended to modernize its health care system, and bring it in line with international standards. As part of a larger project to improve human rights in patient care, we conducted a study designed to identify gaps in the current Georgian health care legislation. Using a cross-site research framework based on the European Charter of Patients’ Rights, an interdisciplinary working group oversaw a comprehensive review of human rights legislation pertinent to health care settings using various sources, such as black letter law, expert opinions, court cases, research papers, reports, and complaints. The study identified a number of serious inconsistencies, gaps, and conflicts in the definition and coverage of terms used in the national legislative canon pertinent to human rights in patient care. These include inconsistent definitions of key terms "informed consent" and "medical malpractice" across the legislative landscape. Imprecise and overly broad drafting of legislation has left concepts like patient confidentiality and implied consent wide open to abuse. The field of health care provider rights was entirely missing from existing Georgian legislation. To our knowledge, this is the first study of its kind in Georgia. Gaps and inconsistencies uncovered were categorized based on a short-, medium-, and long-term action framework. Results were presented to key decision makers in Georgian ministerial and legislative institutions. Several of the major recommendations are currently being considered for inclusion into future legal reform. Copyright © 2013 Kiknadze and Beletsky. This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

  18. Nonlinear dispersion-based incoherent photonic processing for microwave pulse generation with full reconfigurability.

    PubMed

    Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José

    2012-03-12

    A novel all-optical technique based on the incoherent processing of optical signals using high-order dispersive elements is analyzed for microwave arbitrary pulse generation. We show an approach which allows full reconfigurability of a pulse in terms of chirp, envelope, and central frequency through the proper control of the second-order dispersion and the incoherent optical source power distribution, achieving large values of time-bandwidth product.

  19. The RATIO method for time-resolved Laue crystallography

    PubMed Central

    Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya

    2009-01-01

    A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short-term fluctuations in the synchrotron beam. PMID:19240334
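    Schematically, and in our notation rather than the paper's, the method works with the per-reflection ratio of pumped (ON) to unpumped (OFF) intensities recorded under identical geometry,

        \eta = \frac{I_{\mathrm{ON}}}{I_{\mathrm{OFF}}} = \frac{|F_{\mathrm{ON}}|^2}{|F_{\mathrm{OFF}}|^2},

    so the source's spectral \lambda-curve, frame scale factors, and absorption, which enter numerator and denominator identically, cancel out.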

  20. NRL SSD Research Achievements: 1960-1970. Volume 1

    DTIC Science & Technology

    2015-10-30

    ...“radio sources between declinations +10 and -20,” Mills, B.Y., Slee, O.B., Hill, E.R. 1958, Australian J. Phys., 11, 360-387; 12. “SPEAR: Small Payload... from a tour-de-force lunar occultation rocket experiment; and the First Detection of X-Ray Pulsations from the Crab Pulsar that matched the radio...

  1. Distribution and mobility of mercury in soils of a gold mining region, Cuyuni river basin, Venezuela.

    PubMed

    Santos-Francés, F; García-Sánchez, A; Alonso-Rojo, P; Contreras, F; Adams, M

    2011-04-01

    An extensive and remote gold mining region located in the east of Venezuela has been studied with the aim of assessing the distribution and mobility of mercury in soil and the level of Hg pollution at artisanal gold mining sites. To do so, soils and pond sediments were sampled at sites not subject to anthropogenic influence, as well as in areas affected by gold mining activities. Total Hg in regionally distributed soils ranged between 0.02 mg kg−1 and 0.40 mg kg−1, with a median value of 0.11 mg kg−1, which is slightly higher than soil Hg worldwide, possibly indicating long-term atmospheric input or more recent local atmospheric input, in addition to minor lithogenic sources. A reference Hg concentration of 0.33 mg kg−1 is proposed for the detection of mining-affected soils in this region. Critical total Hg concentrations were found in the soils surrounding pollutant sources, such as milling-amalgamation sites, where soil Hg contents ranged from 0.16 mg kg−1 to 542 mg kg−1 with an average of 26.89 mg kg−1, which also showed high levels of elemental Hg but a quite low soluble+exchangeable Hg fraction (0.02-4.90 mg kg−1), suggesting low Hg soil mobility and bioavailability, as confirmed by soil column leaching tests. The vertical distribution of Hg through the soil profiles, as well as variations in soil Hg contents with distance from the pollution source, and Hg in pond mining sediments were also analysed. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Spatial distribution and sources of heavy metals in natural pasture soil around copper-molybdenum mine in Northeast China.

    PubMed

    Wang, Zhiqiang; Hong, Chen; Xing, Yi; Wang, Kang; Li, Yifei; Feng, Lihui; Ma, Silu

    2018-06-15

    The characterization of the content and sources of heavy metals is essential to assess the potential threat of metals to human health. The present study collected 140 topsoil samples around a Cu-Mo mine (Wunugetushan, China) and investigated the concentrations and spatial distribution patterns of Cr, Ni, Zn, Cu, Mo and Cd in soil using multivariate and geostatistical analytical methods. Results indicated that the average concentrations of the six heavy metals, especially Cu and Mo, were obviously higher than the local background values. Correlation analysis and principal component analysis divided these metals into three groups: Cr and Ni, Cu and Mo, and Zn and Cd. Meanwhile, the spatial distribution maps indicated that Cr and Ni in soil had no notable anthropogenic inputs and were mainly controlled by natural factors, because their spatial maps exhibited non-point-source patterns. The concentrations of Cu and Mo gradually decreased with distance away from the mine area, suggesting that human mining activities may be crucial in the spreading of contaminants. Soil contamination by Zn was associated with livestock manure produced from grazing. In addition, the environmental risk of heavy metal pollution was assessed by the geo-accumulation index. All the results revealed that the spatial distribution of heavy metals in soil was in agreement with the local human activities. Investigating and identifying the origin of heavy metals in pasture soil will lay the foundation for taking effective measures to protect soil from the long-term accumulation of heavy metals. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Inner Source and Interstellar Pickup Ions observed by MMS-HPCA

    NASA Astrophysics Data System (ADS)

    Gomez, Roman; Fuselier, Stephen; Burch, James L.; Mukherjee, Joey; Valek, Phillip W.; Allegrini, Frederic; Desai, Mihir I.

    2017-04-01

    Pickup ions in the solar system are either of interstellar origin or come from an inner source whose existence is confirmed but which has not been directly observed. The Hot Plasma Composition Analyzer of the Magnetospheric Multiscale mission (MMS-HPCA) measures the energy and directional flux of ions, resolved by M/Q, from 1 eV/e to 40 keV/e, and is used to measure the composition and dynamics of reconnection plasmas near the Earth. During the first phase of the mission, from 1 September 2015 to 8 March 2016, the spacecraft, at an apogee of 12 Earth radii, swept through the dayside from 1800 to 0600 local time. Although the apogee was designed to maximize encounters with the magnetopause, there were many instances when the spacecraft crossed the bow shock and sampled the solar wind. In November and December, while the spacecraft were downstream of the interstellar neutral focusing cone, HPCA detected pickup ions such as He+, O+, and Ne+. He+ was distributed in an energy range of 14 eV - 20.6 keV, peaking at 757 eV, and is presumably of interstellar origin. O+ was observed in the energy range of 390 eV - 10.6 keV and also seems to come from the interstellar medium. Ne+ was observed to be tightly distributed around a center energy of 5.5 keV, which implies an inner-source origin. The mass-energy-angle analysis of these pickup ion distributions is presented, and their interpretation in terms of interstellar and inner-source ions is discussed.

  4. Disaster Risk Reduction through Innovative Uses of Crowd Sourcing (Invited)

    NASA Astrophysics Data System (ADS)

    Berger, J.; Greene, M.

    2010-12-01

    Crowd sourcing can be described as a method of distributed problem-solving. It takes advantage of the power of the crowd, which can in some cases be a community of experts and in other cases the collective insight of a broader range of contributors with varying degrees of domain knowledge. The term crowd sourcing was first used by Jeff Howe in a June 2006 Wired magazine article “The Rise of Crowdsourcing,” and is a combination of the terms “crowd” and “outsourcing.” Some commonly known examples of crowd sourcing, in its broadest sense, include Wikipedia, distributed participatory design projects, and consumer websites such as Yelp and Angie’s List. The popularity and success of early large-scale crowd sourcing activities was made possible by leveraging Web 2.0 technologies that allow for mass participation from distributed individuals. The Earthquake Engineering Research Institute (EERI) in Oakland, California recently participated in two crowd sourcing projects. One was initiated and coordinated by EERI, while in the second case EERI was invited to contribute once the crowd sourcing activity was underway. In both projects there was: 1) the determination of a problem or set of tasks that could benefit immediately from the engagement of an informed volunteer group of professionals; 2) a segmenting of the problem into discrete pieces that could be completed in a short period of time (from ten minutes to four hours); 3) a call to action, where an interested community was made aware of the project; and 4) the collection, aggregation, vetting and ultimately distribution of the results in a relatively short period of time. The first EERI crowd sourcing example was the use of practicing engineers and engineering students in California to help estimate the number of pre-1980 concrete buildings in the high seismic risk counties in the state. This building type is known to perform poorly in earthquakes, and state officials were interested in understanding more about the size of the problem—how many buildings, which jurisdictions. Volunteers signed up for individual jurisdictions and used a variety of techniques to estimate the count. They shared their techniques at meetings and posted their results online. Over 100 volunteers also came together to walk the streets of downtown San Francisco, a city with a particularly large number of these buildings, gathering more data on each building that will be used in a later phase to identify possible mitigation strategies. The second example was EERI’s participation in a response network, GEO-CAN, created in support of the World Bank’s responsibility for the damage assessment of buildings in Port-au-Prince immediately after the January 12, 2010 earthquake. EERI members, primarily earthquake engineers, were invited to speed up critical damage assessment using pre- and post-event aerial imagery. An area of 300 sq km was divided into grids, and grids were then allocated to knowledgeable individuals for analysis. The initial analysis was completed within 96 hours through the participation of over 300 volunteers. Ultimately, over 600 volunteers completed damage assessments for about 30,000 buildings.

  5. Convective Detrainment and Control of the Tropical Water Vapor Distribution

    NASA Astrophysics Data System (ADS)

    Kursinski, E. R.; Rind, D.

    2006-12-01

    Sherwood et al. (2006) developed a simple power law model describing the relative humidity distribution in the tropical free troposphere, where the power law exponent is the ratio of a drying time scale (tied to subsidence rates) and a moistening time, the average time between convective moistening events, whose temporal distribution is described by a Poisson distribution. Sherwood et al. showed that the relative humidity distribution observed by GPS occultations and MLS is indeed close to a power law, approximately consistent with the simple model's prediction. Here we modify this simple model to be in terms of vertical length scales rather than time scales, in a manner that we think more correctly matches the model predictions to the observations. The subsidence is now in terms of the vertical distance the air mass has descended since it last detrained from a convective plume. The moisture source term becomes a profile of convective detrainment flux versus altitude. The vertical profile of the convective detrainment flux is deduced from the observed distribution of the specific humidity at each altitude, combined with sinking rates estimated from radiative cooling. The resulting free-tropospheric detrainment profile increases with altitude above 3 km somewhat like an exponential profile, which explains the approximate power law behavior observed by Sherwood et al. The observations also reveal a seasonal variation in the detrainment profile, reflecting changes in convective behavior expected by some based on observed seasonal changes in the vertical structure of convective regions. The simple model results will be compared with the moisture control mechanisms in a GCM with many additional mechanisms, the GISS climate model, as described in Rind (2006). References: Rind, D., 2006: Water-vapor feedback. In Frontiers of Climate Modeling, J. T. Kiehl and V. Ramanathan (eds), Cambridge University Press [ISBN-13 978-0-521-79132-8], 251-284. Sherwood, S., E. R. Kursinski and W. Read, A distribution law for free-tropospheric relative humidity, J. Clim., in press, 2006.

  6. Plutonium isotopes and 241Am in the atmosphere of Lithuania: A comparison of different source terms

    NASA Astrophysics Data System (ADS)

    Lujanienė, G.; Valiulis, D.; Byčenkienė, S.; Šakalys, J.; Povinec, P. P.

    2012-12-01

    137Cs, 241Am and Pu isotopes collected in aerosol samples during 1994-2011 were analyzed, with special emphasis on better understanding Pu and Am behavior in the atmosphere. The results from long-term measurements of 240Pu/239Pu atom ratios showed a bimodal frequency distribution with median values of 0.195 and 0.253, indicating two main sources contributing to the Pu activities at the Vilnius sampling station. The low Pu atom ratio of 0.141 could be attributed to weapon-grade plutonium derived from the nuclear weapon test sites. The frequency of air masses arriving from the north-west and north-east correlated with the Pu atom ratio, indicating input from sources located in these regions (the Novaya Zemlya test site, Siberian nuclear plants), while no correlation with the Chernobyl region was observed. Measurements carried out during the Fukushima accident showed a negligible impact of this source, with Pu activities four orders of magnitude lower than those from the Chernobyl accident. The activity concentrations of actinides measured in the integrated sample collected in March-April 2011 showed a small contribution of Pu with unusual activity and atom ratios, indicating the presence of spent fuel of different origin than that of the Chernobyl accident.

  7. Annual Rates on Seismogenic Italian Sources with Models of Long-Term Predictability for the Time-Dependent Seismic Hazard Assessment In Italy

    NASA Astrophysics Data System (ADS)

    Murru, Maura; Falcone, Giuseppe; Console, Rodolfo

    2016-04-01

    The present study is carried out in the framework of the INGV Center for Seismic Hazard (CPS), under the agreement signed in 2015 with the Department of Civil Protection to develop a new seismic hazard model for the country, updating the current reference (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative we participate with the Long-Term Stress Transfer (LTST) model, which provides the annual occurrence rate of a seismic event over the entire Italian territory, from a minimum magnitude of Mw 4.5, in bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology fuses a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that accounts for the permanent stress change that a seismogenic source undergoes as a result of the earthquakes occurring on surrounding sources. For each catalog considered (historical, instrumental, and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 years. If a cell falls within one of the sources in question, we adopted the respective rate value, which refers only to the magnitude of the characteristic event; this rate is divided by the number of grid cells that fall on the horizontal projection of the source. If instead a cell falls outside any seismogenic source, we used the average rate obtained from the historical and instrumental catalogs, following the method of Frankel (1995). The annual occurrence rate was computed for each of the three distributions considered (Poisson, BPT, and BPT with the inclusion of stress transfer).
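    For reference, the Brownian Passage Time (inverse Gaussian) renewal density of Matthews et al. (2002), with mean recurrence time \mu and aperiodicity \alpha, is

        f(t; \mu, \alpha) = \sqrt{\frac{\mu}{2\pi \alpha^2 t^3}}\, \exp\!\left( -\frac{(t - \mu)^2}{2 \alpha^2 \mu t} \right), \qquad t > 0,

    and the conditional probability of an event in the next \Delta t years, given an elapsed time T since the last one, follows by integrating this density over [T, T + \Delta t] and normalizing by the survival probability at T.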

  8. Geochemistry and carbon isotopic ratio for assessment of PM10 composition, source and seasonal trends in urban environment.

    PubMed

    Di Palma, A; Capozzi, F; Agrelli, D; Amalfitano, C; Giordano, S; Spagnuolo, V; Adamo, P

    2018-08-01

    Investigating the nature of PM10 is crucial to differentiate sources and their relative contributions. In this study we compared the levels and the chemical and mineralogical properties of PM10 particles sampled in different seasons at monitoring stations representative of urban background, urban traffic and suburban traffic areas of the city of Naples. The aims were to relate the PM10 load and characteristics to the location of the monitoring stations, to investigate the different sources contributing to PM10, and to highlight PM10 seasonal variability. Bulk analyses of chemical species in the PM10 fraction included total carbon and nitrogen, δ13C and 20 other elements. Both natural and anthropogenic sources were found to contribute to the exceedances of the EU PM10 limit values. The natural contribution was mainly related to marine aerosols and soil dust, as highlighted by X-ray diffractometry and SEM-EDS microscopy. The percentage of total carbon suggested a higher contribution of biogenic components to PM10 in spring. However, this result was not supported by the δ13C values, which were seasonally homogeneous and not sufficient to single out individual emission sources. No significant differences in terms of PM10 load and chemistry were observed between monitoring stations at different locations, suggesting a homogeneous distribution of PM10 over the studied area in all seasons. The anthropogenic contribution to PM10 seemed to dominate at all sites and in all seasons, with vehicular traffic acting as a main source, mostly through the generation of non-exhaust emissions. Our findings reinforce the need to focus more on the analysis of PM10 in terms of quality rather than load, to reconsider the criteria for the classification and spatial distribution of monitoring stations within urban and suburban areas, with special attention to the background location, and to emphasize all the policies promoting sustainable mobility and the reduction of both exhaust and non-exhaust traffic-related emissions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Sources, concentrations, and risks of naphthalene in indoor and outdoor air.

    PubMed

    Batterman, S; Chin, J-Y; Jia, C; Godwin, C; Parker, E; Robins, T; Max, P; Lewis, T

    2012-08-01

    Naphthalene is a ubiquitous pollutant, and very high concentrations are sometimes encountered indoors when this chemical is used as a pest repellent or deodorant. This study describes the distribution and sources of vapor-phase naphthalene concentrations in four communities in southeast Michigan, USA. Outdoors, naphthalene was measured in the communities and at a near-road site. Indoors, naphthalene levels were characterized in 288 suburban and urban homes. The median outdoor concentration was 0.15 μg/m³, and a modest contribution from rush-hour traffic was noted. The median indoor long-term concentration was 0.89 μg/m³, but concentrations were extremely skewed and 14% of homes exceeded 3 μg/m³, the chronic reference concentration for non-cancer effects, 8% exceeded 10 μg/m³, and levels reached 200 μg/m³. The typical excess individual lifetime cancer risk was about 10⁻⁴ and reached 10⁻² in some homes. Important sources include naphthalene's use as a pest repellent and deodorant, migration from attached garages and, to lesser extents, cigarette smoke and vehicle emissions. Excessive use as a repellent caused the highest concentrations. Naphthalene presents high risks in a subset of homes, and policies and actions to reduce exposures, for example, sales bans or restrictions, improved labeling, and consumer education, should be considered. Long-term average concentrations of naphthalene in most homes fell into the 0.2-1.7 μg/m³ range reported as representative in earlier studies. The highly skewed distribution of concentrations results in a subset of homes with elevated concentrations and health risks that greatly exceed US EPA and World Health Organization (WHO) guidelines. The most important indoor source is the use of naphthalene as a pest repellant or deodorant; secondary sources include presence of an attached garage, cigarette smoking, and outdoor sources. House-to-house variation was large, reflecting differences among the residences and naphthalene use practices. Stronger policies and educational efforts are needed to eliminate or modify indoor usage practices of this chemical. © 2011 John Wiley & Sons A/S.

  10. Tropical Gravity Wave Momentum Fluxes and Latent Heating Distributions

    NASA Technical Reports Server (NTRS)

    Geller, Marvin A.; Zhou, Tiehan; Love, Peter T.

    2015-01-01

    Recent satellite determinations of global distributions of absolute gravity wave (GW) momentum fluxes in the lower stratosphere show maxima over the summer subtropical continents and little evidence of GW momentum fluxes associated with the intertropical convergence zone (ITCZ). This seems to be at odds with parameterizations for GW momentum fluxes, where the source is a function of latent heating rates, which are largest in the region of the ITCZ in terms of monthly averages. The authors have examined global distributions of atmospheric latent heating, cloud-top-pressure altitudes, and lower-stratosphere absolute GW momentum fluxes and have found that monthly averages of the lower-stratosphere GW momentum fluxes more closely resemble the monthly mean cloud-top altitudes than the monthly mean rates of latent heating. These regions of highest cloud-top altitudes occur when rates of latent heating are largest on the time scale of cloud growth. This, plus previously published studies, suggests that convective sources for stratospheric GW momentum fluxes, being a function of the rate of latent heating, will require either a climate model to correctly model this rate of latent heating or some ad hoc adjustments to account for shortcomings in a climate model's land-sea differences in convective latent heating.

  11. Time Variations in Forecasts and Occurrences of Large Solar Energetic Particle Events

    NASA Astrophysics Data System (ADS)

    Kahler, S. W.

    2015-12-01

    The onsets and development of large solar energetic (E > 10 MeV) particle (SEP) events have been characterized in many studies. The statistics of SEP event onset delay times from associated solar flares and coronal mass ejections (CMEs), which depend on solar source longitudes, can be used to provide better predictions of whether a SEP event will occur following a large flare or fast CME. In addition, size distributions of peak SEP event intensities provide a means for a probabilistic forecast of peak intensities attained in observed SEP increases. SEP event peak intensities have been compared with their rise and decay times for insight into the acceleration and transport processes. These two time scales are generally treated as independent parameters describing the development of a SEP event, but we can invoke an alternative two-parameter description based on the assumption that decay times exceed rise times for all events. These two parameters, from the well known Weibull distribution, provide an event description in terms of its basic shape and duration. We apply this distribution to several large SEP events and ask what the characteristic parameters and their dependence on source longitudes can tell us about the origins of these important events.
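    For reference, the two-parameter Weibull density invoked here, with shape k and scale (duration) \lambda, is

        f(t; k, \lambda) = \frac{k}{\lambda} \left( \frac{t}{\lambda} \right)^{k-1} e^{-(t/\lambda)^{k}}, \qquad t \ge 0;

    for k > 1 it rises to a single peak at t = \lambda\,((k-1)/k)^{1/k} and then decays, so the pair (k, \lambda) encodes exactly the basic shape and duration the abstract describes. The mapping of these parameters onto rise and decay times is our paraphrase, not the authors' notation.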

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oloff, L.-P., E-mail: oloff@physik.uni-kiel.de; Hanff, K.; Stange, A.

    With the advent of ultrashort-pulsed extreme ultraviolet sources, such as free-electron lasers or high-harmonic-generation (HHG) sources, a new research field for photoelectron spectroscopy has opened up in terms of femtosecond time-resolved pump-probe experiments. The impact of the high peak brilliance of these novel sources on photoemission spectra, so-called vacuum space-charge effects caused by the Coulomb interaction among the photoemitted probe electrons, has been studied extensively. However, possible distortions of the energy and momentum distributions of the probe photoelectrons caused by the low-photon-energy pump pulse due to the nonlinear emission of electrons have not yet been studied in detail. Here, we systematically investigate these pump laser-induced space-charge effects in a HHG-based experiment for the test case of highly oriented pyrolytic graphite. Specifically, we determine how the key parameters of the pump pulse—the excitation density, wavelength, spot size, and emitted electron energy distribution—affect the measured time-dependent energy and momentum distributions of the probe photoelectrons. The results are well reproduced by a simple mean-field model, which could open a path for the correction of pump laser-induced space-charge effects and thus toward probing ultrafast electron dynamics in strongly excited materials.

  13. Cometary splitting - a source for the Jupiter family?

    NASA Astrophysics Data System (ADS)

    Pittich, E. M.; Rickman, H.

    1994-01-01

    The quest for the origin of the Jupiter family of comets includes investigating the possibility that a large fraction of this population originates from past splitting events. In particular, one suggested scenario, albeit less attractive on physical grounds, maintains that a giant comet breakup is a major source of short-period comets. By simulating such events and integrating the motions of the fictitious fragments in an accurate solar system model for the typical lifetime of Jupiter family comets, it is possible to check whether the outcome may or may not be compatible with the observed orbital distribution. In this paper we present such integrations for a few typical progenitor orbits and analyze the ensuing thermalization process with particular attention to the Tisserand parameters. It is found that the sets of fragments lose their memory of a common origin very rapidly so that, in general terms, it is difficult to use the random appearance of the observed orbital distribution as evidence against the giant comet splitting hypothesis.

  14. Selenium mobility and distribution in irrigated and nonirrigated alluvial soils

    USGS Publications Warehouse

    Fio, John L.; Fujii, Roger; Deverel, S.J.

    1991-01-01

    Dissolution and leaching of soil salts by irrigation water is a primary source of Se to shallow groundwater in the western San Joaquin Valley, California. In this study, the mobility and distribution of selenite and selenate in soils with different irrigation and drainage histories were evaluated using sorption experiments and an advection-dispersion model. The sorption studies showed that selenate (15–12400 µg Se L⁻¹) is not adsorbed to soil, whereas selenite (10–5000 µg Se L⁻¹) is rapidly adsorbed. The time lag between adsorption and desorption of selenite is considerable, indicating a dependence of reaction rate on reaction direction (hysteresis). Selenite adsorption and desorption isotherms were different, and both were described with the Freundlich equation. Model results and chemical analyses of extracts from the soil samples showed that selenite is resistant to leaching and therefore can represent a potential long-term source of Se to groundwater. In contrast, selenate behaves as a conservative constituent under alkaline and oxidized conditions and is easily leached from soil.
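
    The Freundlich fit mentioned above is linear in log-log space, so it can be sketched with an ordinary least-squares fit; the concentration pairs below are illustrative placeholders, not data from the study:

      import numpy as np

      # Freundlich isotherm: q = Kf * C**(1/n)  =>  log q = log Kf + (1/n) log C
      C = np.array([10.0, 50.0, 250.0, 1000.0, 5000.0])   # solution conc. (ug Se/L)
      q = np.array([4.2, 15.1, 48.0, 130.0, 410.0])       # sorbed conc. (illustrative)

      n_inv, log_Kf = np.polyfit(np.log10(C), np.log10(q), 1)
      print(f"Kf = {10**log_Kf:.2f}, 1/n = {n_inv:.2f}")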

  15. Simulation of charge breeding of rubidium using Monte Carlo charge breeding code and generalized ECRIS model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, L.; Cluggish, B.; Kim, J. S.

    2010-02-15

    A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: the free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion state outputs under the free boundary condition, and a similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.

  16. 78 FR 56685 - SourceGas Distribution LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. CP13-540-000] SourceGas Distribution LLC; Notice of Application Take notice that on August 27, 2013, SourceGas Distribution LLC (Source... areas across the Nebraska-Colorado border within which SourceGas may, without further commission...

  17. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    PubMed

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences for the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging (STK) technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Spatial overlay analysis (SOA) was then used to obtain absolute and relative concentration increments over adjacent 1- and 5-year periods during 2010-2014. The spatial distribution of the soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with multiple linear regression (PCA-MLR) was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of the soil heavy metal concentration increments should be ascribed to industry, agriculture, and traffic; in particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial and agricultural sources.
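
    A compact sketch of a PCA-MLR apportionment step of the kind described above: factor scores from a PCA of the standardized increments are regressed on the total increment, and the normalized regression coefficients approximate the source shares. The array shapes and random stand-in data are assumptions for illustration, not the study's data:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      X = rng.lognormal(0.0, 0.5, size=(60, 4))   # sites x metals (Cd, Cu, Pb, Zn)

      Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize before PCA
      scores = PCA(n_components=2).fit_transform(Z)

      total = X.sum(axis=1)                       # total increment per site
      mlr = LinearRegression().fit(scores, total)
      share = np.abs(mlr.coef_) / np.abs(mlr.coef_).sum()
      print("approximate factor contributions:", np.round(share, 2))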

  18. On the gravitational potential and field anomalies due to thin mass layers

    NASA Technical Reports Server (NTRS)

    Ockendon, J. R.; Turcotte, D. L.

    1977-01-01

    The gravitational potential and field anomalies for thin mass layers are derived using the technique of matched asymptotic expansions. An inner solution is obtained using an expansion in powers of the thickness and it is shown that the outer solution is given by a surface distribution of mass sources and dipoles. Coefficients are evaluated by matching the inner expansion of the outer solution with the outer expansion of the inner solution. The leading term in the inner expansion for the normal gravitational field gives the Bouguer formula. The leading term in the expansion for the gravitational potential gives an expression for the perturbation to the geoid. The predictions given by this term are compared with measurements by satellite altimetry. The second-order terms in the expansion for the gravitational field are required to predict the gravity anomaly at a continental margin. The results are compared with observations.
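
    For reference, the Bouguer formula recovered by the leading-order term is the infinite-slab attraction Δg = 2πGρh; a quick numerical check (the density is a conventional crustal reduction value, not taken from the paper):

      import math

      G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
      rho = 2670.0           # slab density, kg/m^3 (conventional crustal value)
      h = 1000.0             # slab thickness, m

      delta_g = 2.0 * math.pi * G * rho * h      # Bouguer slab attraction, m/s^2
      print(f"{delta_g * 1e5:.1f} mGal")         # about 112 mGal for a 1 km slab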

  19. Determining Water Content and Distribution in PEMFCs to Predict Aging While in Storage

    DOE PAGES

    Stariha, Sarah; Wilson, Mahlon Scott; LaManna, Jacob M.; ...

    2017-08-24

    Proton exchange membrane fuel cells (PEMFCs) have the potential to be long-term backup power sources with startup times on the order of seconds. Water management is the key issue in successfully storing PEMFCs for extended periods of time. In this work, custom-made PEMFCs were humidified at various relative humidities (%RH) and subsequently stored for different lengths of time. The fuel cells' water content was then imaged at the National Institute of Standards and Technology (NIST) neutron imaging facility. Finally, the cells' startup performances were measured under simulated quick-startup conditions to determine the effect of different water distributions.

  1. The 'sleeping beauty' galaxy NGC 4826: an almost textbook example of the Abelian Higgs vorto-source (-sink)

    NASA Astrophysics Data System (ADS)

    Saniga, Metod

    1995-03-01

    It is demonstrated that the kinematic 'peculiarity' of the early Sab galaxy NGC 4826 can easily be understood in terms of the Abelian Higgs (AH) model of spiral galaxies. A cylindrically symmetric AH vorto-source (-sink) with a disk-to-bulge ratio Ω > 1 is discussed and the distributions of the diagonal components of the corresponding stress-energy tensor T_μν are presented. It is argued that the sign-changing component T_φφ could account for the existence of two counter-rotating gas disks while negative values of T_rr imply inward gas motions as observed in the outer and transition regions of the galaxy.

  2. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  3. Finite-element solutions for geothermal systems

    NASA Technical Reports Server (NTRS)

    Chen, J. C.; Conel, J. E.

    1977-01-01

    Vector potential and scalar potential are used to formulate the governing equations for a single-component and single-phase geothermal system. By assuming an initial temperature field, the fluid velocity can be determined which, in turn, is used to calculate the convective heat transfer. The energy equation is then solved by considering convected heat as a distributed source. Using the resulting temperature to compute new source terms, the final results are obtained by iterations of the procedure. Finite-element methods are proposed for modeling of realistic geothermal systems; the advantages of such methods are discussed. The developed methodology is then applied to a sample problem. Favorable agreement is obtained by comparisons with a previous study.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the effective mass of spontaneously fissile material, the relative (α,n) production, and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  5. Large-scale Distribution of Arrival Directions of Cosmic Rays Detected Above 10^18 eV at the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antiči'c, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Badescu, A. M.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Buroker, L.; Burton, R. E.; Caballero-Mora, K. S.; Caccianiga, B.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chirinos Diaz, J.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; del Peral, L.; del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gemmeke, H.; Ghia, P. L.; Giller, M.; Gitto, J.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gouffon, P.; Grashorn, E.; Grebe, S.; Griffith, N.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jansen, S.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; LaHurd, D.; Latronico, L.; Lauer, R.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. 
A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Messina, S.; Meurer, C.; Meyhandan, R.; Mi'canovi'c, S.; Micheletti, M. I.; Minaya, I. A.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niechciol, M.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Oehlschläger, J.; Olinto, A.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Peķala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrolini, A.; Petrov, Y.; Pfendner, C.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Ponce, V. H.; Pontz, M.; Porcelli, A.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Cabo, I.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saftoiu, A.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Sima, O.; 'Smiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tapia, A.; Tartare, M.; Taşcău, O.; Tcaciuc, R.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Westerhoff, S.; Whelan, B. 
J.; Widom, A.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano Garcia, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.

    2012-12-01

    A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10^18 eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10^18 eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10^18 eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.

  6. Evaluation of the communications impact of a low power arcjet thruster

    NASA Technical Reports Server (NTRS)

    Carney, Lynnette M.

    1988-01-01

    The interaction of a 1 kW arcjet thruster plume with a communications signal is evaluated. A two-parameter, source flow equation has been used to represent the far flow field distribution of the arcjet plume in a realistic spacecraft configuration. Modelling the plume as a plasma slab, the interaction of the plume with a 4 GHz communications signal is then evaluated in terms of signal attenuation and phase shift between transmitting and receiving antennas. Except for propagation paths which pass very near the arcjet source, the impacts to transmission appear to be negligible. The dominant signal loss mechanism is refraction of the beam rather than absorption losses due to collisions. However, significant reflection of the signal at the sharp vacuum-plasma boundary may also occur for propagation paths which pass near the source.
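
    A sketch of the slab-model phase-shift estimate such an evaluation rests on, using the cold-plasma refractive index n = sqrt(1 - (fp/f)^2); the electron density and path length below are placeholders, not values from the study:

      import numpy as np

      def plasma_phase_shift(n_e_cm3, f_hz, path_m):
          # phase advance (rad) across a uniform plasma slab, valid for f > fp
          f_p = 8980.0 * np.sqrt(n_e_cm3)          # plasma frequency, Hz
          n_ref = np.sqrt(1.0 - (f_p / f_hz)**2)   # cold-plasma refractive index
          c = 2.998e8                              # speed of light, m/s
          return 2.0 * np.pi * f_hz / c * (1.0 - n_ref) * path_m

      # illustrative: 4 GHz link crossing 0.5 m of tenuous far-plume plasma
      print(plasma_phase_shift(n_e_cm3=1.0e8, f_hz=4.0e9, path_m=0.5), "rad")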

  7. Refined Source Terms in WAVEWATCH III with Wave Breaking and Sea Spray Forecasts

    DTIC Science & Technology

    2015-09-30

    Verifications were performed for the young wind seas reported by Schwendeman et al. (2014) and for the open-ocean cases reported by Sutherland and Melville (2015). ... The modeled Λ(c) distributions shown in Figure 3 follow a very similar dependence to the Sutherland and Melville observations to about 1-2 m/s. ... as well as Sutherland and Melville (2015), which show b_eff ~ O(10^-3). [Figure 4: modeled behavior of spectrally-integrated breaking.]

  8. Workshop on Future Directions for Optical Information Processing.

    DTIC Science & Technology

    1981-03-01

    The i-th reference point source simultaneously illuminates the i-th member of a family of n phase-encoding diffusers (e.g. shower glass, ground glass) ... diffuser (ground glass) section illuminated with a plane wave [35,37]. The n(n-1) = 4(3) = 12 crosstalk terms have been distributed into the noise ... for a 2x2 input array. [Fig. 6: output of processor analogous to that of Fig. 5, with 1.5x magnifier and ground glass diffuser, but using a spherical wavefront.]

  9. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We present a new closure for the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  10. Visualizing Distributions from Multi-Return Lidar Data to Understand Forest Structure

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Kramer, Marc; Luo, Alison; Dungan, Jennifer; Pang, Alex

    2004-01-01

    Spatially distributed probability density functions (pdfs) are becoming relevant to Earth scientists and ecologists because of stochastic models and new sensors that provide numerous realizations or data points per unit area. One source of these data is multi-return airborne lidar, a type of laser that records multiple returns for each pulse of light sent towards the ground. Data from multi-return lidar are a vital tool in helping us understand the structure of forest canopies over large extents. This paper presents several new visualization tools that allow scientists to rapidly explore, interpret, and discover characteristic distributions within the entire spatial field. The major contribution of this work is a paradigm shift that allows ecologists to think of and analyze their data in terms of the distribution. This reveals information on the modality and shape of the distribution that was previously not accessible. The tools allow scientists to depart from traditional parametric statistical analyses and to associate multimodal distribution characteristics with forest structures. Examples are given using data from High Island, southeast Alaska.

  11. Study on ion energy distribution in low-frequency oscillation time scale of Hall thrusters

    NASA Astrophysics Data System (ADS)

    Wei, Liqiu; Li, Wenbo; Ding, Yongjie; Han, Liang; Yu, Daren; Cao, Yong

    2017-11-01

    This paper reports on the dynamic characteristics of the distribution of ion energy during Hall thruster discharge in the low-frequency oscillation time scale through experimental studies, and a statistical analysis of the time-varying peak and width of ion energy and the ratio of high-energy ions during the low-frequency oscillation. The results show that the ion energy distribution exhibits a periodic change during the low-frequency oscillation. Moreover, the variation in the ion energy peak is opposite to that of the discharge current, and the variations in width of the ion energy distribution and the ratio of high-energy ions are consistent with that of the discharge current. The variation characteristics of the ion density and discharge potential were simulated by one-dimensional hybrid-direct kinetic simulations; the simulation results and analysis indicate that the periodic change in the distribution of ion energy during the low-frequency oscillation depends on the relationship between the ionization source term and discharge potential distribution during ionization in the discharge channel.

  12. A gossip based information fusion protocol for distributed frequent itemset mining

    NASA Astrophysics Data System (ADS)

    Sohrabi, Mohammad Karim

    2018-07-01

    The computational complexity, huge memory space requirement, and time-consuming nature of the frequent pattern mining process are the most important motivations for distributing and parallelizing this mining process. On the other hand, the emergence of distributed computational and operational environments, which causes data to be produced and maintained on different distributed data sources, makes the parallelization and distribution of the knowledge discovery process inevitable. In this paper, a gossip based distributed itemset mining (GDIM) algorithm is proposed to extract frequent itemsets, which are special types of frequent patterns, in a wireless sensor network environment. In this algorithm, local frequent itemsets of each sensor are extracted using a bit-wise horizontal approach (LHPM) from nodes that are clustered using a leach-based protocol. Heads of clusters exploit a gossip based protocol to communicate with each other and find the patterns whose global support is equal to or greater than the specified support threshold. Experimental results show that the proposed algorithm outperforms the best existing gossip based algorithm in terms of execution time.
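
    A minimal sketch of the gossip-style aggregation such cluster heads could use to agree on an itemset's global support. This shows plain pairwise-averaging gossip only, not the full GDIM protocol, and the per-node counts are made up:

      import random

      def gossip_average(values, rounds=200, seed=1):
          # pairwise averaging preserves the sum, so all nodes converge
          # to the network-wide mean of the local support counts
          random.seed(seed)
          vals = [float(v) for v in values]
          for _ in range(rounds):
              i, j = random.sample(range(len(vals)), 2)
              vals[i] = vals[j] = 0.5 * (vals[i] + vals[j])
          return vals

      local_support = [3, 7, 0, 5, 2, 4]   # per-node counts for one candidate itemset
      estimate = gossip_average(local_support)[0] * len(local_support)
      print("estimated global support:", round(estimate, 2))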

  13. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is then written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios, from 1.0 to 3.2.

  14. Water use sources of desert riparian Populus euphratica forests.

    PubMed

    Si, Jianhua; Feng, Qi; Cao, Shengkui; Yu, Tengfei; Zhao, Chunyan

    2014-09-01

    Desert riparian forests are the main body of natural oases in the lower reaches of inland rivers; their growth and distribution are closely related to water use sources. How does the desert riparian forest obtain a stable water source, and which water sources does it use to avoid or overcome water stress and survive? This paper analyzes the water sources of desert riparian Populus euphratica forests growing at sites with different groundwater depths and conditions, using the stable oxygen isotope technique and a linear mixed model of the isotopic values. The results showed that the main water source of Populus euphratica changes from water in a single soil layer or groundwater to deep subsoil water and groundwater as the depth of groundwater increases. This appears to be an adaptive selection to arid and water-deficient conditions and is a primary reason for the long-term survival of P. euphratica in the desert riparian forest of an extremely arid region. Water contributions from the various soil layers and from groundwater differed, and the desert riparian P. euphratica forests in different habitats had dissimilar water use strategies.
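
    The simplest case underlying such source partitioning is a two-end-member oxygen-isotope mass balance; the δ18O values below are invented for illustration (the study itself uses a linear mixed model over several soil layers and groundwater):

      def mixing_fraction(delta_plant, delta_source_a, delta_source_b):
          # delta_plant = f * delta_a + (1 - f) * delta_b  =>  solve for f
          return (delta_plant - delta_source_b) / (delta_source_a - delta_source_b)

      # illustrative per-mil values: shallow soil water (a) vs. groundwater (b)
      f = mixing_fraction(delta_plant=-7.2, delta_source_a=-5.0, delta_source_b=-9.5)
      print(f"fraction from shallow soil water: {f:.2f}")   # ~0.51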

  15. Monthly and diurnal variations in aerosol size distributions, downwind of the Seoul metropolitan area

    NASA Astrophysics Data System (ADS)

    Kim, B. S.; Choi, Y.; Ghim, Y. S.

    2014-12-01

    The size distribution of aerosols is a physical property. However, since major aerosol types such as mineral dust, secondary inorganic ions, and carbonaceous aerosols typically fall in specific size ranges, the chemical composition of aerosols can be estimated from the size distribution. We measured the mass size distribution of aerosols using an optical particle counter (Grimm Model 1.109) at 10-minute intervals for a year, from February 2013 to February 2014. The optical particle counter measures number concentrations between 0.25 and 32 μm in 31 bins and converts them into mass concentrations, assuming spherical particles with densities typical of urban aerosols, which originate from traffic and other combustion sources and are formed secondarily through photochemical reactions. The measurement site is on the rooftop of a five-story building on a hill (37.34 °N, 127.27 °E, 167 m above sea level), about 35 km southeast of downtown Seoul and downwind of the city under the prevailing northwesterlies. There are no major emission sources nearby except a 4-lane road running about 1.4 km to the west. We tried to characterize the bimodal structure of the mass size distribution, consisting of fine and coarse modes, in terms of mass concentration and mean diameter. Monthly and diurnal variations in the mass concentration and mean diameter of each mode were investigated to estimate major aerosol types as well as the major factors causing those variations.

  16. Investigation of three-dimensional localisation of radioactive sources using a fast organic liquid scintillator detector

    NASA Astrophysics Data System (ADS)

    Gamage, K. A. A.; Joyce, M. J.; Taylor, G. C.

    2013-04-01

    In this paper we discuss the possibility of locating radioactive sources in space using a scanning-based method, relative to the three-dimensional location of the detector. The scanning system comprises an organic liquid scintillator detector, a tungsten collimator, and an adjustable equatorial mount. The detector output is connected to a bespoke fast digitiser (Hybrid Instruments Ltd., UK) which streams digital samples to a personal computer. A radioactive source was attached to a vertical wall and data were collected in two stages. In the first stage, the scanning system was placed a couple of metres away from the wall; in the second, it was moved a few centimetres from the previous location, parallel to the wall. In each case data were collected from a grid of measurement points (a set of azimuth angles for a set of elevation angles) covering the source on the wall. The discrimination of fast neutrons and gamma rays detected by the organic liquid scintillator detector is carried out on the basis of pulse gradient analysis. Images are then produced in terms of the angular distribution of events for total counts, gamma rays, and neutrons for both stages. The three-dimensional location of the neutron source can be obtained by considering the relative separation of the centres of the corresponding images of the angular distribution of events. The measurements were made at the National Physical Laboratory, Teddington, Middlesex, UK.
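
    A sketch of the final triangulation step: the image centroids from the two scan positions define two rays, and the source is taken near the midpoint of the shortest segment between them. The geometry and angles here are invented, and the scanning and pulse-gradient discrimination are not reproduced:

      import numpy as np

      def direction(az_deg, el_deg):
          # unit pointing vector from azimuth/elevation in degrees
          az, el = np.radians(az_deg), np.radians(el_deg)
          return np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])

      def locate(p1, d1, p2, d2):
          # midpoint of the shortest segment between rays p1 + t*d1 and p2 + t*d2
          w0 = p1 - p2
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b
          t1 = (b * e - c * d) / denom
          t2 = (a * e - b * d) / denom
          return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

      p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.0, 0.0])  # scan positions, m
      print(locate(p1, direction(5.0, 2.0), p2, direction(-3.4, 2.0)))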

  17. Evaluation of PCB sources and releases for identifying priorities to reduce PCBs in Washington State (USA).

    PubMed

    Davies, Holly; Delistraty, Damon

    2016-02-01

    Polychlorinated biphenyls (PCBs) are ubiquitously distributed in the environment and produce multiple adverse effects in humans and wildlife. As a result, the purpose of our study was to characterize PCB sources in anthropogenic materials and releases to the environment in Washington State (USA) in order to formulate recommendations to reduce PCB exposures. Methods included review of relevant publications (e.g., open literature, industry studies and reports, federal and state government databases), scaling of PCB sources from national or county estimates to state estimates, and communication with industry associations and private and public utilities. Recognizing high associated uncertainty due to incomplete data, we strived to provide central tendency estimates for PCB sources. In terms of mass (high to low), PCB sources include lamp ballasts, caulk, small capacitors, large capacitors, and transformers. For perspective, these sources (200,000-500,000 kg) overwhelm PCBs estimated to reside in the Puget Sound ecosystem (1500 kg). Annual releases of PCBs to the environment (high to low) are attributed to lamp ballasts (400-1500 kg), inadvertent generation by industrial processes (900 kg), caulk (160 kg), small capacitors (3-150 kg), large capacitors (10-80 kg), pigments and dyes (0.02-31 kg), and transformers (<2 kg). Recommendations to characterize the extent of PCB distribution and decrease exposures include assessment of PCBs in buildings (e.g., schools) and replacement of these materials, development of Best Management Practices (BMPs) to contain PCBs, reduction of inadvertent generation of PCBs in consumer products, expansion of environmental monitoring and public education, and research to identify specific PCB congener profiles in human tissues.

  18. Increasing seismicity in the U. S. midcontinent: Implications for earthquake hazard

    USGS Publications Warehouse

    Ellsworth, William L.; Llenos, Andrea L.; McGarr, Arthur F.; Michael, Andrew J.; Rubinstein, Justin L.; Mueller, Charles S.; Petersen, Mark D.; Calais, Eric

    2015-01-01

    Earthquake activity in parts of the central United States has increased dramatically in recent years. The space-time distribution of the increased seismicity, as well as numerous published case studies, indicates that the increase is of anthropogenic origin, principally driven by injection of wastewater coproduced with oil and gas from tight formations. Enhanced oil recovery and long-term production also contribute to seismicity at a few locations. Preliminary hazard models indicate that areas experiencing the highest rate of earthquakes in 2014 have a short-term (one-year) hazard comparable to or higher than the hazard in the source region of tectonic earthquakes in the New Madrid and Charleston seismic zones.

  19. Numerical investigations of low-density nozzle flow by solving the Boltzmann equation

    NASA Technical Reports Server (NTRS)

    Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen

    1995-01-01

    A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u_z, with weighting factors 1 and u_z^2; (2) solving the transformed equations in physical space, for a given discrete ordinate, using the time-marching technique with four-stage Runge-Kutta time integration; Roe's second-order upwind difference scheme is used to discretize the convective terms, and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Steps 2 and 3 are repeated until the convergence criterion is reached. A low-density nozzle flow field has been calculated with this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement, demonstrating that numerical solutions of the BGK-Boltzmann equation are ready to be experimentally validated.

  20. Systematic Variability of the He+ Pickup Ion Velocity Distribution Function Observed with SOHO/CELIAS/CTOF

    NASA Astrophysics Data System (ADS)

    Taut, A.; Drews, C.; Berger, L.; Wimmer-Schweingruber, R. F.

    2015-12-01

    The 1D Velocity Distribution Function (VDF) of He+ pickup ions shows two distinct populations that reflect the sources of these ions. The highly suprathermal population is the result of the ionization and pickup of almost resting interstellar neutrals that are injected into the solar wind as a highly anisotropic torus distribution. The nearly thermalized population is centered around the solar wind bulk speed and is mainly attributed to inner-source pickup ions that originate in the inner heliosphere. It is generally believed that the initial torus distribution of interstellar pickup ions is rapidly isotropized by resonant wave-particle interactions, but recent observations by Drews et al. (2015) of a torus-like VDF strongly limit this isotropization. This in turn means that more observational data are needed to further characterize the kinetic behavior of pickup ions. In this study we use data from the Charge-Time-Of-Flight sensor on board SOHO. As this sensor offers unrivaled counting statistics for He+ together with a sufficient mass-per-charge resolution, it is well suited for investigating the He+ VDF on comparatively short timescales. We combine this data with the high-resolution magnetic field data from WIND via an extrapolation to the location of SOHO. With this combination of instruments we investigate the He+ VDF for time periods of different solar wind speeds, magnetic field directions, and wave power. We find a systematic trend of the short-term He+ VDF with these parameters. Especially by varying the considered magnetic field directions we observe a 1D projection of the anisotropic torus-like VDF. In addition, we investigate stream interaction regions and coronal mass ejections. In the latter we observe an excess of inner-source He+ that is accompanied by a significant increase in heavy pickup ion count rates. This may be linked to the as-yet ill-understood production mechanism of inner-source pickup ions.

  1. Characterizing error distributions for MISR and MODIS optical depth data

    NASA Astrophysics Data System (ADS)

    Paradise, S.; Braverman, A.; Kahn, R.; Wilson, B.

    2008-12-01

    The Multi-angle Imaging SpectroRadiometer (MISR) and Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's EOS satellites collect massive, long-term data records on aerosol amounts and particle properties. MISR and MODIS have different but complementary sampling characteristics. In order to realize maximum scientific benefit from these data, the nature of their error distributions must be quantified and understood so that discrepancies between them can be rectified and their information combined in the most beneficial way. By 'error' we mean all sources of discrepancy between the true value of the quantity of interest and the measured value, including instrument measurement errors, artifacts of retrieval algorithms, and differential spatial and temporal sampling characteristics. Previously in [Paradise et al., Fall AGU 2007: A12A-05] we presented a unified, global analysis and comparison of MISR and MODIS measurement biases and variances over the lives of the missions. We used AErosol RObotic NETwork (AERONET) data as ground truth and evaluated MISR and MODIS optical depth distributions relative to AERONET using simple linear regression. However, AERONET data are themselves instrumental measurements subject to sources of uncertainty. In this talk, we discuss results from an improved analysis of MISR and MODIS error distributions that uses errors-in-variables regression, accounting for uncertainties in both the dependent and independent variables. We demonstrate on optical depth data, but the method is generally applicable to other aerosol properties as well.
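
    A minimal sketch of an errors-in-variables fit of the kind described, using the closed-form Deming estimator with an assumed error-variance ratio; the synthetic optical-depth data are placeholders, not mission data:

      import numpy as np

      def deming_fit(x, y, delta=1.0):
          # delta = var(y errors) / var(x errors); returns (slope, intercept)
          mx, my = x.mean(), y.mean()
          sxx, syy = np.var(x), np.var(y)
          sxy = np.mean((x - mx) * (y - my))
          slope = ((syy - delta * sxx) +
                   np.sqrt((syy - delta * sxx)**2 + 4.0 * delta * sxy**2)) / (2.0 * sxy)
          return slope, my - slope * mx

      rng = np.random.default_rng(2)
      tau = rng.uniform(0.05, 0.6, 200)                   # "true" optical depth
      aeronet = tau + rng.normal(0.0, 0.02, 200)          # noisy ground truth
      satellite = 1.1 * tau + 0.01 + rng.normal(0.0, 0.04, 200)
      print(deming_fit(aeronet, satellite, delta=(0.04 / 0.02)**2))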

  2. Simulation studies on multi-mode heat transfer from an open cavity with a flush-mounted discrete heat source

    NASA Astrophysics Data System (ADS)

    Gururaja Rao, C.; Nagabhushana Rao, V.; Krishna Das, C.

    2008-04-01

    Prominent results of a simulation study on conjugate convection with surface radiation from an open cavity with a traversable, flush-mounted discrete heat source in the left wall are presented in this paper. The open cavity is considered to be of fixed height but with varying spacing between the legs. The position of the heat source is varied along the left leg of the cavity. The governing equations for the temperature distribution along the cavity are obtained by making an energy balance between heat generated, conducted, convected, and radiated. Radiation terms are tackled using the radiosity-irradiation formulation, and the view factors therein are evaluated using Hottel's crossed-string method. The resulting non-linear partial differential equations are converted into algebraic form using a finite difference formulation and are subsequently solved by the Gauss-Seidel iterative technique. An optimum grid system comprising 111 grids along the legs of the cavity, with 30 grids in the heat source and 31 grids across the cavity, has been used. The effects of various parameters, such as surface emissivity, convection heat transfer coefficient, aspect ratio, and thermal conductivity, on the important results, including the local temperature distribution along the cavity, the peak temperatures in the left and right legs, and the relative contributions of convection and radiation to heat dissipation, are studied in great detail.
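
    A stripped-down sketch of a Gauss-Seidel sweep for such a conduction-convection balance, treating one wall as a 1D fin with a discrete heat source; the radiosity terms of the paper are omitted and all parameter values are illustrative:

      import numpy as np

      def gauss_seidel_fin(n=111, heater=range(40, 70), q_gen=2.0e4,
                           h=10.0, k=0.5, t_amb=300.0,
                           dx=0.005, thickness=0.002, tol=1.0e-6):
          # node balance: k*(T[i-1] - 2T[i] + T[i+1])/dx^2
          #               - (h/thickness)*(T[i] - t_amb) + q = 0
          T = np.full(n, t_amb)          # ends stay at t_amb (Dirichlet)
          while True:
              worst = 0.0
              for i in range(1, n - 1):
                  q = q_gen if i in heater else 0.0
                  num = k / dx**2 * (T[i - 1] + T[i + 1]) + h / thickness * t_amb + q
                  new = num / (2.0 * k / dx**2 + h / thickness)
                  worst = max(worst, abs(new - T[i]))
                  T[i] = new
              if worst < tol:
                  return T

      print(f"peak temperature: {gauss_seidel_fin().max():.1f} K")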

  3. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  4. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  5. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 23... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose functioning... power supply system, distribution system, or other utilization system. (b) In determining compliance...

  6. Long-term measurements of submicrometer aerosol chemistry at the Southern Great Plains (SGP) using an Aerosol Chemical Speciation Monitor (ACSM)

    DOE PAGES

    Parworth, Caroline; Tilp, Alison; Fast, Jerome; ...

    2015-04-01

    In this study the long-term trends of non-refractory submicrometer aerosol (NR-PM1) composition and mass concentration measured by an Aerosol Chemical Speciation Monitor (ACSM) at the Atmospheric Radiation Measurement (ARM) program's Southern Great Plains (SGP) site are discussed. NR-PM1 data were recorded at ~30 min intervals over a period of 19 months between November 2010 and June 2012. Positive Matrix Factorization (PMF) was performed on the measured organic mass spectral matrix using a rolling window technique to derive factors associated with distinct sources, evolution processes, and physicochemical properties. The rolling window approach also allows us to capture the dynamic variations of the chemical properties in the organic aerosol (OA) factors over time. Three OA factors were obtained, including two oxygenated OA (OOA) factors, differing in degree of oxidation, and a biomass burning OA (BBOA) factor. Back trajectory analyses were performed to investigate possible sources of major NR-PM1 species at the SGP site. Organics dominated the NR-PM1 mass concentration for the majority of the study, with the exception of winter, when ammonium nitrate increased due to transport of precursor species from surrounding urban and agricultural areas and cooler temperatures. Sulfate mass concentrations show little seasonal variation, with mixed regional and local sources. In the spring, BBOA emissions increase and are mainly associated with local fires. Isoprene and carbon monoxide emission rates were obtained from the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and the 2011 U.S. National Emissions Inventory to represent the spatial distributions of biogenic and anthropogenic sources, respectively. The combined spatial distribution of isoprene emissions and air mass trajectories suggests that biogenic emissions from the southeast contribute to SOA formation at the SGP site during the summer.

  7. Microplastics in Freshwater River Sediments in Shanghai, China: A Case Study of Risk Assessment in Mega Cities

    NASA Astrophysics Data System (ADS)

    Peng, G.; Xu, P.

    2017-12-01

    Microplastics are plastics that measure less than 5 mm; they have attracted rapidly growing interest in recent years. Microplastics are widely distributed in water, sediments, and biota. Most distribution studies focus on the marine environment, yet methods to conduct risk assessment are limited. The widespread occurrence of microplastics has raised alarm for the well-being of marine living resources because of their demonstrated negative ecological effects. To understand the distribution of microplastics in urban rivers and the sources of marine microplastics, we investigated river sediments in Shanghai, the biggest city in China. Seven sampling sites covered most of the city's central districts, including one site on a tidal flat. Density separation, microscopic inspection, and identification were conducted to analyze microplastic abundance, shape, and color. Pellets were the most prevalent shape, followed by fibers and fragments. White microplastics were the most common type in terms of color, and white foamed pellets were widely distributed in urban river sediments. Microplastic abundance in the rivers was one to two orders of magnitude higher than on the tidal flat. The significant difference between river and tidal flat samples leads to the conclusion that coastal rivers may be a source of marine microplastics; in situ data and sound estimates should therefore be considered by policy-makers. Seven types of microplastics were identified by μ-FT-IR analysis, indicating a secondary source. A comparison between two types of μ-FT-IR instruments is summarized. A framework for environmental risk assessment of microplastics in sediments is proposed, with indicators and ranks selected for the assessment. It is recommended to select indices, integrate statistical data, follow expert opinion extensively, and construct a comprehensive evaluation method and ecological risk assessment system for the Chinese context.

  8. The Role of Inverse Compton Scattering in Solar Coronal Hard X-Ray and γ-Ray Sources

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Bastian, T. S.

    2012-05-01

    Coronal hard X-ray (HXR) and continuum γ-ray sources associated with the impulsive phase of solar flares have been the subject of renewed interest in recent years. They have been interpreted in terms of thin-target, non-thermal bremsstrahlung emission. This interpretation has led to rather extreme physical requirements in some cases. For example, in one case, essentially all of the electrons in the source must be accelerated to non-thermal energies to account for the coronal HXR source. In other cases, the extremely hard photon spectra of the coronal continuum γ-ray emission suggest that the low-energy cutoff of the electron energy distribution lies in the MeV energy range. Here, we consider the role of inverse Compton scattering (ICS) as an alternate emission mechanism in both the ultra- and mildly relativistic regimes. It is known that relativistic electrons are produced during powerful flares; these are capable of upscattering soft photospheric photons to HXR and γ-ray energies. Previously overlooked is the fact that mildly relativistic electrons, generally produced in much greater numbers in flares of all sizes, can upscatter extreme-ultraviolet/soft X-ray photons to HXR energies. We also explore ICS on anisotropic electron distributions and show that the resulting emission can be significantly enhanced over an isotropic electron distribution for favorable viewing geometries. We briefly review results from bremsstrahlung emission and reconsider circumstances under which non-thermal bremsstrahlung or ICS would be favored. Finally, we consider a selection of coronal HXR and γ-ray events and find that in some cases the ICS is a viable alternative emission mechanism.
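
    The energetics behind this argument can be checked with the standard single-scattering inverse Compton relation (Thomson regime); the photon energy and Lorentz factor below are chosen for illustration, not taken from the paper:

      % Thomson-regime inverse Compton upscattering (isotropic photon field):
      \varepsilon_{\rm out} \approx \tfrac{4}{3}\,\gamma^{2}\,\varepsilon_{\rm in},
      \qquad \varepsilon_{\rm in} = 1~{\rm keV}~({\rm SXR}),\;\gamma = 5
      \;\Rightarrow\; \varepsilon_{\rm out} \approx 33~{\rm keV}~({\rm HXR}).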

  9. Gravitational wave source counts at high redshift and in models with extra dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel, E-mail: juan.garciabellido@uam.es, E-mail: savvas.nesseris@csic.es, E-mail: manuel.trashorras@csic.es

    2016-07-01

    Gravitational wave (GW) source counts have been recently shown to be able to test how gravitational radiation propagates with the distance from the source. Here, we extend this formalism to cosmological scales, i.e. the high redshift regime, and we discuss the complications of applying this methodology to high redshift sources. We also allow for models with compactified extra dimensions like in the Kaluza-Klein model. Furthermore, we also consider the case of intermediate redshifts, i.e. 0 < z ≲ 1, where we show it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ω_m,0 of the cosmological constant model or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher order corrections of the source counts in terms of the signal-to-noise S/N. We then forecast the sensitivity of future observations in constraining GW physics but also the underlying cosmology by simulating sources distributed over a finite range of signal-to-noise, with the number of sources ranging from 10 to 500 as expected from future detectors. We find that with 500 events it will be possible to provide constraints on the matter density parameter at present, Ω_m,0, on the order of a few percent, with the precision growing fast with the number of events. In the case of extra dimensions we find that, depending on the degeneracies of the model, with 500 events it may be possible to provide stringent limits on the existence of the extra dimensions if the aforementioned degeneracies can be broken.
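
    As context for such count-based tests (a standard Euclidean baseline, stated here as background rather than taken from the paper): for sources of fixed intrinsic strength distributed uniformly in flat, static space, the signal-to-noise falls off as ρ ∝ 1/r, so

      % uniform Euclidean source distribution, rho = S/N proportional to 1/r:
      N(>\rho) \;\propto\; r^{3} \;\propto\; \rho^{-3}
      \quad\Longrightarrow\quad
      \frac{dN}{d\rho} \;\propto\; \rho^{-4}.

    Departures from this ρ^-4 slope at cosmological distances, or in models with extra dimensions, are what the source counts can constrain.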

  10. New VLBI2010 scheduling strategies and implications on the terrestrial reference frames.

    PubMed

    Sun, Jing; Böhm, Johannes; Nilsson, Tobias; Krásná, Hana; Böhm, Sigrid; Schuh, Harald

    In connection with the work for the next generation VLBI2010 Global Observing System (VGOS) of the International VLBI Service for Geodesy and Astrometry, a new scheduling package (Vie_Sched) has been developed at the Vienna University of Technology as a part of the Vienna VLBI Software. In addition to the classical station-based approach it is equipped with a new scheduling strategy based on the radio sources to be observed. We introduce different configurations of source-based scheduling options and investigate the implications on present and future VLBI2010 geodetic schedules. By comparison to existing VLBI schedules of the continuous campaign CONT11, we find that the source-based approach with two sources has a performance similar to the station-based approach in terms of number of observations, sky coverage, and geodetic parameters. For an artificial 16 station VLBI2010 network, the source-based approach with four sources provides an improved distribution of source observations on the celestial sphere. Monte Carlo simulations yield slightly better repeatabilities of station coordinates with the source-based approach with two sources or four sources than the classical strategy. The new VLBI scheduling software with its alternative scheduling strategy offers a promising option with respect to applications of the VGOS.

  12. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite-fault models. We obtain a picture of the earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of the scale invariance of stress drop with source dimension and analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
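
    For orientation, point-source stress-drop estimates of the kind aggregated here usually rest on the standard circular-crack relation (quoted for reference, not taken from this paper):

```latex
\Delta\sigma \;=\; \frac{7}{16}\,\frac{M_0}{r^{3}},
\qquad
r \;=\; \frac{k\,\beta}{f_c}
```

    where M_0 is the seismic moment, r the source radius, β the shear-wave speed, f_c the corner frequency, and k a model-dependent constant (e.g., k ≈ 0.37 for the Brune model). The cubic dependence on r is one reason stress-drop estimates scatter so widely: modest uncertainty in the inferred source dimension propagates into order-of-magnitude uncertainty in Δσ.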

  13. Long-term monitoring of persistent organic pollutants (POPs) at the Norwegian Troll station in Dronning Maud Land, Antarctica

    NASA Astrophysics Data System (ADS)

    Kallenborn, R.; Breivik, K.; Eckhardt, S.; Lunder, C. R.; Manø, S.; Schlabach, M.; Stohl, A.

    2013-03-01

    A first long-term monitoring of selected persistent organic pollutants (POPs) in Antarctic air has been conducted at the Norwegian research station Troll (Dronning Maud Land). As target contaminants, 32 PCB congeners, α- and γ-hexachlorocyclohexane (HCH), trans- and cis-chlordane, trans- and cis-nonachlor, p,p'- and o,p-DDT, DDD, DDE as well as hexachlorobenzene (HCB) were selected. The monitoring program, with weekly samples taken during the period 2007-2010, was coordinated with the parallel program at the Norwegian Arctic monitoring site (Zeppelin mountain, Ny-Ålesund, Svalbard) in terms of priority compounds, sampling schedule and analytical methods. The POP concentration levels found in Antarctica were considerably lower than Arctic atmospheric background concentrations. As also observed for Arctic samples, HCB was the predominant POP compound, with levels of around 22 pg m⁻³ throughout the entire monitoring period. In general, the following concentration distribution was found for the Troll samples analyzed: HCB > Sum HCH > Sum PCB > Sum DDT > Sum chlordanes. Atmospheric long-range transport was identified as a major contamination source for POPs in Antarctic environments. Several long-range transport events with elevated levels of pesticides and/or compounds with industrial sources were identified based on retroplume calculations with a Lagrangian particle dispersion model (FLEXPART). The POP levels determined in Troll air were compared with concentrations found in earlier measurement campaigns at other Antarctic research stations over the past 18 years. Except for HCB, for which similar concentration distributions were observed in all sampling campaigns, concentrations in the recent Troll samples were lower than in samples collected during the early 1990s. These concentration reductions are most likely a direct consequence of international regulations restricting the usage of POP-like chemicals on a worldwide scale.

  14. Development of a Distributed Source Containment Transport, Transformation, and Fate (CTT&F) Sub-Model for Military Installations

    DTIC Science & Technology

    2007-08-01

    ...includes soil erodibility terms from the Universal Soil Loss Equation (USLE) for estimating the overland sediment transport capacity (for both the x and y directions)... q = unit flow rate of water = v_a·h [L²/T]; v_c = critical velocity for overland erosion [L/T]; K = USLE soil erodibility factor; C = USLE soil cover factor; P = USLE soil management practice factor; B_e = width of the eroding surface in the flow direction [L]. In channels, sediment particles can be...
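
    The USLE factors named in this excerpt combine multiplicatively in the textbook form of the equation. A minimal sketch with illustrative (hypothetical) parameter values, not the CTT&F sub-model itself:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A from the Universal Soil Loss Equation:
    A = R * K * LS * C * P, with rainfall erosivity R, soil erodibility K,
    slope length-steepness LS, cover C, and support-practice P factors."""
    return R * K * LS * C * P

# Illustrative (hypothetical) parameter values only:
A = usle_soil_loss(R=170.0, K=0.28, LS=1.2, C=0.15, P=0.8)
print(f"estimated annual soil loss: {A:.1f} (units follow R and K)")
```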

  15. Radio-interferometric imaging of the subsurface emissions from the planet Mercury

    NASA Technical Reports Server (NTRS)

    Burns, J. O.; Zeilik, M.; Gisler, G. R.; Borovsky, J. E.; Baker, D. N.

    1987-01-01

    The distribution of total and polarized intensities from Mercury's subsurface layers has been mapped using VLA observations. The first detection of a hot pole along the Hermean equator is reported and modeled as black-body reradiation from preferential diurnal heating. These observations appear to rule out any internal sources of heat within Mercury. Polarized emission from the limb of the planet is also found, and is understood in terms of the dielectric properties of the Hermean surface.

  16. Ultraviolet and visible variability of the coma of Comet Levy (1990c)

    NASA Technical Reports Server (NTRS)

    Feldman, P. D.; Budzien, S. A.; Festou, M. C.; A'Hearn, M. F.; Tozzi, G. P.

    1992-01-01

    A visible lightcurve of Comet Levy obtained with the IUE Fine Error Sensor has revealed short-term coma variability. A production-rate source function is derivable from these data which implies a nucleus exhibiting hemispherically asymmetric activity. The ratio of gas-to-dust-production rates is also noted to exhibit asymmetry. The low dust-outflow velocity derived from observations, at about 200 m/sec, indicates a distribution that is rich in large, 3-10 micron particles.

  17. Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics

    PubMed Central

    Petrov, Yury

    2012-01-01

    EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms, where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF), and efficiently reduces the effect of sensor noise. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
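
    The dimensionality reduction described here amounts to solving a regularized least-squares problem in a reduced spatial basis. A schematic numpy sketch of that general idea (random stand-ins for the leadfield and basis; not the published Harmony implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_dipoles, n_basis = 64, 5000, 100

# Random stand-ins: a real application would use a BEM leadfield and an
# anatomically mapped spherical-harmonic basis.
L = rng.standard_normal((n_sensors, n_dipoles))   # forward model (leadfield)
B = rng.standard_normal((n_dipoles, n_basis))     # reduced spatial basis
y = rng.standard_normal(n_sensors)                # measured sensor vector

# Fit y ~ (L B) c with Tikhonov regularization, then expand back to dipoles.
A = L @ B
lam = 0.1
c = np.linalg.solve(A.T @ A + lam * np.eye(n_basis), A.T @ y)
source_estimate = B @ c                           # smooth distributed estimate
print(source_estimate.shape)                      # (5000,)
```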

  18. Potential sources of precipitation in Lake Baikal basin

    NASA Astrophysics Data System (ADS)

    Shukurov, K. A.; Mokhov, I. I.

    2017-11-01

    Based on data from long-term measurements at 23 meteorological stations in the Russian part of the Lake Baikal basin, the probabilities of daily precipitation of different intensities and their contributions to the total precipitation are estimated. Using the trajectory model HYSPLIT_4, 10-day backward trajectories of air parcels, the heights of these trajectories, and the distribution of specific humidity along the trajectories are calculated for each meteorological station for the period 1948-2016. The average field of the power of potential sources of daily precipitation (less than 10 mm) for all meteorological stations in the Russian part of the Lake Baikal basin was obtained using the CWT (concentration weighted trajectory) method. The areas from which water vapor can be transported to the Lake Baikal basin within 10 days have been identified, as well as the regions of the most and least powerful potential sources. The fields of the mean height of air-parcel trajectories and the mean specific humidity along the trajectories are compared with the field of the mean power of potential sources.
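
    The CWT method assigns each grid cell the residence-time-weighted mean of the receptor values carried by all trajectories passing through it, CWT_ij = Σ_l c_l τ_ijl / Σ_l τ_ijl. A minimal sketch with synthetic arrays (names and shapes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_traj, nx, ny = 200, 40, 30

# tau[l, i, j]: residence time of back-trajectory l in grid cell (i, j);
# conc[l]: daily precipitation (or concentration) observed for trajectory l.
# Both arrays are synthetic stand-ins here.
tau = rng.exponential(1.0, (n_traj, nx, ny))
conc = rng.gamma(2.0, 3.0, n_traj)

num = np.einsum("l,lij->ij", conc, tau)   # sum_l c_l * tau_{l,ij}
den = tau.sum(axis=0)                     # sum_l tau_{l,ij}
cwt = np.where(den > 0, num / den, np.nan)
print(cwt.shape)                          # (40, 30) field of source "power"
```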

  19. Distribution of "Compound" and "Simple" Flows in the Deccan Traps (India)

    NASA Astrophysics Data System (ADS)

    Vanderkluysen, L.; Self, S.; Jay, A. E.; Sheth, H. C.; Clarke, A. B.

    2014-12-01

    The Deccan Traps are a dominantly mafic large igneous province (LIP) that, prior to erosion, covered ~1 million km² of west-central India with lava flows. The type sections of the Western Ghats escarpment, where the Deccan lava pile reaches a maximum reconstructed stratigraphic thickness of ~3400 m, are subdivided into eleven formations defined on chemo-stratigraphic grounds. Earlier work recognized that Deccan basalt flows were emplaced primarily in two main modes: as stacks of meter-sized pāhoehoe toes and lobes, termed "compound" flows, or as inflated sheet lobes tens to hundreds of meters in width and meters to tens of meters in height, previously termed "simple" flows. Initially, the distribution of small lobes and sheet lobes in the Deccan was thought to be controlled by distance from source, but later work suggested the distribution to be controlled mainly along stratigraphic, formational boundaries, with six of the lower formations composed exclusively of compound flows and the upper 4-5 formations built wholly of sheet lobes. This simple stratigraphic subdivision of lava flow morphologies has also been documented in the volcanic architecture of other LIPs, e.g., the Etendeka, the Ethiopian Traps, and the Faeroe Islands (North Atlantic LIP). Our examination of eight sections carefully logged along the Western Ghats challenges this traditional view. Where the lower Deccan formations crop out, we found that as much as 65% of the exposed thickness (below the Khandala Formation) is made up of sheet lobes, from 40% in the Bhimashankar Formation to 75% in the Thakurvadi Formation. Near the bottom of the sequence, 25% of the Neral Formation is composed of sheet lobes ≥15 m in thickness. This distribution of lava flow morphology does not seem to be noticeably affected by the inferred distance to the source (based on the location of similar-composition dikes for each formation). Several mechanisms have been proposed to explain the development of compound flows and inflated sheet lobes, involving one or more of the following factors: underlying slope, varying effusion rate, and source geometry. Analogue experiments are currently under way to test the relative influence of each of these factors in the development of different lava flow morphologies in LIPs.

  20. High energy variability of 3C 273 during the AGILE multiwavelength campaign of December 2007-January 2008

    NASA Astrophysics Data System (ADS)

    Pacciani, L.; Donnarumma, I.; Vittorini, V.; D'Ammando, F.; Fiocchi, M. T.; Impiombato, D.; Stratta, G.; Verrecchia, F.; Bulgarelli, A.; Chen, A. W.; Giuliani, A.; Longo, F.; Pucella, G.; Vercellone, S.; Tavani, M.; Argan, A.; Barbiellini, G.; Boffelli, F.; Caraveo, P. A.; Cattaneo, P. W.; Cocco, V.; Costa, E.; Del Monte, E.; Di Cocco, G.; Evangelista, Y.; Feroci, M.; Froysland, T.; Fuschino, F.; Galli, M.; Gianotti, F.; Labanti, C.; Lapshov, I.; Lazzarotto, F.; Lipari, P.; Marisaldi, M.; Mereghetti, S.; Morselli, A.; Pellizzoni, A.; Perotti, F.; Picozza, P.; Prest, M.; Rapisarda, M.; Soffitta, P.; Trifoglio, M.; Tosti, G.; Trois, A.; Vallazza, E.; Zanello, D.; Antonelli, L. A.; Colafrancesco, S.; Cutini, S.; Gasparrini, D.; Giommi, P.; Pittori, C.; Salotti, L.

    2009-01-01

    Context: We report the results of a 3-week multi-wavelength campaign targeting the flat spectrum radio quasar 3C 273, carried out with the AGILE gamma-ray mission (covering the 30 MeV-50 GeV and 18-60 keV bands), the REM observatory (near-IR and optical), Swift (near-UV/optical, 0.2-10 keV and 15-50 keV), INTEGRAL (3-200 keV) and Rossi XTE (2-12 keV). This is the first observational campaign including gamma-ray data since the last EGRET observations, more than 8 years earlier. Aims: This campaign was organized by the AGILE team with the aim of observing, studying and modelling the broad-band energy spectrum of the source and its variability on a week timescale, testing the emission models describing the spectral energy distribution of this source. Methods: Our study was carried out using simultaneous light curves of the source flux from all the involved instruments, in the different energy ranges, to search for correlated variability. A time-resolved spectral energy distribution was then used for detailed physical modelling of the emission mechanisms. Results: The source was detected in gamma-rays only in the second week of our campaign, with a flux comparable to the level detected by EGRET in June 1991. We found an indication of a possible anti-correlation between the emission at gamma-rays and at soft and hard X-rays, supported by the complete set of instruments. Optical data, instead, do not show short-term variability, as expected for this source. Only in two earlier EGRET observations (in 1993 and 1997) did 3C 273 show intra-observation variability in gamma-rays. In the 1997 observation, the flux variation in gamma-rays was associated with a synchrotron flare. The energy-density spectrum with almost simultaneous data partially covers the regions of synchrotron emission, the big blue bump, and the inverse-Compton emission. We adopted a leptonic model to explain the hard X/gamma-ray emissions, although from our analysis hadronic models cannot be ruled out. In the adopted model, the soft X-ray emission is consistent with combined synchrotron self-Compton and external Compton mechanisms, while the hard X-ray and gamma-ray emissions are compatible with external Compton scattering of thermal photons from the disk. Under this model, the time evolution of the spectral energy distribution is well interpreted and modelled in terms of an acceleration episode of the electron population, leading to a shift of the inverse-Compton peak towards higher energies.

  1. Multisource inverse-geometry CT. Part I. System concept and development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Man, Bruno, E-mail: deman@ge.com; Harrison, Dan

    Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom-made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated at 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner. The authors summarized the design and implementation of the scanner and presented results with phantoms and small animals.

  2. Multisource inverse-geometry CT. Part I. System concept and development

    PubMed Central

    De Man, Bruno; Uribe, Jorge; Baek, Jongduk; Harrison, Dan; Yin, Zhye; Longtin, Randy; Roy, Jaydeep; Waters, Bill; Wilson, Colin; Short, Jonathan; Inzinna, Lou; Reynolds, Joseph; Neculaes, V. Bogdan; Frutschy, Kristopher; Senzig, Bob; Pelc, Norbert

    2016-01-01

    Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner. The authors summarized the design and implementation of the scanner and the authors presented results with phantoms and small animals. PMID:27487877

  3. Method and apparatus for reducing the harmonic currents in alternating-current distribution networks

    DOEpatents

    Beverly, Leon H.; Hance, Richard D.; Kristalinski, Alexandr L.; Visser, Age T.

    1996-01-01

    An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. The apparatus and method also employ a mechanism for increasing the source neutral impedance of the AC source distribution network. This mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer.

  4. Method and apparatus for reducing the harmonic currents in alternating-current distribution networks

    DOEpatents

    Beverly, L.H.; Hance, R.D.; Kristalinski, A.L.; Visser, A.T.

    1996-11-19

    An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. The apparatus and method also employ a mechanism for increasing the source neutral impedance of the AC source distribution network. This mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer. 23 figs.

  5. Relativity, nonextensivity, and extended power law distributions.

    PubMed

    Silva, R; Lima, J A S

    2005-11-01

    A proof of the relativistic H-theorem including nonextensive effects is given. As in the nonrelativistic limit, the molecular chaos hypothesis advanced by Boltzmann does not remain valid, and the second law of thermodynamics combined with a duality transformation implies that the nonextensive parameter q lies in the interval [0,2]. It is also proven that the collisional equilibrium states (null entropy source term) are described by the relativistic power-law extension of the exponential Jüttner distribution, which reduces, in the nonrelativistic domain, to the Tsallis power-law function. As a simple illustration of the basic approach, we derive the relativistic nonextensive equilibrium distribution for a dilute charged gas under the action of an electromagnetic field. Such results reduce to the standard ones in the extensive limit, thereby showing that the nonextensive entropic framework can be harmonized with the space-time ideas contained in the special relativity theory.
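
    The power-law extension referred to here has the familiar Tsallis q-exponential form (notation assumed for illustration):

```latex
f(p) \;\propto\; \Bigl[\,1-(1-q)\,\frac{E(p)}{k_{B}T}\Bigr]^{\frac{1}{1-q}},
\qquad
E(p)=\sqrt{p^{2}c^{2}+m^{2}c^{4}}
```

    which recovers the exponential Jüttner form f ∝ exp(-E/k_B T) in the extensive limit q → 1, and the Tsallis power-law function in the nonrelativistic regime.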

  6. A Simplified Theory of Coupled Oscillator Array Phase Control

    NASA Technical Reports Server (NTRS)

    Pogorzelski, R. J.; York, R. A.

    1997-01-01

    Linear and planar arrays of coupled oscillators have been proposed as a means of achieving high-power rf sources through coherent spatial power combining. In such applications, a uniform phase distribution over the aperture is desired. However, it has been shown that by detuning some of the oscillators away from the oscillation frequency of the ensemble, one may achieve other useful aperture phase distributions. Notable among these are linear phase distributions, which steer the output rf beam away from the broadside direction. The theory describing the operation of such arrays of coupled oscillators is quite complicated, since the phenomena involved are inherently nonlinear. This has made it difficult to develop an intuitive understanding of the impact of oscillator tuning on phase control and has thus impeded practical application. In this work a simplified theory is developed which facilitates intuitive understanding by establishing an analog of the phase control problem in terms of electrostatics.
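
    The usual starting point for such arrays is a chain of nearest-neighbour coupled phase equations; antisymmetrically detuning only the two end oscillators locks the ensemble into a linear phase ramp, which steers the beam. A minimal sketch of this generic Kuramoto-type model (not the paper's electrostatic formulation):

```python
import numpy as np

n, kappa, dt, steps = 9, 5.0, 1e-3, 40_000
omega = np.zeros(n)
omega[0], omega[-1] = -2.0, 2.0        # detune only the two end oscillators

theta = np.zeros(n)
for _ in range(steps):                 # forward-Euler integration
    coupling = np.zeros(n)
    coupling[:-1] += np.sin(theta[1:] - theta[:-1])   # right neighbour
    coupling[1:] += np.sin(theta[:-1] - theta[1:])    # left neighbour
    theta += dt * (omega + kappa * coupling)

# Inter-element phase differences settle to a nearly constant value,
# i.e. a linear phase ramp across the aperture that steers the beam.
print(np.degrees(np.diff(theta)).round(1))
```

    In the locked state the inter-element phase difference satisfies sin φ = Δω/κ, so for the values above the printout should show nearly constant differences of about 23.6 degrees.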

  7. Reaction formulation for radiation and scattering from plates, corner reflectors and dielectric-coated cylinders

    NASA Technical Reports Server (NTRS)

    Wang, N. N.

    1974-01-01

    The reaction concept is employed to formulate an integral equation for radiation and scattering from plates, corner reflectors, and dielectric-coated conducting cylinders. The surface-current density on the conducting surface is expanded with subsectional bases. The dielectric layer is modeled with polarization currents radiating in free space. Maxwell's equations and the boundary conditions are employed to express the polarization-current distribution in terms of the surface-current density on the conducting surface. By enforcing reaction tests with an array of electric test sources, the moment method is employed to reduce the integral equation to a matrix equation. Inversion of the matrix equation yields the current distribution, and the scattered field is then obtained by integrating the current distribution. The theory, computer program and numerical results are presented for radiation and scattering from plates, corner reflectors, and dielectric-coated conducting cylinders.
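
    Generically, the moment method reduces such an integral equation to a matrix equation Z I = V, which is solved for the current coefficients. A toy one-dimensional sketch with pulse bases and point matching (the smoothed kernel is a stand-in, not the paper's reaction-based operator):

```python
import numpy as np

# Toy 1-D integral equation: integral of K(x, x') I(x') dx' = E(x) on [0, 1],
# discretized with N pulse basis functions and point matching at centers.
N = 40
x = (np.arange(N) + 0.5) / N
dx = 1.0 / N

def kernel(xm, xn):
    # Smoothed static kernel standing in for the true Green's function.
    return 1.0 / np.sqrt((xm - xn) ** 2 + (0.5 * dx) ** 2)

Z = kernel(x[:, None], x[None, :]) * dx   # "impedance" matrix
V = np.ones(N)                            # uniform excitation vector
I = np.linalg.solve(Z, V)                 # basis coefficients of the current
print(I[:5].round(4))
```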

  8. Characterising RNA secondary structure space using information entropy

    PubMed Central

    2013-01-01

    Comparative methods for RNA secondary structure prediction use evolutionary information from RNA alignments to increase prediction accuracy. The model is often described in terms of stochastic context-free grammars (SCFGs), which generate a probability distribution over secondary structures. It is, however, unclear how this probability distribution changes as a function of the input alignment. As prediction programs typically only return a single secondary structure, better characterisation of the underlying probability space of RNA secondary structures is of great interest. In this work, we show how to efficiently compute the information entropy of the probability distribution over RNA secondary structures produced for RNA alignments by a phylo-SCFG, and implement it for the PPfold model. We also discuss interpretations and applications of this quantity, including how it can clarify reasons for low prediction reliability scores. PPfold and its source code are available from http://birc.au.dk/software/ppfold/. PMID:23368905

  9. Shock-drift particle acceleration in superluminal shocks - A model for hot spots in extragalactic radio sources

    NASA Technical Reports Server (NTRS)

    Begelman, Mitchell C.; Kirk, John G.

    1990-01-01

    Shock-drift acceleration at relativistic shock fronts is investigated using a fully relativistic treatment of both the microphysics of the shock-drift acceleration and the macrophysics of the shock front. By explicitly tracing particle trajectories across shocks, it is shown how the adiabatic invariance of a particle's magnetic moment breaks down as the upstream shock speed becomes relativistic, and is recovered at subrelativistic velocities. These calculations enable the mean energy gain of a particle encountering the shock at a given pitch angle to be calculated. The results are used to construct the downstream electron distribution function in terms of the incident distribution function and the bulk properties of the shock. The synchrotron emissivity of the transmitted distribution is calculated, and it is demonstrated that amplification factors are easily obtained which are more than adequate to explain the observed contrasts in surface brightness between jets and hot spots.

  10. Direct design of aspherical lenses for extended non-Lambertian sources in three-dimensional rotational geometry

    PubMed Central

    Wu, Rengmao; Hua, Hong

    2016-01-01

    Illumination design, used to redistribute the spatial energy distribution of a light source, is a key technique in lighting applications. However, there is still no effective illumination design method for extended sources, especially for extended non-Lambertian sources. What we present here is, to our knowledge, the first direct method for extended non-Lambertian sources in three-dimensional (3D) rotational geometry. In this method, both meridional rays and skew rays of the extended source are taken into account to tailor the lens profile in the meridional plane. A set of edge rays and interior rays emitted from the extended source which will take a given direction after refraction by the aspherical lens are found using Snell's law, and the output intensity in this direction is then calculated as the integral of the luminance function of the outgoing rays in this direction. This direct method is effective for both extended non-Lambertian and extended Lambertian sources in 3D rotational symmetry, and can directly find a solution to the prescribed design problem without cumbersome iterative illuminance compensation. Two examples are presented to demonstrate the effectiveness of the proposed method in terms of performance and capacity for tackling complex designs. PMID:26832484

  11. Evaluation of ground-water quality in the Santa Maria Valley, California

    USGS Publications Warehouse

    Hughes, Jerry L.

    1977-01-01

    The quality and quantity of recharge to the Santa Maria Valley, Calif., ground-water basin from natural sources, point sources, and agriculture are expressed in terms of a hydrologic budget, a solute balance, and maps showing the distribution of selected chemical constituents. Point sources include a sugar-beet refinery, oil refineries, stockyards, golf courses, poultry farms, solid-waste landfills, and municipal and industrial wastewater-treatment facilities. Pumpage has exceeded recharge by about 10,000 acre-feet per year. The result is a declining potentiometric surface with an accumulation of solutes and an increase in nitrogen in ground water. Nitrogen concentrations have reached as much as 50 milligrams per liter. In comparison to the solutes from irrigation return, natural recharge, and rain, discharge of wastewater from municipal and industrial wastewater-treatment facilities contributes less than 10 percent. The treated wastewater is often lower in selected chemical constituents than the receiving water. (Woodard-USGS)

  12. A Monte Carlo simulation study for the gamma-ray/neutron dual-particle imager using rotational modulation collimator (RMC).

    PubMed

    Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun

    2018-03-01

    The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications in radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method, based on analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
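
    The maximum-likelihood expectation-maximization (MLEM) update mentioned here has a standard multiplicative form. A generic sketch with a random stand-in system matrix (not the authors' RMC response model):

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_pix = 120, 64
A = rng.uniform(0.0, 1.0, (n_meas, n_pix))   # stand-in system (modulation) matrix
x_true = rng.uniform(0.0, 1.0, n_pix)
y = rng.poisson(A @ x_true * 50)             # simulated measured counts

x = np.ones(n_pix)                           # flat initial image
sens = A.sum(axis=0)                         # sensitivity (column sums)
for _ in range(200):                         # multiplicative MLEM updates
    ratio = y / np.clip(A @ x, 1e-12, None)
    x *= (A.T @ ratio) / sens
print(x[:5].round(3))
```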

  13. A Kinetic Study of Microwave Start-up of Tokamak Plasmas

    NASA Astrophysics Data System (ADS)

    du Toit, E. J.; O'Brien, M. R.; Vann, R. G. L.

    2017-07-01

    A kinetic model for studying the time evolution of the distribution function for microwave startup is presented. The model for the distribution function is two dimensional in momentum space, but, for simplicity and rapid calculations, has no spatial dependence. Experiments on the Mega Amp Spherical Tokamak have shown that the plasma current is carried mainly by electrons with energies greater than 70 keV, and effects thought to be important in these experiments are included, i.e. particle sources, orbital losses, the loop voltage and microwave heating, with suitable volume averaging where necessary to give terms independent of spatial dimensions. The model predicts current carried by electrons with the same energies as inferred from the experiments, though the current drive efficiency is smaller.

  14. Toward a legal framework that promotes and protects sex workers' health and human rights.

    PubMed

    Overs, Cheryl; Loff, Bebe

    2013-06-14

    Complex combinations of law, policy, and enforcement practices determine sex workers' vulnerability to HIV and rights abuses. We identify "lack of recognition as a person before the law" as an important but undocumented barrier to accessing services and conclude that multi-faceted, setting-specific reform is needed, rather than a singular focus on decriminalization, if the health and human rights of sex workers are to be realized.

  15. Estimating the mass variance in neutron multiplicity counting-A comparison of approaches

    NASA Astrophysics Data System (ADS)

    Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.

    2017-12-01

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
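
    Of the three approaches, the bootstrap is the most direct to sketch: resample the measurement cycles with replacement and recompute the mass each time. A schematic Python sketch in which the moments-to-mass mapping is a hypothetical placeholder for the point-model inversion:

```python
import numpy as np

def mass_estimate(singles, doubles, triples):
    """Hypothetical placeholder for the point-model inversion that maps the
    first three factorial moments to a 240Pu-effective mass."""
    return 0.1 * doubles.mean() - 0.01 * singles.mean() + 0.002 * triples.mean()

rng = np.random.default_rng(4)
n_cycles = 300
S = rng.normal(1000, 30, n_cycles)   # synthetic per-cycle singles rates
D = rng.normal(120, 10, n_cycles)    # synthetic per-cycle doubles rates
T = rng.normal(15, 3, n_cycles)      # synthetic per-cycle triples rates

# Bootstrap: resample cycles with replacement, recompute the mass each time.
boot = np.array([
    mass_estimate(S[idx], D[idx], T[idx])
    for idx in rng.integers(0, n_cycles, size=(2000, n_cycles))
])
print(f"mass = {boot.mean():.3f} +/- {boot.std(ddof=1):.3f} (bootstrap 1-sigma)")
```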

  16. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubi, C.; Croft, S.; Favalli, A.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  17. Development and Characterization of a Laser-Induced Acoustic Desorption Source.

    PubMed

    Huang, Zhipeng; Ossenbrüggen, Tim; Rubinsky, Igor; Schust, Matthias; Horke, Daniel A; Küpper, Jochen

    2018-03-20

    A laser-induced acoustic desorption source, developed for use at central facilities, such as free-electron lasers, is presented. It features prolonged measurement times and a fixed interaction point. A novel sample deposition method using aerosol spraying provides a uniform sample coverage and hence stable signal intensity. Utilizing strong-field ionization as a universal detection scheme, the produced molecular plume is characterized in terms of number density, spatial extent, fragmentation, temporal distribution, translational velocity, and translational temperature. The effect of desorption laser intensity on these plume properties is evaluated. While translational velocity is invariant for different desorption laser intensities, pointing to a nonthermal desorption mechanism, the translational temperature increases significantly and higher fragmentation is observed with increased desorption laser fluence.

  18. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    DOE PAGES

    Dubi, C.; Croft, S.; Favalli, A.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  19. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short-timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data, which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB's high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantics) and inside (RANGE-JOIN implementation), as well as in its strategy for building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture achieves both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.

  20. Charge state distributions of oxygen and carbon in the energy range 1 to 300 keV/e observed with AMPTE/CCE in the magnetosphere

    NASA Technical Reports Server (NTRS)

    Kremser, G.; Stuedemann, W.; Wilken, B.; Gloeckler, G.; Hamilton, D. C.

    1985-01-01

    Observations of charge state distributions of oxygen and carbon are presented that were obtained with the charge-energy-mass spectrometer onboard the AMPTE/CCE spacecraft. Data were selected for two different local time sectors (apogee at 1300 LT and 0300 LT, respectively), three L-ranges (4-6, 6-8, and greater than 8), and quiet to moderately disturbed days (Kp less than or equal to 4). The charge state distributions reveal the existence of all charge states of oxygen and carbon in the magnetosphere. The relative importance of the different charge states strongly depends on L and much less on local time. The observations confirm that the solar wind and the ionosphere contribute to the oxygen population, whereas carbon only originates from the solar wind. The L-dependence of the charge state distributions can be interpreted in terms of these different ion sources and of charge exchange and diffusion processes that largely influence the distribution of oxygen and carbon in the magnetosphere.

  1. Neutron coincidence counting based on time interval analysis with one- and two-dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    NASA Astrophysics Data System (ADS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-02-01

    Neutron coincidence counting is commonly used for the non-destructive assay of plutonium bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is related to the fact that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass (240Pu-eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste, however, may have quite different matrices and source distributions compared to the calibration samples. This often results in a bias of the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique including an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are aimed at being measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu-eq mass. The presented theory—which will be indicated as Time Interval Analysis (TIA)—is complementary to Time Correlation Analysis (TCA) theories which were developed in the past, but is from the theoretical point of view much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
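
    A one-dimensional Rossi-alpha distribution is simply a histogram of the time differences between each trigger pulse and the pulses that follow it within a fixed window. A minimal sketch for a synthetic, purely random pulse train (a multiplying sample would add a decaying exponential component on top of the flat accidentals):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0.0, 10.0, 20_000))   # synthetic pulse times [s]

window, nbins = 512e-6, 64                    # 512 microsecond analysis window
edges = np.linspace(0.0, window, nbins + 1)
hist = np.zeros(nbins)

# For each trigger pulse, histogram the delays to all pulses in the window.
for i, trig in enumerate(t):
    j = np.searchsorted(t, trig + window, side="right")
    hist += np.histogram(t[i + 1:j] - trig, bins=edges)[0]

# A purely random train gives a flat histogram (accidentals only); fission
# chains in a multiplying sample add a decaying exponential component.
print(hist[:8])
```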

  2. Comparison of debris flux models

    NASA Astrophysics Data System (ADS)

    Sdunnus, H.; Beltrami, P.; Klinkrad, H.; Matney, M.; Nazarenko, A.; Wegener, P.

    The availability of models to estimate the impact risk from the man-made space debris and the natural meteoroid environment is essential for both manned and unmanned satellite missions. Various independent tools based on different approaches have been developed in the past years. Due to increased knowledge of the debris environment and its sources, e.g. from improved measurement capabilities, these models could be updated regularly, providing more detailed and more reliable simulations. This paper addresses an in-depth, quantitative comparison of widely distributed debris flux models which were recently updated, namely ESA's MASTER 2001 model, NASA's ORDEM 2000 and the Russian SDPA 2000 model. The comparison was performed in the frame of the work of the 20th Interagency Debris Coordination (IADC) meeting held in Surrey, UK.

    ORDEM 2000: ORDEM 2000 uses careful empirical estimates of the orbit populations based on three primary data sources - the US Space Command Catalog, the Haystack Radar, and the Long Duration Exposure Facility spacecraft returned surfaces. Further data (e.g. HAX and Goldstone radars, impacts on Shuttle windows and radiators, and others) were used to adjust these populations for regions in time, size, and space not covered by the primary data sets. Some interpolation and extrapolation to regions with no data (such as projections into the future) was provided by the EVOLVE model.

    MASTER 2001: The ESA MASTER model offers a full three-dimensional description of the terrestrial debris distribution reaching from LEO up to the GEO region. Flux results relative to an orbiting target or to an inertial volume can be resolved into source terms, impactor characteristics and orbit, as well as impact velocity and direction. All relevant debris source terms are considered by the MASTER model. For each simulated source, a corresponding debris generation model in terms of mass/diameter distribution, additional velocities, and directional spreading has been developed. A comprehensive perturbation model was used to propagate all objects to a reference epoch.

    SDPA 2000: The Russian Space Debris Prediction and Analysis (SDPA) model is a semi-analytical stochastic tool for medium- and long-term forecasts of the man-made debris environment (for sizes larger than 1 mm), for construction of spatial density and velocity distributions in LEO and GEO, and for risk evaluation. The latest version, SDPA 2000, consists of ten individual modules related to the aforementioned tasks. The total characteristics of space debris of different sizes are considered (without partitioning these characteristics into specific sources). The current space debris environment is characterised (a) by the dependence of spatial density on the altitude and latitude of a point, as well as on the size of objects, and (b) by a statistical distribution of the magnitude and direction of space object velocities in an inertial geocentric coordinate system. These characteristics are constructed on the basis of the combined application of the accessible measurement information and a series of a priori data.

    The comparison is performed by applying the models to a large number of target orbits specified by a grid in terms of impactor size (6 grid points), target orbit perigee altitude (16 grid points), and target orbit inclination (15 grid points). These results provide a characteristic diagram of integral fluxes for all models, which will be compared. Further to this, the models are applied to orbits of particular interest, namely the ISS orbit and a sun-synchronous orbit. For these cases, the comparison includes flux directionality and velocity.

    References: 1. Liou, J.-C., M. J. Matney, P. D. Anz-Meador, D. Kessler, M. Jansen, and J. R. Theall, 2001, "The New NASA Orbital Debris Engineering Model ORDEM2000", NASA/TP-2002-210780. 2. P. Wegener, J. Bendisch, K. D. Bunte, H. Sdunnus, "Upgrade of the ESA MASTER Model", Final Report of ESOC/TOS-GMA contract 12318/97/D/IM, May 2000. 3. A. I. Nazarenko, I. L. Menchikov, "Engineering Model of Space Debris Environment", Third European Conference on Space Debris, Darmstadt, Germany, March 2001.

  3. Altered newborn gender distribution in patients with low mid-trimester maternal serum human chorionic gonadotropin (MShCG).

    PubMed

    Santolaya-Forgas, J; Meyer, W J; Burton, B K; Scommegna, A

    1997-01-01

    To determine if the sex ratio (male/female) is altered in infants born to patients with low mid-trimester maternal serum human chorionic gonadotropin (MShCG). Between 2/1/90 and 1/3/91, 3,116 patients underwent prenatal screening using second-trimester maternal serum alpha-fetoprotein (MSAFP), MShCG, and maternal serum unconjugated estriol (MSuE3). Among these, there were 132 patients with low second-trimester MShCG (< 0.4 MoM) and normal MSAFP and MSuE3. The gender distribution of these term, normal newborns was compared to that of 237 controls, matched for race, maternal age, and referral source and delivered at term to mothers with normal mid-trimester MSAFP, MSuE3, and MShCG. The gender distribution of these two groups of newborns was also compared to that of 78 term newborns from the same obstetrical population delivered to mothers with second-trimester MShCG > 2.5 MoM and normal MSAFP and MSuE3. All patients had a complete obstetrical history. Forty-nine percent of the controls were male vs. 62% of the group with low second-trimester MShCG (P < .01). Within the group with low MShCG, 59% of infants were male when the MShCG was between 0.19 and 0.4 MoM (A) and 80% when the MShCG was < 0.2 MoM (B) (control vs. A vs. B, P < .005). The sex ratio in the high-MShCG group was similar to that of controls. The data suggest that gender distribution differs from normal in patients with low mid-trimester MShCG.

  4. Ecosystem variability in the offshore northeastern Chukchi Sea

    NASA Astrophysics Data System (ADS)

    Blanchard, Arny L.; Day, Robert H.; Gall, Adrian E.; Aerts, Lisanne A. M.; Delarue, Julien; Dobbins, Elizabeth L.; Hopcroft, Russell R.; Questel, Jennifer M.; Weingartner, Thomas J.; Wisdom, Sheyna S.

    2017-12-01

    Understanding influences of cumulative effects from multiple stressors in marine ecosystems requires an understanding of the sources for and scales of variability. A multidisciplinary ecosystem study in the offshore northeastern Chukchi Sea during 2008-2013 investigated the variability of the study area's two adjacent sub-ecosystems: a pelagic system influenced by interannual and/or seasonal temporal variation at large, oceanographic (regional) scales, and a benthic-associated system more influenced by small-scale spatial variations. Variability in zooplankton communities reflected interannual oceanographic differences in waters advected northward from the Bering Sea, whereas variation in benthic communities was associated with seafloor and bottom-water characteristics. Variations in the planktivorous seabird community were correlated with prey distributions, whereas interaction effects in ANOVA for walruses were related to declines of sea-ice. Long-term shifts in seabird distributions were also related to changes in sea-ice distributions that led to more open water. Although characteristics of the lower trophic-level animals within sub-ecosystems result from oceanographic variations and interactions with seafloor topography, distributions of apex predators were related to sea-ice as a feeding platform (walruses) or to its absence (i.e., open water) for feeding (seabirds). The stability of prey resources appears to be a key factor in mediating predator interactions with other ocean characteristics. Seabirds reliant on highly-variable zooplankton prey show long-term changes as open water increases, whereas walruses taking benthic prey in biomass hotspots respond to sea-ice changes in the short-term. A better understanding of how variability scales up from prey to predators and how prey resource stability (including how critical prey respond to environmental changes over space and time) might be altered by climate and anthropogenic stressors is essential to predicting the future state of both the Chukchi and other arctic systems.

  5. Long-term consistency in spatial patterns of primate seed dispersal.

    PubMed

    Heymann, Eckhard W; Culot, Laurence; Knogge, Christoph; Noriega Piña, Tony Enrique; Tirado Herrera, Emérita R; Klapproth, Matthias; Zinner, Dietmar

    2017-03-01

    Seed dispersal is a key ecological process in tropical forests, with effects on various levels ranging from plant reproductive success to the carbon storage potential of tropical rainforests. On a local and landscape scale, spatial patterns of seed dispersal create the template for the recruitment process and thus influence the population dynamics of plant species. The strength of this influence will depend on the long-term consistency of spatial patterns of seed dispersal. We examined the long-term consistency of spatial patterns of seed dispersal with spatially explicit data on seed dispersal by two neotropical primate species, Leontocebus nigrifrons and Saguinus mystax (Callitrichidae), collected during four independent studies between 1994 and 2013. Using distributions of dispersal probability over distances independent of plant species, cumulative dispersal distances, and kernel density estimates, we show that spatial patterns of seed dispersal are highly consistent over time. For a specific plant species, the legume Parkia panurensis, the convergence of cumulative distributions at a distance of 300 m and the high probability of dispersal within 100 m of source trees coincide with the dimensions of the spatial-genetic structure at the embryo/juvenile (300 m) and adult (100 m) stages of this plant species, respectively. Our results are the first demonstration of long-term consistency in spatial patterns of seed dispersal created by tropical frugivores. Such consistency may translate into idiosyncratic patterns of regeneration.

  6. Links between fear of humans, stress and survival support a non-random distribution of birds among urban and rural habitats

    PubMed Central

    Rebolo-Ifrán, Natalia; Carrete, Martina; Sanz-Aguilar, Ana; Rodríguez-Martínez, Sol; Cabezas, Sonia; Marchant, Tracy A.; Bortolotti, Gary R.; Tella, José L.

    2015-01-01

    Urban endocrine ecology aims to understand how organisms cope with new sources of stress and maintain allostatic load to thrive in an increasingly urbanized world. Recent research efforts have yielded controversial results based on short-term measures of stress, without exploring its fitness effects. We measured feather corticosterone (CORTf, reflecting the duration and amplitude of glucocorticoid secretion over several weeks) and subsequent annual survival in urban and rural burrowing owls. This species shows high individual consistency in fear of humans (i.e., flight initiation distance, FID), allowing us to hypothesize that individuals distribute among habitats according to their tolerance of human disturbance. FIDs were shorter in urban than in rural birds, but CORTf levels did not differ, nor were they correlated with FIDs. Survival was twice as high in urban as in rural birds, and links with CORTf varied between habitats: while a quadratic relationship supports stabilizing selection in urban birds, high predation rates may have masked the CORTf-survival relationship in rural ones. These results indicate that urban life does not constitute an additional source of stress for urban individuals, as shown by their near-identical CORTf values compared with rural conspecifics, supporting the non-random distribution of individuals among habitats according to their behavioural phenotypes. PMID:26348294

  7. Links between fear of humans, stress and survival support a non-random distribution of birds among urban and rural habitats.

    PubMed

    Rebolo-Ifrán, Natalia; Carrete, Martina; Sanz-Aguilar, Ana; Rodríguez-Martínez, Sol; Cabezas, Sonia; Marchant, Tracy A; Bortolotti, Gary R; Tella, José L

    2015-09-08

    Urban endocrine ecology aims to understand how organisms cope with new sources of stress and maintain allostatic load to thrive in an increasingly urbanized world. Recent research efforts have yielded controversial results based on short-term measures of stress, without exploring its fitness effects. We measured feather corticosterone (CORTf, reflecting the duration and amplitude of glucocorticoid secretion over several weeks) and subsequent annual survival in urban and rural burrowing owls. This species shows high individual consistency in fear of humans (i.e., flight initiation distance, FID), allowing us to hypothesize that individuals distribute among habitats according to their tolerance of human disturbance. FIDs were shorter in urban than in rural birds, but CORTf levels did not differ, nor were they correlated with FIDs. Survival was twice as high in urban as in rural birds, and links with CORTf varied between habitats: while a quadratic relationship supports stabilizing selection in urban birds, high predation rates may have masked the CORTf-survival relationship in rural ones. These results indicate that urban life does not constitute an additional source of stress for urban individuals, as shown by their near-identical CORTf values compared with rural conspecifics, supporting the non-random distribution of individuals among habitats according to their behavioural phenotypes.

  8. Insights into metals in individual fine particles from municipal solid waste using synchrotron radiation-based micro-analytical techniques.

    PubMed

    Zhu, Yumin; Zhang, Hua; Shao, Liming; He, Pinjing

    2015-01-01

    Excessive inter-contamination with heavy metals hampers the application of biological treatment products derived from mixed or mechanically-sorted municipal solid waste (MSW). In this study, we investigated fine particles of <2 mm, which are small fractions in MSW but constitute a significant component of the total heavy metal content, using bulk detection techniques. A total of 17 individual fine particles were evaluated using synchrotron radiation-based micro-X-ray fluorescence and micro-X-ray diffraction. We also discuss the association, speciation and source apportionment of heavy metals. Metals were found to exist in a diffuse distribution with heterogeneous intensities and intense hot-spots of <10 μm within the fine particles. Zn-Cu, Pb-Fe and Fe-Mn-Cr had significant correlations in terms of spatial distribution. The overlapping enrichment, spatial association, and mineral phases of the metals revealed the potential origins of the fine particles in size-reduced waste fractions (such as scraps of organic wastes or ceramics) or in the importation of other particles. The diverse sources of heavy metal pollutants within the fine particles suggest that separate collection and treatment of the biodegradable waste fraction (such as food waste) is a preferable means of facilitating the beneficial utilization of the stabilized products.

  9. Analysis of heavy metal sources in soil using kriging interpolation on principal components.

    PubMed

    Ha, Hoehun; Olson, James R; Bian, Ling; Rogerson, Peter A

    2014-05-06

    Anniston, Alabama, has a long history of foundry operations and other heavy industry. We assessed the extent of heavy metal contamination in soils by determining the concentrations of 11 heavy metals (Pb, As, Cd, Cr, Co, Cu, Mn, Hg, Ni, V, and Zn) in 2046 soil samples collected from 595 industrial and residential sites. Principal Component Analysis (PCA) was used to characterize the distribution of heavy metals in soil in this region. In addition, a geostatistical technique (kriging) was applied to create regional distribution maps interpolating nonpoint sources of heavy metal contamination, using geographical information system (GIS) techniques. Significant differences in heavy metal concentrations were found between sampling zones, with the exception of Ni. Three main components explaining the heavy metal variability in soils were identified. The results suggest that Pb, Cd, Cu, and Zn were associated with anthropogenic activities, such as the operations of some foundries and major railroads, which released these heavy metals, whereas the presence of Co, Mn, and V was controlled by natural factors, such as soil texture, pedogenesis, and soil hydrology. In general terms, the soil levels of heavy metals analyzed in this study were higher than those reported in previous studies of other industrial and residential communities.
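
    The workflow named in the title, PCA followed by kriging of the component scores, can be sketched compactly. The following Python fragment is an illustration only, built on synthetic coordinates and concentrations with the scikit-learn and pykrige libraries; it is not the authors' code, and all variable names and parameter choices are assumptions.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from pykrige.ok import OrdinaryKriging

        # Hypothetical inputs: one row per soil sample, one column per metal.
        rng = np.random.default_rng(0)
        n = 200
        coords = rng.uniform(0, 10, size=(n, 2))                   # sample locations (km)
        metals = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 11))  # 11 metal concentrations

        # Step 1: PCA on standardized concentrations to extract common components.
        scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(metals))

        # Step 2: ordinary kriging of a principal-component score onto a grid.
        gridx = np.arange(0.0, 10.0, 0.5)
        gridy = np.arange(0.0, 10.0, 0.5)
        ok = OrdinaryKriging(coords[:, 0], coords[:, 1], scores[:, 0],
                             variogram_model="spherical")
        pc1_map, variance = ok.execute("grid", gridx, gridy)       # interpolated PC1 surface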

  10. Powder Bed Layer Characteristics: The Overseen First-Order Process Input

    NASA Astrophysics Data System (ADS)

    Mindt, H. W.; Megahed, M.; Lavery, N. P.; Holmes, M. A.; Brown, S. G. R.

    2016-08-01

    Powder bed additive manufacturing offers unique advantages in terms of manufacturing cost, lot size, and product complexity compared to traditional processes such as casting, where a minimum lot size is mandatory to achieve economic competitiveness. Many studies, both experimental and numerical, are dedicated to the analysis of how process parameters such as heat source power, scan speed, and scan strategy affect the final material properties. Apart from the general urge to increase the build rate using thicker powder layers, the coating process and how the powder is distributed on the processing table have received very little attention to date. This paper focuses on the first step of every powder bed build process: coating the processing table. A numerical study is performed to investigate how powder is transferred from the source to the processing table. A solid coating blade is modeled to spread commercial Ti-6Al-4V powder. The resulting powder layer is analyzed statistically to determine the packing density and its variation across the processing table. The results are compared with literature reports based on the so-called "rain" models. A parameter study is performed to identify the influence of process table displacement and wiper velocity on the powder distribution. The achieved packing density, and how it affects subsequent heat source interaction with the powder bed, is also investigated numerically.
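
    For readers unfamiliar with the "rain" models referenced above, the following Python toy illustrates the idea in its simplest 1+1-dimensional ballistic deposition form. This is a generic sketch, not the deposition model used in the paper, and all parameter values are illustrative.

        import numpy as np

        # Particles fall straight down in a random column and stick at the first
        # vertical or lateral contact (standard ballistic deposition rule), which
        # leaves voids and yields a packing density below 1.
        rng = np.random.default_rng(1)
        width, n_particles = 200, 20000
        heights = np.zeros(width, dtype=int)          # current column heights
        for _ in range(n_particles):
            i = rng.integers(width)
            left = heights[i - 1] if i > 0 else 0
            right = heights[i + 1] if i < width - 1 else 0
            heights[i] = max(left, heights[i] + 1, right)

        # Deposited volume divided by the volume of the grown layer:
        density = n_particles / (width * heights.mean())
        print(f"mean layer height: {heights.mean():.1f}, packing density: {density:.2f}")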

  11. Assessing the long-term variability of acetylene and ethane in the stratosphere of Jupiter

    NASA Astrophysics Data System (ADS)

    Melin, Henrik; Fletcher, L. N.; Donnelly, P. T.; Greathouse, T. K.; Lacy, J. H.; Orton, G. S.; Giles, R. S.; Sinclair, J. A.; Irwin, P. G. J.

    2018-05-01

    Acetylene (C2H2) and ethane (C2H6) are both produced in the stratosphere of Jupiter via photolysis of methane (CH4). Despite this common source, the latitudinal distributions of the two species are radically different, with acetylene decreasing in abundance towards the pole and ethane increasing towards the pole. We present six years of NASA IRTF TEXES mid-infrared observations of the zonally-averaged emission of methane, acetylene and ethane. We confirm that the latitudinal distributions of ethane and acetylene are decoupled, and that this is a persistent feature over multiple years. The acetylene distribution falls off towards the pole, peaking at ∼ 30°N with a volume mixing ratio (VMR) of ∼ 0.8 parts per million (ppm) at 1 mbar and still falling off at ± 70° with a VMR of ∼ 0.3 ppm. The acetylene distributions are asymmetric on average, but from 2013 to 2017 the zonally-averaged abundance becomes more symmetric about the equator. We suggest that both the short-term changes in acetylene and its latitudinal asymmetry are driven by changes to vertical stratospheric mixing, potentially related to propagating wave phenomena. Unlike acetylene, ethane has a symmetric distribution about the equator that increases toward the pole, with a peak mole fraction of ∼ 18 ppm at about ± 50° latitude and a minimum at the equator of ∼ 10 ppm at 1 mbar. The ethane distribution does not appear to respond to mid-latitude stratospheric mixing in the same way as acetylene, potentially because the vertical gradient of ethane is much shallower than that of acetylene. The equator-to-pole distributions of acetylene and ethane are consistent with acetylene having a shorter lifetime than ethane, making it insensitive to the longer advective timescales but responsive to short-term dynamics, such as vertical mixing. Conversely, the long lifetime of ethane allows it to be transported to higher latitudes faster than it can be chemically depleted.

  12. What do popular Spanish women's magazines say about caesarean section? A 21-year survey.

    PubMed

    Torloni, M R; Campos Mansilla, B; Merialdi, M; Betrán, A P

    2014-04-01

    Objectives: Caesarean section (CS) rates are increasing worldwide and maternal request is cited as one of the main reasons for this trend. Women's preferences for route of delivery are influenced by popular media, including magazines. We assessed the information on CS presented in Spanish women's magazines. Design: Systematic review. Setting: Women's magazines printed from 1989 to 2009 with the largest national distribution. Sample: Articles with any information on CS. Methods: Articles were selected, read and abstracted in duplicate. Sources of information, scientific accuracy, comprehensiveness and women's testimonials were objectively extracted using a content analysis form designed for this study. Main outcome measures: Accuracy, comprehensiveness and sources of information. Results: Most (67%) of the 1223 selected articles presented exclusively personal opinion/birth stories, 12% reported the potential benefits of CS, 26% mentioned the short-term and 10% the long-term maternal risks, and 6% highlighted the perinatal risks of CS. The most frequent short-term risks were increased time for maternal recovery (n = 86), frustration/feelings of failure (n = 83) and increased post-surgical pain (n = 71). The most frequently cited long-term risks were uterine rupture (n = 57) and the need for another CS in any subsequent pregnancy (n = 42). Less than 5% of the selected articles reported that CS could increase the risks of infection (n = 53), haemorrhage (n = 31) or placenta praevia/accreta in future pregnancies (n = 6). The sources of information were not reported by 68% of the articles. Conclusions: The portrayal of CS in Spanish women's magazines is not sufficiently comprehensive and does not provide adequate information to help readers understand the real benefits and risks of this route of delivery. © 2014 The Authors. BJOG An International Journal of Obstetrics and Gynaecology published by John Wiley & Sons Ltd on behalf of the Royal College of Obstetricians and Gynaecologists.

  13. What do popular Spanish women's magazines say about caesarean section? A 21-year survey

    PubMed Central

    Torloni, MR; Campos Mansilla, B; Merialdi, M; Betrán, AP

    2014-01-01

    Objectives: Caesarean section (CS) rates are increasing worldwide and maternal request is cited as one of the main reasons for this trend. Women's preferences for route of delivery are influenced by popular media, including magazines. We assessed the information on CS presented in Spanish women's magazines. Design: Systematic review. Setting: Women's magazines printed from 1989 to 2009 with the largest national distribution. Sample: Articles with any information on CS. Methods: Articles were selected, read and abstracted in duplicate. Sources of information, scientific accuracy, comprehensiveness and women's testimonials were objectively extracted using a content analysis form designed for this study. Main outcome measures: Accuracy, comprehensiveness and sources of information. Results: Most (67%) of the 1223 selected articles presented exclusively personal opinion/birth stories, 12% reported the potential benefits of CS, 26% mentioned the short-term and 10% the long-term maternal risks, and 6% highlighted the perinatal risks of CS. The most frequent short-term risks were increased time for maternal recovery (n = 86), frustration/feelings of failure (n = 83) and increased post-surgical pain (n = 71). The most frequently cited long-term risks were uterine rupture (n = 57) and the need for another CS in any subsequent pregnancy (n = 42). Less than 5% of the selected articles reported that CS could increase the risks of infection (n = 53), haemorrhage (n = 31) or placenta praevia/accreta in future pregnancies (n = 6). The sources of information were not reported by 68% of the articles. Conclusions: The portrayal of CS in Spanish women's magazines is not sufficiently comprehensive and does not provide adequate information to help readers understand the real benefits and risks of this route of delivery. PMID:24467797

  14. The Planck Catalogue of Galactic Cold Clumps: Looking at the early stages of star formation

    NASA Astrophysics Data System (ADS)

    Montier, Ludovic

    2015-08-01

    The Planck satellite has provided an unprecedented view of the submm sky, allowing us to search for the dust emission of Galactic cold sources. Combining Planck-HFI all-sky maps in the high frequency channels with the IRAS map at 100 um, we built the Planck Catalogue of Galactic Cold Clumps (PGCC, Planck 2015 results XXVIII 2015), comprising 13188 sources distributed over the whole sky and following mainly the Galactic structures at low and intermediate latitudes. This is the first all-sky catalogue of Galactic cold sources obtained with a single instrument at this resolution and sensitivity, which opens a new window on star-formation processes in our Galaxy. I will briefly describe the colour detection method used to extract the Galactic cold sources, i.e., the Cold Core Colour Detection Tool (CoCoCoDeT, Montier et al. 2010), and its application to the Planck data. I will discuss the statistical distribution of the properties of the PGCC sources (in terms of dust temperature, distance, mass, density and luminosity), which illustrates that the PGCC catalogue spans a large variety of environments and objects, from molecular clouds to cold cores, and covers various stages of evolution. The Planck catalogue is a very powerful tool to study the formation and evolution of prestellar objects and star-forming regions. I will finally present an overview of the Herschel Key Programme Galactic Cold Cores (PI: M. Juvela), which allowed us to follow up about 350 Planck Galactic Cold Clumps in various stages of evolution and environments. With this programme, the nature and composition of the 5' Planck sources have been revealed at sub-arcmin resolution, showing very different configurations, such as starless cold cores or multiple Young Stellar Objects still embedded in their cold envelope.

  15. Polycyclic aromatic hydrocarbons in the urban atmosphere of Nepal: Distribution, sources, seasonal trends, and cancer risk.

    PubMed

    Pokhrel, Balram; Gong, Ping; Wang, Xiaoping; Wang, Chuanfei; Gao, Shaoping

    2018-03-15

    Atmospheric polycyclic aromatic hydrocarbons (PAHs) in urban areas have long been a global concern, as these areas are considered to be source regions. Despite studies on the concentrations of PAHs in water, soils and sediments, knowledge of the distribution patterns, seasonality and sources of PAHs in urban areas of Nepal remains limited. In this study, polyurethane foam passive air samplers were used to measure gas-phase PAH concentrations over different land types in three major cities of Nepal, namely Kathmandu (the capital) and Pokhara, both densely populated cities, and Hetauda (an agricultural city). The average concentrations of ∑15PAHs in ng/m3 were 16.1±7.0 (6.4-28.6), 14.1±6.2 (6.8-29.4) and 11.1±9.0 (4.1-38.0) in Kathmandu, Pokhara and Hetauda, respectively. Molecular diagnostic ratio analysis suggested that fossil fuel combustion was a common PAH source for all three cities. In addition, coal combustion in Kathmandu, vehicle emissions in Pokhara, and grass/wood combustion in Hetauda were also possible sources of PAHs. In terms of cancer risk from PAH inhalation, a religious site with intense incense burning, a brick production area where extensive coal combustion is common, and a market place with heavy traffic emissions were associated with a higher risk than other areas. There were no clear seasonal trends in atmospheric PAHs. The estimated cancer risk due to inhalation of gas-phase PAHs exceeded the USEPA standard at >90% of the sites. Copyright © 2017 Elsevier B.V. All rights reserved.
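
    Molecular diagnostic ratio analysis of the kind used here assigns likely sources from ratios of PAH isomer pairs. The snippet below sketches one widely used ratio, Flt/(Flt + Pyr), with the commonly cited Yunker et al. (2002) thresholds; the paper's exact ratio set and cut-offs may differ, so treat the labels as illustrative.

        # Thresholds follow the commonly cited Yunker et al. (2002) scheme for
        # Flt/(Flt + Pyr); the paper's exact ratio set and cut-offs may differ.
        def pah_source_hint(fluoranthene: float, pyrene: float) -> str:
            r = fluoranthene / (fluoranthene + pyrene)
            if r < 0.4:
                return "petrogenic (unburned petroleum)"
            if r < 0.5:
                return "petroleum combustion (e.g. vehicle emissions)"
            return "grass/wood/coal combustion"

        print(pah_source_hint(fluoranthene=2.1, pyrene=1.6))  # combustion-dominated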

  16. Multigrid Method for Modeling Multi-Dimensional Combustion with Detailed Chemistry

    NASA Technical Reports Server (NTRS)

    Zheng, Xiaoqing; Liu, Chaoqun; Liao, Changming; Liu, Zhining; McCormick, Steve

    1996-01-01

    A highly accurate and efficient numerical method is developed for modeling 3-D reacting flows with detailed chemistry. A contravariant velocity-based governing system is developed for general curvilinear coordinates to maintain simplicity of the continuity equation and compactness of the discretization stencil. A fully-implicit backward Euler technique and a third-order monotone upwind-biased scheme on a staggered grid are used for the respective temporal and spatial terms. An efficient semi-coarsening multigrid method based on line-distributive relaxation is used as the flow solver. The species equations are solved in a fully coupled way and the chemical reaction source terms are treated implicitly. Example results are shown for a 3-D gas turbine combustor with strong swirling inflows.
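
    The implicit treatment of stiff reaction source terms can be illustrated on a scalar model problem: backward Euler applied to dy/dt = S(y) requires solving a nonlinear equation at each time step, typically by Newton iteration. The Python sketch below uses an illustrative stiff linear source term, not the detailed chemistry or the coupled species system of the paper.

        # Backward Euler for dy/dt = S(y) with an illustrative stiff linear source
        # term; each step solves F(y_new) = y_new - y - dt*S(y_new) = 0 by Newton.
        def S(y):
            return -1.0e4 * y      # stiff source term (1e-4 s time scale)

        def dSdy(y):
            return -1.0e4

        def backward_euler(y0, dt, n_steps):
            y = y0
            for _ in range(n_steps):
                y_new = y                          # initial Newton guess
                for _ in range(20):
                    F = y_new - y - dt * S(y_new)
                    dF = 1.0 - dt * dSdy(y_new)
                    step = F / dF
                    y_new -= step
                    if abs(step) < 1e-12:
                        break
                y = y_new
            return y

        # Remains stable with dt far larger than the chemical time scale:
        print(backward_euler(y0=1.0, dt=0.1, n_steps=10))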

  17. The impact of circulation control on rotary aircraft control systems

    NASA Technical Reports Server (NTRS)

    Kingloff, R. F.; Cooper, D. E.

    1987-01-01

    Application of circulation control to rotary wing systems is a new development. Efforts to determine the near and far field flow patterns and to predict those flow patterns analytically have been underway for some years. Rotary wing applications present a new set of challenges in circulation control technology. Rotary wing sections must accommodate substantial variation in Mach number, free stream dynamic pressure and section angle of attack at each flight condition within the design envelope. They must also be capable of short-term modulation of circulation blowing to produce control moments and vibration alleviation, in addition to a lift augmentation function. The control system design must provide these primary control moment, vibration alleviation and lift augmentation functions. To accomplish this, one must simultaneously control the compressed air source and its distribution. The control law algorithm must therefore address the compressor as the air source, the plenum as the air pressure storage, and the pneumatic flow gates or valves that distribute and meter the stored pressure to the rotating blades. Mechanical collective blade pitch, rotor shaft angle of attack and engine power control must also be maintained.

  18. Case study of open-source enterprise resource planning implementation in a small business

    NASA Astrophysics Data System (ADS)

    Olson, David L.; Staley, Jesse

    2012-02-01

    Enterprise resource planning (ERP) systems have been recognised as offering great benefit to some organisations, although they are expensive and problematic to implement. The cost and risk make well-developed proprietary systems unaffordable to small businesses. Open-source software (OSS) has become a viable means of producing ERP system products. The question this paper addresses is the feasibility of OSS ERP systems for small businesses. A case is reported involving two efforts to implement freely distributed ERP software products in a small US make-to-order engineering firm. The case emphasises the potential of freely distributed ERP systems, as well as some of the hurdles involved in their implementation. The paper briefly reviews highlights of OSS ERP systems, with the primary focus on reporting the case experiences of efforts to implement ERPLite software and xTuple software. While both systems worked from a technical perspective, both failed due to economic factors. Although these economic conditions led to imperfect results, the case demonstrates the feasibility of OSS ERP for small businesses. Both experiences are evaluated in terms of risk dimensions.

  19. Technology, conflict early warning systems, public health, and human rights.

    PubMed

    Pham, Phuong N; Vinck, Patrick

    2012-12-15

    Public health and conflict early warning are evolving rapidly in response to technology changes for the gathering, management, analysis and communication of data. It is expected that these changes will provide an unprecedented ability to monitor, detect, and respond to crises. One of the most profound and lasting expected changes affects the roles of the various actors in providing and sharing information and in responding to early warning. Communities and civil society actors have the opportunity to be empowered as sources of information, analysis, and response, while the role of traditional actors shifts toward supporting those communities and building resilience. However, by creating new roles, relationships, and responsibilities, technology changes raise major concerns and ethical challenges for practitioners, pressing the need for practical guidelines and actionable recommendations in line with existing ethical principles. Copyright © 2012 Pham and Vinck. This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

  20. Sources and distribution of aromatic hydrocarbons in a tropical marine protected area estuary under influence of sugarcane cultivation.

    PubMed

    Arruda-Santos, Roxanny Helen de; Schettini, Carlos Augusto França; Yogui, Gilvan Takeshi; Maciel, Daniele Claudino; Zanardi-Lamardo, Eliete

    2018-05-15

    Goiana estuary is a well-preserved marine protected area (MPA) located on the northeastern coast of Brazil. Despite its current state of preservation, human activities in the watershed represent a potential threat to its long-term conservation. Dissolved/dispersed aromatic hydrocarbons and polycyclic aromatic hydrocarbons (PAHs) were investigated in water and sediments across the estuarine salt gradient. Concentrations of aromatic hydrocarbons were low in all samples. According to the results, aromatic hydrocarbons are associated with suspended particulate matter (SPM) carried to the estuary by river waters. An estuarine turbidity maximum (ETM) was identified in the upper estuary, indicating that both sediments and contaminants are trapped prior to occasional export to the adjacent sea. The distribution of PAHs in sediments was associated with organic matter and mud content. Diagnostic ratios indicated pyrolytic processes as the main local source of PAHs, which are probably associated with sugarcane burning and combustion engines. The low PAH concentrations probably do not cause adverse biological effects to the local biota, although their presence indicates anthropogenic contamination and pressure on the Goiana estuary MPA. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Micro-scale variability of particulate matter and the influence of urban fabric on the aerosol distribution in two mid-sized German cities

    NASA Astrophysics Data System (ADS)

    Paas, Bastian; Schneider, Christoph

    2016-04-01

    Spatial micro-scale variability of particle mass concentrations is an important criterion for urban air quality assessment. The major proportion of the world's population lives in cities, where exceedances of air quality standards occur regularly. Current research suggests that both long-term and even short-term stays, e.g. during commuting or relaxing, at locations with high PM concentrations could have significant impacts on health. In this study we present results from model calculations in comparison to measurements of high spatial and temporal resolution. Airborne particles were sampled using an optical particle counter in two inner-city park areas in Aachen and Münster, both mid-sized German cities that are, however, characterized by a different topology. The measurement locations represent spots with different degrees of outdoor particle exposure that can be experienced by a pedestrian walking in an intra-urban recreational area. Simulations of aerosol distributions induced by road traffic were conducted using both the German reference dispersion model Austal2000 and the numerical microclimate model ENVI-met. Simulation results reveal details of the distribution of urban particles, with the highest concentrations of PM10 in the direct vicinity of traffic lines. The corresponding concentrations decline rapidly as the distance to the line sources increases. Still, urban fabric and obstacles like shrubs or trees proved to have a major impact on the aerosol distribution in the area. Furthermore, the distribution of particles was highly dependent on wind direction and turbulence characteristics. The analysis of observational data leads to the hypothesis that, besides motor traffic, numerous diffuse particle sources, e.g. the resuspension of particles from surfaces, which were dominantly apparent in the measured PM(1;10) and PM(0.25;10) data, are present in the urban roughness layer. The results highlight that a conclusive picture of micro-scale patterns of PM helps to understand the effects of urban fabric and obstacles of both natural and artificial origin (e.g. street furniture, vegetation elements and buildings) on local patterns of aerosol distribution. Simulation results with Austal2000 and ENVI-met indicate that modelling has the potential to support urban planners in designing urban infrastructure and open spaces with reduced local particle concentrations. This approach is particularly relevant for inner-city recreational areas.

  2. Investigating competing uses of unevenly distributed resources in Nicaragua applying the Climate, Land Use (Food), Energy and Water strategies framework

    NASA Astrophysics Data System (ADS)

    Ramos, Eunice; Sridharan, Vignesh; Howells, Mark

    2017-04-01

    The distribution of resources in Nicaragua is not even, as is the case in many countries in the world. In the particular case of water resources, commonly used by different sectors and essential to basic human activities, availability differs along the main drainage basins and is often mismatched with sectoral demands. For example, the population is distributed unevenly, with 80% located in the water-scarce areas of the Pacific and Central regions of Nicaragua. Agricultural activities also take place in regions where water resources are vulnerable. The spatial distribution of water and energy resources, population and land use in Nicaragua allowed for the identification of three target regions for the analysis: the Pacific coast, the Dry Corridor zone, and the Atlantic region. Each of these zones has different challenges on which the CLEWs assessment focused. Water sources on the Pacific coast are mostly groundwater, and uncertainty exists regarding the long-term availability of this source. This is also the region where most of the sugarcane, an important source of revenue for Nicaragua, is produced. As sugarcane needs to be irrigated, this increases the pressure on water resources. The Dry Corridor is an arid stretch in Central America cyclically affected by droughts that have a severe impact on households whose economy and subsistence depend on the farming of grains and coffee beans. It is expected that climate change will further exacerbate the food security problem. When water is lacking, the population also experiences limited access to water for drinking and cooking. In addition, two major hydropower plants are located in this zone. Water resources are available from both surface and groundwater sources; however, due to their intensive use and vulnerability to climate, constraints on their availability can severely affect different sectors, presenting risks to food, water and energy security. Hydropower potential is foreseen to be exploited in the Matagalpa and Escondido River Basins draining to the Atlantic Ocean. Although competition for water resources is not as acute as in other regions, due to abundant surface water and lower population density, climate change and the use of land for grazing could present risks to the exploitation of the renewable energy potential. This could have an impact on medium- and long-term energy planning and the ambition of decreasing fuel imports for electricity generation and increasing electricity access. To assess the potential implications of these challenges and provide insights on solutions where conflicts are most stringent, in line with sustainable development priorities, the CLEWs framework was used to integrate the resource system models. WEAP was used for the representation of the water and land use systems, and then soft-linked with the energy systems model for Nicaragua, developed using the long-term energy planning tool OSeMOSYS. Hydropower expansion, the development of the electricity system, water availability for crop production, water allocation across sectors, sugarcane cultivation and the use of its by-products in electricity generation, and the potential impacts of climate change are among the issues investigated with the region-specific scenarios defined for the study.

  3. Chloride circulation in a lowland catchment and the formulation of transport by travel time distributions

    NASA Astrophysics Data System (ADS)

    Benettin, Paolo; van der Velde, Ype; van der Zee, Sjoerd E. A. T. M.; Rinaldo, Andrea; Botter, Gianluca

    2013-08-01

    Travel times are fundamental catchment descriptors that blend key information about storage, geochemistry, flow pathways and sources of water into a coherent mathematical framework. Here we analyze travel time distributions (TTDs) (and related attributes) estimated on the basis of the extensive hydrochemical information available for the Hupsel Brook lowland catchment in the Netherlands. The relevance of the work is perceived to lie in the general importance of characterizing nonstationary TTDs to capture catchment transport properties, here chloride flux concentrations at the basin outlet. The relative roles of evapotranspiration, water storage dynamics, hydrologic pathways and mass sources/sinks are discussed. Different hydrochemical models are tested and ranked, providing compelling examples of the improved process understanding achieved through coupled calibration of flow and transport processes. The ability of the model to reproduce measured flux concentrations is shown to lie mostly in the description of nonstationarities of TTDs at multiple time scales, including short-term fluctuations induced by soil moisture dynamics in the root zone and long-term seasonal dynamics. Our results prove reliable and suggest, for instance, that drastically reducing fertilization loads for one or more years would not result in significant permanent decreases in average solute concentrations in the Hupsel runoff because of the long memory shown by the system. Through comparison of field and theoretical evidence, our results highlight, unambiguously, the basic transport mechanisms operating in the catchment at hand, with a view to general applications.
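
    As a zeroth-order illustration of transport by travel time distributions, the sketch below convolves an input concentration history with a stationary exponential TTD. The paper's central point is that realistic TTDs are nonstationary, so this is only the simplified baseline picture, with all parameter values assumed.

        import numpy as np

        # Outlet flux concentration as the convolution of the input history with
        # a stationary exponential TTD: C_out(t) = int C_in(t - tau) p(tau) dtau.
        dt = 1.0                                    # days
        tau = np.arange(0.0, 2000.0, dt)
        mean_tt = 300.0                             # assumed mean travel time (days)
        p = np.exp(-tau / mean_tt) / mean_tt        # exponential TTD (well-mixed store)

        t = np.arange(0.0, 4000.0, dt)
        c_in = 10.0 + 5.0 * np.sin(2.0 * np.pi * t / 365.0)   # seasonal chloride input

        c_out = np.convolve(c_in, p)[: t.size] * dt           # discretized convolution
        # c_out is damped and lagged relative to c_in, and responds to a load
        # reduction only over time scales of order the mean travel time.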

  4. The right to water in rural Punjab: assessing equitable access to water in the context of the ongoing Punjab Rural Water Supply Project.

    PubMed

    Samra, Shamsher; Crowley, Julia; Smith Fawzi, Mary C

    2011-12-15

    Although India is poised to meet its Millennium Development Goal for providing access to safe drinking water, there remains a worrying discrepancy in access between urban and rural areas. In 2006, 96% of the urban population versus 86% of the rural population obtained their drinking water from an improved water source. To increase access to potable water in rural areas, the World Bank and the state of Punjab have implemented the Punjab Rural Water Supply and Sanitation Project (PRWSS) to improve or construct water supply systems in 3,000 villages deemed to have inadequate access to clean drinking water. This study aimed to examine whether the right to water was fulfilled in six towns in rural Punjab during implementation of the PRWSS. The normative content of the right to water requires that water be of adequate quantity, safety, accessibility, affordability, and acceptability in terms of quality. While our findings suggest that the PRWSS improved water quality, they also indicate that access to water was limited due to affordability and the low socioeconomic status of some people living in the target communities. Copyright © 2011 Samra, Crowley, and Smith Fawzi. This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

  5. Quantification of the evolution of firm size distributions due to mergers and acquisitions.

    PubMed

    Lera, Sandro Claudio; Sornette, Didier

    2017-01-01

    The distribution of firm sizes is known to be heavy-tailed. To account for this stylized fact, previous economic models have focused mainly on growth through investments in a company's own operations (internal growth). The impact of mergers and acquisitions (M&A) on firm size (external growth) is thereby often not taken into consideration, notwithstanding its potentially large impact. In this article, we take a first step toward accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by showing that mergers and acquisitions exert a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A apparently has little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes.
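
    A discrete Monte Carlo analogue of external growth through M&A conveys the mechanism: repeatedly merging randomly chosen firm pairs fattens the tail of the size distribution, but only slowly. The Python toy below is a stand-in illustration, not the paper's integro-differential equation or its calibration.

        import numpy as np

        # Start from a narrow size distribution and repeatedly merge randomly
        # chosen pairs of firms (the acquirer absorbs the target).
        rng = np.random.default_rng(2)
        sizes = list(rng.lognormal(mean=0.0, sigma=0.25, size=10000))

        for _ in range(5000):
            i, j = rng.choice(len(sizes), size=2, replace=False)
            sizes[i] += sizes[j]
            sizes.pop(j)

        sizes = np.asarray(sizes)
        # The tail fattens only slowly with the number of mergers, echoing the
        # finding that M&A reshapes the distribution over very long time scales.
        print("max/median size ratio:", sizes.max() / np.median(sizes))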

  6. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  7. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  8. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 25... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Equipment General § 25.1310 Power source capacity and distribution. (a) Each installation whose functioning is required for type...

  9. Long-term monitoring of environmental disasters using multi-source remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Kuo, Y. C.; Chen, C. F.

    2017-12-01

    Environmental disasters are extreme events within the Earth system that cause deaths and injuries to humans, as well as damage and losses of valuable assets such as buildings, communication systems, farmland and forests. In disaster management, a large amount of multi-temporal spatial data is required. Multi-source remote sensing data with different spatial, spectral and temporal resolutions are widely applied to environmental disaster monitoring. With multi-source and multi-temporal high-resolution images, we conduct rapid, systematic and serial observations of economic damages and environmental disasters on Earth, based on three monitoring platforms: remote sensing, UAS (Unmanned Aircraft Systems) and ground investigation. The advantages of UAS technology include great mobility, real-time availability, and operability under a wider range of weather conditions. The system can produce long-term spatial distribution information on environmental disasters, obtaining high-resolution remote sensing data and field verification data in key monitoring areas. It also supports the prevention and control of ocean pollution, illegally disposed waste and pine pests at different scales. Meanwhile, digital photogrammetry can be applied, using the camera interior and exterior orientation parameters, to produce Digital Surface Model (DSM) data. The latest terrain environment information is simulated using DSM data and can serve as a reference for disaster recovery in the future.

  10. The high-resolution version of TM5-MP for optimized satellite retrievals: description and validation

    NASA Astrophysics Data System (ADS)

    Williams, Jason E.; Folkert Boersma, K.; Le Sager, Phillipe; Verstraeten, Willem W.

    2017-02-01

    We provide a comprehensive description of the high-resolution version of the TM5-MP global chemistry transport model, which is to be employed for deriving highly resolved vertical profiles of nitrogen dioxide (NO2), formaldehyde (CH2O), and sulfur dioxide (SO2) for use in satellite retrievals from platforms such as the Ozone Monitoring Instrument (OMI) and the TROPOspheric Monitoring Instrument (TROPOMI) on the Sentinel-5 Precursor. Comparing simulations conducted at horizontal resolutions of 3° × 2° and 1° × 1° reveals that differences of ±20% exist in the global seasonal distribution of 222Rn, being larger near specific coastal locations and tropical oceans. For tropospheric ozone (O3), analysis of the chemical budget terms shows that the impact on globally integrated photolysis rates is rather low, in spite of the higher spatial variability of meteorological data fields from ERA-Interim at 1° × 1°. Surface concentrations of O3 in high-NOx regions decrease between 5 and 10% at 1° × 1° due to a reduction in NOx recycling terms and an increase in the associated titration term of O3 by NO. At 1° × 1°, the net global stratosphere-troposphere exchange of O3 decreases by ~7%, with an associated shift in the hemispheric gradient. By comparing NO, NO2, HNO3 and peroxy-acetyl-nitrate (PAN) profiles against measurement composites, we show that TM5-MP captures the vertical distribution of NOx and long-lived NOx reservoirs at background locations, again with modest changes at 1° × 1°. Comparing monthly mean distributions of lightning NOx and applying ERA-Interim convective mass fluxes, we show that the vertical redistribution of lightning NOx changes, with enhanced release of NOx in the upper troposphere. We show that surface mixing ratios of both NO and NO2 are generally underestimated in both low- and high-NOx scenarios. For Europe, a negative bias exists for [NO] at the surface across the whole domain, with lower biases at 1° × 1° at only ~20% of sites. For NO2, biases are more variable, with lower (higher) biases at 1° × 1° occurring at ~35% (~20%) of sites, with the remainder showing little change. For CH2O, the impact of higher resolution on the chemical budget terms is rather modest, with changes of less than 5%. The simulated vertical distribution of CH2O agrees reasonably well with measurements at pristine locations, although column-integrated values are generally underestimated relative to satellite measurements in polluted regions. For SO2, the performance at 1° × 1° is principally governed by the quality of the emission inventory, with limited improvements in the site-specific biases, most of which show no significant change. For the vertical column, improvements occur near strong source regions, reducing the biases in the integrated column. For remote regions, missing biogenic source terms are inferred.

  11. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities, of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012) and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales, from diurnal and weekly to seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary on time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes and hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short-term variability should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al., 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes, with daily to weekly revisits and moderate to high spatial resolution.
Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and of the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities, with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets, as well as the crucial gaps in the capabilities of this constellation. References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.

  12. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  13. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

    This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. It allows the air quality data from a given monitoring site to be divided into a number of sectors or wedges based on wind direction, and annual mean values to be estimated for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects for the use of short-term data, for diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector), and for the uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors, suggesting that these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind-direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings by minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive, monitoring techniques can be used in preference to a larger number of low-cost but less reliable techniques. The methods described in this article have applications in local air quality management, source-receptor analysis, land-use regression mapping and modelling, and population exposure studies.
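
    The sector-division and correction idea lends itself to a compact implementation. The Python sketch below is a hedged illustration: the column names, the 45-degree sector width, and the multiplicative hour/month correction factors are assumptions standing in for the paper's exact procedure.

        import pandas as pd

        # Hourly observations with wind direction are split into wind sectors,
        # de-biased for diurnal and seasonal cycles, and averaged per sector.
        def sector_means(df: pd.DataFrame, sector_width: float = 45.0) -> pd.Series:
            df = df.copy()
            df["sector"] = (df["wind_dir"] // sector_width).astype(int)

            # Multiplicative hour-of-day and month correction factors, so that
            # unevenly sampled hours/seasons do not skew a sector's mean.
            overall = df["no2"].mean()
            hour_factor = df.groupby(df.index.hour)["no2"].mean() / overall
            month_factor = df.groupby(df.index.month)["no2"].mean() / overall
            corrected = (df["no2"]
                         / df.index.hour.map(hour_factor).values
                         / df.index.month.map(month_factor).values)
            return corrected.groupby(df["sector"]).mean()

        # Usage: df = pd.read_csv("obs.csv", parse_dates=["time"], index_col="time")
        #        print(sector_means(df))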

  14. INEEL Subregional Conceptual Model Report Volume 3: Summary of Existing Knowledge of Natural and Anthropogenic Influences on the Release of Contaminants to the Subsurface Environment from Waste Source Terms at the INEEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul L. Wichlacz

    2003-09-01

    This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of source-term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.

  15. The development of the rhizosphere: simulation of root exudation for two contrasting exudates: citrate and mucilage

    NASA Astrophysics Data System (ADS)

    Sheng, Cheng; Bol, Roland; Vetterlein, Doris; Vanderborght, Jan; Schnepf, Andrea

    2017-04-01

    Different types of root exudates and their effects on soil/rhizosphere properties have received a lot of attention. Since their influence on rhizosphere properties and processes depends on their concentration in the soil, assessing the spatio-temporal distribution of exudate concentration around roots is of key importance for understanding the functioning of the rhizosphere. Different root systems have different root architectures, and different types of root exudates diffuse in the rhizosphere with different diffusion coefficients; both factors shape the dynamics of the exudate concentration distribution in the rhizosphere. Hence, simulations of root exudation involving four plant root systems (Vicia faba, Lupinus albus, Triticum aestivum and Zea mays) and two root exudates (citrate and mucilage) were conducted. We consider a simplified root architecture in which each root is represented by a straight line. Assuming that root tips move at a constant velocity and that mucilage transport is linear, concentration distributions can be obtained from a convolution of the analytical solution of the transport equation in a stationary flow field for an instantaneous point-source injection with the spatio-temporal distribution of the source strength. By coupling this analytical solution with a root growth model that delivers the spatio-temporal source term, we simulated exudate concentration distributions for citrate and mucilage in MATLAB. From the simulation results, we infer the following about the rhizosphere: (a) the dynamics of root architecture development is the main driver of the exudate distribution in the root zone; (b) a steady rhizosphere of constant width is more likely to develop around individual roots when the diffusion coefficient is small. The simulations suggest that rhizosphere development depends on root and exudate properties as follows: the dynamics of the root architecture result in various development patterns of the rhizosphere. The results improve our understanding of the impact of the spatial and temporal heterogeneity of exudate input on rhizosphere development for different root system types and substances. In future work, we will use the simulation tool to infer critical parameters that determine the spatio-temporal extent of the rhizosphere from experimental data.
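
    The superposition described above, convolving an instantaneous point-source solution with a moving source, can be sketched in a few lines. The study used MATLAB; the Python fragment below is an illustrative re-expression with assumed parameter values, pure diffusion in place of the full flow field, and a first-order decay standing in for exudate degradation.

        import numpy as np

        # A root tip moves along the z-axis at speed v, releasing exudate at rate
        # q; the concentration is the discretized convolution (superposition) of
        # instantaneous point-source diffusion solutions with first-order decay.
        D = 1e-6   # cm^2/s, effective diffusion coefficient (illustrative)
        v = 1e-4   # cm/s, root tip elongation rate (illustrative)
        q = 1e-9   # g/s, exudation rate at the tip (illustrative)
        k = 1e-6   # 1/s, first-order decay of the exudate (illustrative)

        def concentration(r, z, t, n_steps=400):
            """Exudate concentration at radial distance r and depth z at time t."""
            taus = np.linspace(0.0, t - 60.0, n_steps)   # release times (s)
            dtau = taus[1] - taus[0]
            age = t - taus                               # age of each released parcel
            dist2 = r**2 + (z - v * taus) ** 2
            green = np.exp(-dist2 / (4 * D * age) - k * age) / (4 * np.pi * D * age) ** 1.5
            return q * dtau * green.sum()

        # Concentration 0.05 cm beside the root path, at the depth the tip passed
        # one day into a two-day simulation:
        print(concentration(r=0.05, z=v * 86400.0, t=2 * 86400.0))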

  16. PolEASIA Project: Pollution in Eastern Asia - towards better Air Quality Prevision and Impacts' Evaluation

    NASA Astrophysics Data System (ADS)

    Dufour, Gaëlle; Albergel, Armand; Balkanski, Yves; Beekmann, Matthias; Cai, Zhaonan; Fortems-Cheiney, Audrey; Cuesta, Juan; Derognat, Claude; Eremenko, Maxim; Foret, Gilles; Hauglustaine, Didier; Lachatre, Matthieu; Laurent, Benoit; Liu, Yi; Meng, Fan; Siour, Guillaume; Tao, Shu; Velay-Lasry, Fanny; Zhang, Qijie; Zhang, Yuli

    2017-04-01

    The rapid economic development and urbanization of China during recent decades have resulted in rising pollutant emissions, leading to some of the largest pollutant concentrations in the world for the major pollutants (ozone, PM2.5, and PM10). Robust monitoring and forecasting systems, associated with downstream services providing comprehensive risk indicators, are highly needed to establish efficient pollution mitigation strategies. In addition, a precise evaluation of the present and future impacts of Chinese pollutant emissions is important to quantify: first, the consequences of pollutant export on atmospheric composition and air quality over the globe; second, the additional radiative forcing induced by the emitted and produced short-lived climate forcers (ozone and aerosols); third, the long-term health consequences of pollution exposure. To achieve this, a detailed understanding of East Asian pollution is necessary. The French PolEASIA project aims to address these issues by providing a better quantification of the sources and distributions of the major pollutants, as well as of their recent and future evolution. The main objectives, methodologies and tools of this newly started four-year project will be presented. An ambitious synergistic and multi-scale approach coupling innovative satellite observations, in situ measurements and chemical transport model simulations will be developed to characterize the spatial distribution, the interannual to daily variability and the trends of the major pollutants (ozone and aerosols) and their sources over East Asia, and to quantify the role of the different processes (emissions, transport, chemical transformation) driving the observed pollutant distributions. Particular attention will be paid to assessing the natural and anthropogenic contributions to East Asian pollution. Progress made in understanding pollutant sources, especially in terms of modeling of pollution over East Asia and advanced numerical approaches such as inverse modeling, will serve the development of an efficient and marketable forecasting system for regional outdoor air pollution. The performance of this upgraded forecasting system will be evaluated and promoted to ensure good visibility of the French technology. In addition, the contribution of Chinese pollution to the regional and global atmospheric composition, as well as the resulting radiative forcing of short-lived species, will be determined using both satellite observations and model simulations. Health Impact Assessment (HIA) methods coupled with model simulations will be used to estimate the long-term impacts of exposure to pollutants (PM2.5 and ozone) on cardiovascular and respiratory mortality. First results obtained in this framework will be presented.

  17. Organic aerosols over Indo-Gangetic Plain: Sources, distributions and climatic implications

    NASA Astrophysics Data System (ADS)

    Singh, Nandita; Mhawish, Alaa; Deboudt, Karine; Singh, R. S.; Banerjee, Tirthankar

    2017-05-01

    Organic aerosol (OA) constitutes a dominant fraction of airborne particulates over the Indo-Gangetic Plain (IGP), especially during the post-monsoon and winter. Exposure to OA has been associated with adverse health effects, while there is evidence of its interference with the Earth's radiation balance and cloud condensation (CC), resulting in possible alteration of the hydrological cycle. The presence and effects of OA therefore link it directly with food security and, thereby, sustainability issues. In these contexts, the atmospheric chemistry of the formation, volatility and aging of primary OA (POA) and secondary OA (SOA) is reviewed with specific reference to the IGP. A systematic review of the science of OA sources, evolution and climate perturbations is presented, drawing on databases collected from 82 publications available throughout the IGP up to 2016. Both gaseous and aqueous-phase chemical reactions were studied in terms of their potential to form SOA. Efforts were made to recognize the regional variation of OA, its chemical constituents and its sources throughout the IGP, and inferences were made on its possible impacts on regional air quality. Mass fractions of OA in airborne particulates varied spatially: Lahore (37 and 44% in fine and coarse fractions, respectively), Patiala (28 and 37%), Delhi (25 and 38%), Kanpur (24 and 30%), Kolkata (11 and 21%) and Dhaka. Source apportionment studies indicate biomass burning, coal combustion and vehicular emissions as the predominant OA sources. However, sources show considerable seasonal variation, with gasoline and diesel emissions dominating during summer and coal- and biomass-based emissions during winter and the post-monsoon. Crop residue burning over the upper IGP was also frequently held responsible for massive OA emissions, mostly characterized by a hygroscopic nature and thus having the potential to act as CC nuclei. Finally, the climatic implications of particulate-bound OA are discussed in terms of its interaction with the radiation balance.

  18. Investigation of Magnetotelluric Source Effect Based on Twenty Years of Telluric and Geomagnetic Observation

    NASA Astrophysics Data System (ADS)

    Kis, A.; Lemperger, I.; Wesztergom, V.; Menvielle, M.; Szalai, S.; Novák, A.; Hada, T.; Matsukiyo, S.; Lethy, A. M.

    2016-12-01

    The magnetotelluric method is widely applied for the investigation of subsurface structures by imaging the spatial distribution of electric conductivity. The method is based on the experimental determination of the surface electromagnetic impedance tensor (Z) from surface geomagnetic and telluric recordings in two perpendicular orientations. In practical exploration, the accurate estimation of Z necessitates the application of robust statistical methods for two reasons: (1) the geomagnetic and telluric time series are contaminated by man-made noise, and (2) the non-homogeneous behavior of ionospheric current systems in the period range of interest (ELF-ULF and longer periods) results in systematic deviation of the impedance of individual time windows. Robust statistics can suppress both effects on Z for the purpose of subsurface investigations. However, accurate analysis of the long-term temporal variation of the first and second statistical moments of Z may provide valuable information about the characteristics of the ionospheric source current systems. Temporal variation of the extent, spatial variability and orientation of the ionospheric source currents has specific effects on the surface impedance tensor. Twenty years of geomagnetic and telluric recordings from the Nagycenk Geophysical Observatory provide a unique opportunity to reconstruct the so-called magnetotelluric source effect and obtain information about the spatial and temporal behavior of ionospheric source currents at mid-latitudes. A detailed investigation of the time series of the surface electromagnetic impedance tensor has been carried out in different frequency classes of the ULF range. The presentation provides a brief review of our results on long-term periodic modulations, up to the solar-cycle scale, and on occasional deviations of the electromagnetic impedance and hence of the reconstructed equivalent ionospheric source effects.
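
    As a minimal illustration of the impedance estimation step described above, the sketch below solves E = Z·H for the 2×2 tensor Z by ordinary least squares over a stack of synthetic time windows. All values are hypothetical, and the observatory processing itself uses robust statistics that this toy deliberately omits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical Fourier coefficients of the horizontal magnetic field
    # (Hx, Hy) for 500 time windows at a single evaluation period.
    n_win = 500
    H = rng.normal(size=(n_win, 2)) + 1j * rng.normal(size=(n_win, 2))

    # Assumed "true" impedance tensor used to synthesize the electric field.
    Z_true = np.array([[0.1 + 0.2j, 1.5 - 0.3j],
                       [-1.4 + 0.2j, -0.1 + 0.1j]])
    E = H @ Z_true.T
    E += 0.05 * (rng.normal(size=E.shape) + 1j * rng.normal(size=E.shape))

    # Ordinary least-squares estimate of Z from E = Z H, stacking all
    # windows into one linear system (real processing is robust, not OLS).
    Z_est_T, *_ = np.linalg.lstsq(H, E, rcond=None)
    print(np.round(Z_est_T.T, 3))
    ```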

  19. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models to describe the measurement and chemical transport model error statistics and the prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities, and inverse source estimates are derived for fixed values of the pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches that allow a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that computes moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
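
    As an illustration of the approach, the sketch below applies random-walk Metropolis sampling, one of the simplest MCMC variants, to a toy linear tracer inversion with a Gaussian likelihood and prior. The transport matrix G and all parameter values are hypothetical stand-ins, not the TransCom3 setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy linear forward model d = G s + noise (hypothetical transport matrix).
    G = rng.normal(size=(20, 3))
    s_true = np.array([2.0, -1.0, 0.5])
    d = G @ s_true + 0.1 * rng.normal(size=20)

    def log_post(s, sigma=0.1, prior_sd=10.0):
        # Gaussian likelihood plus a broad Gaussian prior on the sources.
        r = d - G @ s
        return -0.5 * np.sum(r**2) / sigma**2 - 0.5 * np.sum(s**2) / prior_sd**2

    # Random-walk Metropolis: propose a step, accept with min(1, ratio).
    s, lp = np.zeros(3), log_post(np.zeros(3))
    chain = []
    for _ in range(20000):
        s_prop = s + 0.05 * rng.normal(size=3)
        lp_prop = log_post(s_prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            s, lp = s_prop, lp_prop
        chain.append(s.copy())

    chain = np.array(chain[5000:])           # discard burn-in
    print(chain.mean(axis=0), chain.std(axis=0))
    ```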

  20. Chandra Detection of Intracluster X-Ray Sources in Virgo

    NASA Astrophysics Data System (ADS)

    Hou, Meicun; Li, Zhiyuan; Peng, Eric W.; Liu, Chengze

    2017-09-01

    We present a survey of X-ray point sources in the nearest and dynamically young galaxy cluster, Virgo, using archival Chandra observations that sample the vicinity of 80 early-type member galaxies. The X-ray source populations at the outskirts of these galaxies are of particular interest. We detect a total of 1046 point sources (excluding galactic nuclei) out to a projected galactocentric radius of ~40 kpc and down to a limiting 0.5-8 keV luminosity of ~2 × 10^38 erg s^-1. Based on the cumulative spatial and flux distributions of these sources, we statistically identify ~120 excess sources that are not associated with the main stellar content of the individual galaxies, nor with the cosmic X-ray background. This excess is significant at a 3.5σ level, when Poisson error and cosmic variance are taken into account. On the other hand, no significant excess sources are found at the outskirts of a control sample of field galaxies, suggesting that at least some fraction of the excess sources around the Virgo galaxies are truly intracluster X-ray sources. Assisted with ground-based and HST optical imaging of Virgo, we discuss the origins of these intracluster X-ray sources, in terms of supernova-kicked low-mass X-ray binaries (LMXBs), globular clusters, LMXBs associated with the diffuse intracluster light, stripped nucleated dwarf galaxies and free-floating massive black holes.

  1. Rebound of a coal tar creosote plume following partial source zone treatment with permanganate.

    PubMed

    Thomson, N R; Fraser, M J; Lamarche, C; Barker, J F; Forsey, S P

    2008-11-14

    The long-term management of dissolved plumes originating from a coal tar creosote source is a technical challenge. For some sites stabilization of the source may be the best practical solution to decrease the contaminant mass loading to the plume and associated off-site migration. At the bench-scale, the deposition of manganese oxides, a permanganate reaction byproduct, has been shown to cause pore plugging and the formation of a manganese oxide layer adjacent to the non-aqueous phase liquid creosote which reduces post-treatment mass transfer and hence mass loading from the source. The objective of this study was to investigate the potential of partial permanganate treatment to reduce the ability of a coal tar creosote source zone to generate a multi-component plume at the pilot-scale over both the short-term (weeks to months) and the long-term (years) at a site where there is >10 years of comprehensive synoptic plume baseline data available. A series of preliminary bench-scale experiments were conducted to support this pilot-scale investigation. The results from the bench-scale experiments indicated that if sufficient mass removal of the reactive compounds is achieved then the effective solubility, aqueous concentration and rate of mass removal of the more abundant non-reactive coal tar creosote compounds such as biphenyl and dibenzofuran can be increased. Manganese oxide formation and deposition caused an order-of-magnitude decrease in hydraulic conductivity. Approximately 125 kg of permanganate were delivered into the pilot-scale source zone over 35 days, and based on mass balance estimates <10% of the initial reactive coal tar creosote mass in the source zone was oxidized. Mass discharge estimated at a down-gradient fence line indicated >35% reduction for all monitored compounds except for biphenyl, dibenzofuran and fluoranthene 150 days after treatment, which is consistent with the bench-scale experimental results. Pre- and post-treatment soil core data indicated a highly variable and random spatial distribution of mass within the source zone and provided no insight into the mass removed of any of the monitored species. The down-gradient plume was monitored approximately 1, 2 and 4 years following treatment. The data collected at 1 and 2 years post-treatment showed a decrease in mass discharge (10 to 60%) and/or total plume mass (0 to 55%); however, by 4 years post-treatment there was a rebound in both mass discharge and total plume mass for all monitored compounds to pre-treatment values or higher. The variability of the data collected was too large to resolve subtle changes in plume morphology, particularly near the source zone, that would provide insight into the impact of the formation and deposition of manganese oxides that occurred during treatment on mass transfer and/or flow by-passing. Overall, the results from this pilot-scale investigation indicate that there was a significant but short-term (months) reduction of mass emanating from the source zone as a result of permanganate treatment but there was no long-term (years) impact on the ability of this coal tar creosote source zone to generate a multi-component plume.

  2. Assessment of optimum threshold and particle shape parameter for the image analysis of aggregate size distribution of concrete sections

    NASA Astrophysics Data System (ADS)

    Ozen, Murat; Guler, Murat

    2014-02-01

    Aggregate gradation is one of the key design parameters affecting the workability and strength properties of concrete mixtures. Estimating aggregate gradation from hardened concrete samples can offer valuable insights into the quality of mixtures in terms of the degree of segregation and the amount of deviation from the specified gradation limits. In this study, a methodology is introduced to determine the particle size distribution of aggregates from 2D cross-sectional images of concrete samples. The samples used in the study were fabricated from six mix designs by varying the aggregate gradation, aggregate source and maximum aggregate size, with five replicates of each design combination. Each sample was cut into three pieces using a diamond saw and then scanned to obtain cross-sectional images using a desktop flatbed scanner. An algorithm is proposed to determine the optimum threshold for the image analysis of the cross sections. A procedure is also suggested to determine a suitable particle shape parameter for the analysis of the aggregate size distribution within each cross section. Results of the analyses indicated that the optimum threshold, and hence the pixel distribution functions, may differ even between cross sections of the same concrete sample. Moreover, the maximum Feret diameter is the most suitable shape parameter for estimating the size distribution of aggregates when computed based on the diagonal sieve opening. The outcome of this study can be of practical value to practitioners for evaluating concrete in terms of the degree of segregation and the bounds of the mixture's gradation achieved during manufacturing.
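
    A minimal sketch of the segmentation-and-sizing pipeline, using scikit-image (version 0.18 or later for the Feret measure), with Otsu's global threshold standing in for the paper's optimum-threshold algorithm; the synthetic image and area cutoff are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    # Synthetic grayscale cross-section image (placeholder for a scanned slice).
    rng = np.random.default_rng(2)
    img = rng.normal(0.3, 0.05, size=(200, 200))
    img[60:120, 50:140] += 0.4                      # one bright "aggregate"

    # Global Otsu threshold (a stand-in for the study's optimum threshold).
    binary = img > threshold_otsu(img)

    # Maximum Feret diameter of each segmented particle, in pixels; mapping
    # to a sieve size would use the diagonal sieve opening, per the paper.
    for region in regionprops(label(binary)):
        if region.area > 50:                        # ignore noise specks
            print(region.label, region.feret_diameter_max)
    ```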

  3. Long-term monitoring of persistent organic pollutants (POPs) at the Norwegian Troll station in Dronning Maud Land, Antarctica

    NASA Astrophysics Data System (ADS)

    Kallenborn, R.; Breivik, K.; Eckhardt, S.; Lunder, C. R.; Manø, S.; Schlabach, M.; Stohl, A.

    2013-07-01

    A first long-term monitoring of selected persistent organic pollutants (POPs) in Antarctic air has been conducted at the Norwegian research station Troll (Dronning Maud Land). The target contaminants were 32 PCB congeners, α- and γ-hexachlorocyclohexane (HCH), trans- and cis-chlordane, trans- and cis-nonachlor, p,p'- and o,p'-DDT, DDD and DDE, as well as hexachlorobenzene (HCB). The monitoring program, with weekly samples taken during the period 2007-2010, was coordinated with the parallel program at the Norwegian Arctic monitoring site (Zeppelin mountain, Ny-Ålesund, Svalbard) in terms of priority compounds, sampling schedule and analytical methods. The POP concentration levels found in Antarctica were considerably lower than Arctic atmospheric background concentrations. As in the Arctic samples, HCB was the predominant POP compound, with levels of around 22 pg m^-3 throughout the entire monitoring period. In general, the following concentration distribution was found for the Troll samples analyzed: HCB > Sum HCH > Sum PCB > Sum DDT > Sum chlordanes. Atmospheric long-range transport was identified as a major contamination source for POPs in Antarctic environments. Several long-range transport events with elevated levels of pesticides and/or compounds with industrial sources were identified based on retroplume calculations with a Lagrangian particle dispersion model (FLEXPART).

  4. SeeStar: an open-source, low-cost imaging system for subsea observations

    NASA Astrophysics Data System (ADS)

    Cazenave, F.; Kecy, C. D.; Haddock, S.

    2016-02-01

    Scientists and engineers at the Monterey Bay Aquarium Research Institute (MBARI) have collaborated to develop SeeStar, a modular, lightweight, self-contained, low-cost subsea imaging system for short- to long-term monitoring of marine ecosystems. SeeStar is composed of separate camera, battery, and LED lighting modules. Two versions of the system exist: one rated to 300 meters depth, the other rated to 1500 meters. Users can download plans and instructions from an online repository and build the system using low-cost off-the-shelf components. The system utilizes an easily programmable Arduino-based controller and the widely distributed GoPro camera. It can be deployed in a variety of scenarios, capturing still images and video, and can be operated either autonomously or tethered on a range of platforms, including ROVs, AUVs, landers, piers, and moorings. Several SeeStar systems have been built and used for scientific studies and engineering tests. The long-term goal of this project is a widely distributed marine imaging network across thousands of locations, to develop baselines of biological information.

  5. Statistical study of cold-dense plasma sheet: spatial distribution and semi-annual variation

    NASA Astrophysics Data System (ADS)

    Shi, Q.; Bai, S.; Tian, A.; Nowada, M.; Degeling, A. W.; Zhou, X. Z.; Zong, Q.; Rae, J.; Fu, S.; Zhang, H.; Pu, Z.; Fazakerley, A. N.

    2017-12-01

    The cold-dense plasma sheet (CDPS), which plays an important role in solar wind-magnetosphere coupling during geomagnetically quiet times, is often observed in the magnetosphere and is also considered an important particle source for the ring current during geomagnetic storms. However, the long-term variation of CDPS occurrence has not been investigated. Using 21 years of Geotail data (1996-2016), we found 677 CDPS events and investigated the long-term variation of their occurrence. The spatial distribution of the CDPS is also investigated using in situ Geotail observations. Since solar wind entry occurs more readily under stronger northward IMF conditions, we investigated the IMF conditions using 49 years of IMF data (1968-2016) from the OMNI data set. We found that both the CDPS occurrence and positive IMF Bz show semi-annual variations, and that the variation of positive IMF Bz is consistent with the Russell-McPherron (R-M) effect. We therefore consider the semi-annual variation of CDPS occurrence to be related to the R-M effect.

  6. Delay and cost performance analysis of the Diffie-Hellman key exchange protocol in opportunistic mobile networks

    NASA Astrophysics Data System (ADS)

    Soelistijanto, B.; Muliadi, V.

    2018-03-01

    Diffie-Hellman (DH) provides an efficient key exchange system by reducing the number of cryptographic keys distributed in the network. In this method, a node broadcasts a single public key to all nodes in the network, and in turn each peer uses this key to establish a shared secret key, which can then be utilized to encrypt and decrypt traffic between the peer and the given node. In this paper, we evaluate the key transfer delay and cost performance of DH in opportunistic mobile networks, a specific scenario of MANETs in which complete end-to-end paths rarely exist between sources and destinations; consequently, end-to-end delays in these networks are much greater than in typical MANETs. Simulation results, driven by a random node movement model and real human mobility traces, showed that DH outperforms a typical key distribution scheme based on the RSA algorithm in terms of key transfer delay, measured by average key convergence time; however, DH performs as well as the benchmark in terms of key transfer cost, evaluated by total key (copies) forwards.
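
    For reference, the sketch below shows the textbook finite-field Diffie-Hellman exchange the paper builds on: one public value per node, and a shared secret derived independently by each peer. The prime and generator here are toy values for illustration, not a secure group.

    ```python
    import secrets

    # Textbook finite-field Diffie-Hellman with a small demo prime.
    # Real deployments use standardized large groups (e.g., 2048-bit).
    p = 4294967291        # largest prime below 2**32; illustration only
    g = 5

    a = secrets.randbelow(p - 2) + 2      # node A's private key
    b = secrets.randbelow(p - 2) + 2      # node B's private key

    A = pow(g, a, p)                      # public value broadcast by node A
    B = pow(g, b, p)                      # public value broadcast by node B

    # Each peer combines its private key with the other's public value.
    shared_A = pow(B, a, p)
    shared_B = pow(A, b, p)
    assert shared_A == shared_B           # both sides hold the same secret
    ```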

  7. Imaging the complex geometry of a magma reservoir using FEM-based linear inverse modeling of InSAR data: application to Rabaul Caldera, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan

    2017-06-01

    We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for broad, long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements to obtain a 3-D image of the deformation source. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and of remeshing a variable-geometry domain. Without assuming an a priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the east side and of the past Vulcan volcano eruptions of more evolved materials on the west side. The interconnection and spatial distribution of sources correspond to the petrography of the volcanic products described in the literature and to the dynamics of the single and twin eruptions that characterize the caldera. The ability to image the complex geometry of deformation sources in both space and time can improve our ability to monitor active volcanoes, widen our understanding of the dynamics of active volcanic systems and improve predictions of eruptions.
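
    A minimal sketch of the regularized linear inversion at the heart of such a scheme: given a precomputed Green's function matrix mapping elementary-source pressurizations to line-of-sight displacements, damped least squares recovers the pressure distribution. The matrix, noise level, and damping here are hypothetical; the paper's FEM-derived kernels and regularization details differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical Green's function matrix: LOS displacement at n_data
    # InSAR pixels per unit pressurization of n_src elementary sources.
    n_data, n_src = 300, 40
    Gmat = rng.normal(size=(n_data, n_src))
    p_true = np.zeros(n_src)
    p_true[15:20] = 1.0                   # a compact pressurized region
    d = Gmat @ p_true + 0.05 * rng.normal(size=n_data)

    # Zeroth-order Tikhonov (damped least squares); a scheme like the
    # paper's would also smooth spatially, replacing I with a roughness
    # operator L built from finite differences on the source grid.
    lam = 1.0
    lhs = Gmat.T @ Gmat + lam**2 * np.eye(n_src)
    p_est = np.linalg.solve(lhs, Gmat.T @ d)
    print(np.round(p_est[10:25], 2))      # recovers the pressurized block
    ```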

  8. The Compositional Evolution of C/2012 S1 (ISON) from Ground-Based High-Resolution Infrared Spectroscopy as Part of a Worldwide Observing Campaign

    NASA Technical Reports Server (NTRS)

    Russo, N. Dello; Vervack, R. J., Jr.; Kawakita, H.; Cochran, A.; McKay, A. J.; Harris, W. M.; Weaver, H.A.; Lisse, C. M.; DiSanti, M. A.; Kobayashi, H.

    2015-01-01

    Volatile production rates, relative abundances, rotational temperatures, and spatial distributions in the coma were measured in C/2012 S1 (ISON) using long-slit, high-dispersion (λ/Δλ ≈ 2.5 × 10^4) infrared spectroscopy as part of a worldwide observing campaign. Spectra were obtained on UT 2013 October 26 and 28 with NIRSPEC (Near Infrared Spectrometer) at the W.M. Keck Observatory, and on UT 2013 November 19 and 20 with CSHELL (Cryogenic Echelle Spectrograph) at the NASA IRTF (Infrared Telescope Facility). H2O was detected on all dates, with production rates increasing markedly from (8.7 ± 1.5) × 10^27 molecules s^-1 on October 26 (heliocentric distance 1.12 AU) to (3.7 ± 0.4) × 10^29 molecules s^-1 on November 20 (heliocentric distance 0.43 AU). Short-term variability of H2O production is also seen, as observations on November 19 show an increase in H2O production rate of nearly a factor of two over a period of about 6 hours. C2H6, CH3OH and CH4 abundances in ISON are slightly depleted relative to H2O when compared to mean values for comets measured at infrared wavelengths. On the November dates, C2H2, HCN and OCS abundances relative to H2O appear to be within the range of mean values, whereas H2CO and NH3 were significantly enhanced. There is evidence that the abundances with respect to H2O increased for some species but not others between October 28 (heliocentric distance 1.07 AU) and November 19 (heliocentric distance 0.46 AU). The high mixing ratios of H2CO to CH3OH and C2H2 to C2H6 on November 19, and the changes in the mixing ratios of some species with respect to H2O between October 28 and November 19, indicate compositional changes that may result from a transition from sampling radiation-processed outer layers in this dynamically new comet to sampling more pristine natal material, as the processed outer layer was increasingly eroded and the thermal wave propagated into the nucleus while the comet approached perihelion for the first time. On November 19 and 20, the spatial distribution of dust appears asymmetric and enhanced in the antisolar direction, whereas the spatial distributions of volatiles (excepting CN) appear symmetric, with their peaks slightly offset in the sunward direction compared to the dust. Spatial distributions of H2O, HCN, C2H6, C2H2, and H2CO on November 19 show no definitive evidence for significant contributions from extended sources; however, broader spatial distributions of NH3 and OCS may be consistent with extended sources for these species. Abundances of HCN and C2H2 on November 19 and 20 are insufficient to account for reported abundances of CN and C2 in ISON near this time. Differences in the HCN and CN spatial distributions are also consistent with HCN being only a minor source of CN in ISON on November 19, as the spatial distribution of CN in the coma suggests a dominant distributed source correlated with dust rather than volatile release. The spatial distributions of NH3 and NH2 are similar, suggesting that NH3 is the primary source of NH2 with no evidence of a significant dust source of NH2; however, the higher production rates derived for NH3 compared to NH2 on November 19 and 20 remain unexplained. This suggests that a more complete analysis, treating NH2 as a distributed source and accounting for its emission mechanism, is needed in future work.

  9. The North American Energy System: Overview of the 3rd Chapter of SOCCR-2

    NASA Astrophysics Data System (ADS)

    Marcotullio, P. J.

    2016-12-01

    North America, including Canada, Mexico and the United States, has a large and complex energy system, which includes the extraction and conversion of primary energy sources and their storage, transmission, distribution and ultimate end use in the building, transportation and industrial sectors. The chapter provides an overview of this system, focusing on our understanding of energy trends and system feedback dynamics, key drivers of change, the resulting carbon emissions, and the basis for carbon management. We also put the carbon emissions from the North American system in global context. Highlights include changes to the system (sources, fuel mix, drivers, infrastructure, etc.) over the past decade, and a review of scenarios that provide glimpses into future emission levels and into meeting the requirements for decarbonization in the medium and longer term.

  10. Survey of ion plating sources. [conferences

    NASA Technical Reports Server (NTRS)

    Spalvins, T.

    1979-01-01

    Based on the type of evaporation source, gaseous media and mode of transport, the following are discussed: resistance, electron beam, sputtering, reactive and ion beam evaporation. Ionization efficiencies and ion energies in the glow discharge determine the percentage of atoms that are ionized under typical ion plating conditions. The plating flux consists of a small number of energetic ions and a large number of energetic neutrals. The energy distribution ranges from thermal energies up to the maximum energy of the discharge. The various reaction mechanisms that contribute to the exceptionally strong adherence, i.e. the formation of a graded substrate/coating interface, are not fully understood; however, the controlling factors are evaluated. The influence of process variables on nucleation and growth characteristics is illustrated in terms of morphological changes that affect the mechanical and tribological properties of the coating.

  11. Superthermal photon bunching in terms of simple probability distributions

    NASA Astrophysics Data System (ADS)

    Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.

    2018-05-01

    We analyze the second-order photon autocorrelation function g^(2) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g^(2)(0) > 2]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasing-like state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.
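
    The mixture mechanism can be checked directly from the definition g^(2)(0) = <n(n-1)>/<n>^2. The sketch below evaluates it for a thermal state, a Poissonian (lasing-like) state, and a 50/50 mixture of the two, with the mixture exceeding the thermal value of 2; the mean photon numbers are arbitrary illustrative choices.

    ```python
    import numpy as np
    from scipy.stats import poisson

    n = np.arange(200)   # photon-number axis (truncation is negligible here)

    def g2_zero(p):
        # g2(0) = <n(n-1)> / <n>**2 for a photon-number distribution p(n).
        mean = np.sum(n * p)
        return np.sum(n * (n - 1) * p) / mean**2

    def thermal(nbar):
        # Bose-Einstein (thermal) photon-number distribution.
        return (1.0 / (1.0 + nbar)) * (nbar / (1.0 + nbar)) ** n

    # Mixture of a bright thermal state and a dim lasing-like state.
    p_mix = 0.5 * thermal(10.0) + 0.5 * poisson.pmf(n, 1.0)

    print(round(g2_zero(thermal(10.0)), 2))        # ~2.0 (thermal)
    print(round(g2_zero(poisson.pmf(n, 1.0)), 2))  # ~1.0 (lasing-like)
    print(round(g2_zero(p_mix), 2))                # >2   (superthermal)
    ```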

  12. Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2009-01-01

    Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
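
    The Poisson assumption is what makes aggregation simple: independent Poisson sources combine into a single Poisson process whose rate is the sum of the individual rates, as in the sketch below (the rates and exposure time are hypothetical).

    ```python
    import numpy as np

    # Hypothetical annual rates for three independent tsunami sources.
    rates = np.array([1e-3, 4e-4, 2.5e-4])   # events per year
    T = 50.0                                  # exposure time in years

    # Independent Poisson processes superpose into one Poisson process.
    lam_total = rates.sum()
    p_one_or_more = 1.0 - np.exp(-lam_total * T)
    print(f"P(at least one event in {T:.0f} yr) = {p_one_or_more:.3f}")
    ```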

  13. Excitation efficiency of an optical fiber core source

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.; Tai, Alan C.

    1992-01-01

    The exact field solution of a step-index profile fiber is used to determine the excitation efficiency of a distribution of sources in the core of an optical fiber. Previous results for a thin-film cladding source distribution are used for comparison with the core-source counterpart. The behavior of the power efficiency with the fiber parameters is examined and found to be similar to that exhibited by cladding sources. It is also found that a core-source fiber is two orders of magnitude more efficient than a fiber with a bulk distribution of cladding sources. This result agrees qualitatively with previous experimental results.

  14. A probabilistic analysis of cumulative carbon emissions and long-term planetary warming

    DOE PAGES

    Fyke, Jeremy Garmeson; Matthews, H. Damon

    2015-11-16

    Efforts to mitigate and adapt to long-term climate change could benefit greatly from probabilistic estimates of cumulative carbon emissions due to fossil fuel burning and resulting CO2-induced planetary warming. Here we demonstrate the use of a reduced-form model to project these variables. We performed simulations using a large-ensemble framework with parametric uncertainty sampled to produce distributions of future cumulative emissions and consequent planetary warming. A hindcast ensemble of simulations captured 1980–2012 historical CO2 emissions trends, and an ensemble of future projection simulations generated a distribution of emission scenarios that qualitatively resembled the suite of Representative and Extended Concentration Pathways. The resulting cumulative carbon emission and temperature change distributions are characterized by 5–95th percentile ranges of 0.96–4.9 teratonnes C (Tt C) and 1.4 °C–8.5 °C, respectively, with 50th percentiles at 3.1 Tt C and 4.7 °C. Within the wide range of policy-related parameter combinations that produced these distributions, we found that low-emission simulations were characterized by both high carbon prices and low costs of non-fossil fuel energy sources, suggesting the importance of these two policy levers in particular for avoiding dangerous levels of climate warming. With this analysis we demonstrate a probabilistic approach to the challenge of identifying strategies for limiting cumulative carbon emissions and assessing likelihoods of surpassing dangerous temperature thresholds.

  15. The Competition Between a Localised and Distributed Source of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameter in this configuration is the ratio Ψ of the buoyancy fluxes of the distributed and local sources. The steady-state dynamics of the flow depend heavily on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if it were driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers and the location of the interface separating the two-layer stratification are obtainable from the model. To validate the theoretical model, small-scale laboratory experiments were carried out. Water was used as the working medium, with buoyancy driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  16. Assessing the short-term clock drift of early broadband stations with burst events of the 26 s persistent and localized microseism

    NASA Astrophysics Data System (ADS)

    Xie, J.; Ni, S.; Chu, R.; Xia, Y.

    2017-12-01

    Accurate seismometer clocks play an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 second, especially in the early days of the global seismic network. The 26 s Persistent Localized (PL) microseism source in the Gulf of Guinea sometimes excites strong and coherent signals and can be used as a repeating source for assessing the stability of seismometer clocks. Taking station GSC/TS in southern California, USA as an example, the 26 s PL signal can easily be observed in the ambient Noise Cross-correlation Function (NCF) between GSC/TS and a remote station. The variation of the travel time of this 26 s signal in the NCF is used to infer the clock error. A drastic clock error is detected during June 1992. This short-term clock error, with a magnitude of ±25 s, is confirmed by both teleseismic and local earthquake records. Using the 26 s PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual NCF of short-period microseisms (<20 s) may be less effective due to attenuation over long inter-station distances. However, this method suffers from a cycle-ambiguity problem and should be verified with teleseismic/local P waves. A location change of the 26 s PL source may also influence the measured clock drift; using regional stations with stable clocks, we estimate the possible location change of the source.
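
    A toy version of the measurement, on synthetic data: a Gaussian-windowed burst of the 26 s signal is cross-correlated with a copy delayed by a hypothetical 25 s clock error, and the lag of the correlation peak recovers the shift. As the comments note, a 26 s carrier makes the answer ambiguous by whole cycles, which is why independent verification is needed.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.arange(0.0, 3600.0)               # 1 Hz long-period samples

    # Toy burst of the 26 s signal at a reference station, plus a copy
    # delayed by a hypothetical 25 s clock error at the target station.
    burst = np.exp(-0.5 * ((t - 1800.0) / 60.0) ** 2) * np.sin(2 * np.pi * t / 26.0)
    remote = burst + 0.05 * rng.normal(size=t.size)
    target = np.roll(burst, 25) + 0.05 * rng.normal(size=t.size)

    # The lag of the cross-correlation peak estimates the clock error.
    # With a 26 s carrier the estimate can jump by whole cycles, hence
    # the verification against teleseismic/local P arrivals in the study.
    cc = np.correlate(target, remote, mode="full")
    lag = int(np.argmax(cc)) - (t.size - 1)
    print(f"estimated clock error: {lag} s")
    ```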

  17. The importance of geospatial data to calculate the optimal distribution of renewable energies

    NASA Astrophysics Data System (ADS)

    Díaz, Paula; Masó, Joan

    2013-04-01

    Especially during the last three years, renewable energies have been revolutionizing international trade while geographically diversifying markets. Renewables are experiencing rapid growth in power generation. According to REN21 (2012), during the last six years total installed renewables capacity grew at record rates. In 2011, the EU raised its share of global new renewables capacity to 44%. The BRICS nations (Brazil, Russia, India and China) accounted for about 26% of the global total. Moreover, almost twenty countries in the Middle East, North Africa, and sub-Saharan Africa currently have active renewables markets. Energy return ratios are commonly used to calculate the efficiency of traditional energy sources. The Energy Return On Investment (EROI) compares the energy returned by a certain source with the energy used to obtain it (explore, find, develop, produce, extract, transform, harvest, grow, process, etc.). These energy return ratios have demonstrated a general decrease in the efficiency of fossil fuels and gas. When considering the limits on the quantity of energy produced by some sources, the energy invested to obtain it, and the difficulty of finding optimal locations for renewables farms (e.g. due to an ever-increasing scarcity of appropriate land), the EROI becomes relevant to renewables. A spatialized EROI, which uses variables with spatial distribution, enables finding the optimal location in terms of both energy production and associated costs. It is important to note that the spatialized EROI can be mathematically formalized and calculated in the same way for different locations, reproducibly. This means that, having established a concrete EROI methodology, it is possible to generate a continuous map highlighting the most productive zones for renewable energies in terms of maximum energy return at minimum cost. Relevant variables for calculating the real energy invested are the grid connections between production and consumption, transmission losses and the efficiency of the grid. If appropriate, the spatialized EROI analysis could include any indirect costs that the energy source might produce, such as visual impacts, food market impacts and land prices. Such a spatialized study requires GIS tools to compute operations using both spatial relations, such as distances and frictions, and topological relations, such as connectivity, which are not easy to consider in the way EROI is currently calculated. From a broader perspective, applying the EROI to various energy sources allows a quantitative comparative analysis of the efficiency of obtaining different sources. The increase in energy investment is also accompanied by an increase in manufacturing and policy activity. Further efforts will be necessary in the coming years to provide energy access through smart grids and to determine the efficient areas in terms of cost of production and energy returned on investment. The authors present the EROI as a reliable solution for addressing the input-output energy relationship and increasing the efficiency of energy investment, taking the appropriate geospatial variables into account. The spatialized EROI can be a useful tool for decision makers when designing energy policies and programming energy funds, because it provides an objective demonstration of which energy sources are more convenient in terms of costs and efficiency.
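
    A minimal sketch of what "spatializing" the EROI means in practice: per-cell rasters of energy returned and energy invested divide into an EROI map whose maximum flags candidate sites. All values are hypothetical, and the distance-to-grid penalty is a crude stand-in for the transmission-loss and connection-cost terms a real study would model explicitly.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical per-cell rasters (lifetime kWh, illustrative values).
    energy_out = rng.uniform(2e6, 8e6, size=(100, 100))   # resource potential
    energy_build = np.full((100, 100), 1e6)               # construction input
    dist_to_grid = rng.uniform(0, 50, size=(100, 100))    # km to connection

    # Spatialized EROI: energy returned over energy invested, where the
    # investment grows with grid distance (toy transmission penalty).
    energy_in = energy_build + 2e4 * dist_to_grid
    eroi = energy_out / energy_in

    best = np.unravel_index(np.argmax(eroi), eroi.shape)
    print(f"max EROI {eroi[best]:.1f} at cell {best}")
    ```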

  18. Sources, Transport, and Climate Impacts of Biomass Burning Aerosols

    NASA Technical Reports Server (NTRS)

    Chin, Mian

    2010-01-01

    In this presentation, I will first discuss the fundamentals of modeling biomass burning emissions of aerosols, then show results for GOCART model simulated biomass burning aerosols. I will compare the model results with satellite and ground-based network observations in terms of total aerosol optical depth, aerosol absorption optical depth, and vertical distributions. Finally, the long-range transport of biomass burning aerosols and their climate effects will be addressed. I will also discuss the uncertainties associated with modeling and observations of biomass burning aerosols.

  19. Informal Taxation*

    PubMed Central

    Olken, Benjamin A.; Singhal, Monica

    2011-01-01

    Informal payments are a frequently overlooked source of local public finance in developing countries. We use microdata from ten countries to establish stylized facts on the magnitude, form, and distributional implications of this “informal taxation.” Informal taxation is widespread, particularly in rural areas, with substantial in-kind labor payments. The wealthy pay more, but pay less in percentage terms, and informal taxes are more regressive than formal taxes. Failing to include informal taxation underestimates household tax burdens and revenue decentralization in developing countries. We discuss various explanations for and implications of these observed stylized facts. PMID:22199993

  20. Studies of the Intrinsic Complexities of Magnetotail Ion Distributions: Theory and Observations

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, Maha

    1998-01-01

    This year we have studied the relationship between the structure seen in measured distribution functions and the detailed magnetospheric configuration. Results from our recent studies using time-dependent large-scale kinetic (LSK) calculations are used to infer the sources of the ions in the velocity distribution functions measured by a single spacecraft (Geotail). Our results strongly indicate that the different ion sources and acceleration mechanisms producing a measured distribution function can explain this structure. Moreover, individual structures within distribution functions were traced back to single sources. We also confirmed the fractal nature of ion distributions.

  1. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 10 (Energy), 2010-01-01 edition. Specific Domestic Licenses to Manufacture or Transfer Certain Items Containing Byproduct Material, Generally Licensed Items, § 32.74: Manufacture and distribution of sources or devices containing byproduct material for medical use.

  2. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from broad information gathering. The large number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.
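
    As an illustration of the screening step, the sketch below computes one-at-a-time elementary effects for a placeholder response function standing in for an expensive accident simulation; the function, parameter ranges, and perturbation size are all hypothetical.

    ```python
    import numpy as np

    def dust_source_term(x):
        # Placeholder response: stand-in for an expensive accident code.
        return 3.0 * x[0] + 0.1 * x[1] + 2.0 * x[0] * x[2]

    rng = np.random.default_rng(6)
    n_params = 4
    base = rng.uniform(0.2, 0.8, size=n_params)

    # One-at-a-time elementary effects: perturb each parameter in turn
    # and rank its influence, a cheap screen before fuller sensitivity
    # and uncertainty analyses on the retained parameters.
    delta = 0.1
    effects = []
    for i in range(n_params):
        x = base.copy()
        x[i] += delta
        effects.append(abs(dust_source_term(x) - dust_source_term(base)) / delta)

    ranking = np.argsort(effects)[::-1]
    print("parameter influence ranking:", ranking, np.round(effects, 2))
    ```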

  3. Seasonal variability of carbon in humic-like matter of ambient size-segregated water soluble organic aerosols from urban background environment

    NASA Astrophysics Data System (ADS)

    Frka, Sanja; Grgić, Irena; Turšič, Janja; Gini, Maria I.; Eleftheriadis, Konstantinos

    2018-01-01

    Long-term measurements of carbon in HUmic-LIke Substances (HULIS-C) in ambient size-segregated water-soluble organic aerosols were performed using a ten-stage low-pressure Berner impactor from December 2014 to November 2015 at an urban background site in Ljubljana, Slovenia. The mass size distribution patterns of the measured species (PM - particulate matter, WSOC - water-soluble organic carbon, and HULIS-C) were generally tri-modal (primarily accumulation mode) in all seasons, but with significant seasonal variability. HULIS-C was found to have distributions similar to WSOC, with nearly the same mass median aerodynamic diameters (MMADs), except in winter, when the HULIS-C size distribution was bimodal. In autumn and winter, the dominant accumulation mode with MMAD at ca. 0.65 μm contributed 83 and 97% of the total HULIS-C concentration, respectively. HULIS-C accounted for a large fraction of WSOC, averaging more than 50% in autumn and 40% in winter. In contrast, during warmer periods the contributions of the ultrafine (27% in summer) and coarse modes (27% in spring) were also substantial. Based on the mass size distribution characteristics, HULIS-C was found to originate from various sources. In colder seasons, wood burning was confirmed as the most important HULIS source; secondary formation in atmospheric liquid water also contributed significantly, as revealed by the MMADs of the accumulation mode shifting to larger sizes. The distinct difference between the spring and summer ratios of HULIS-C/WSOC in fine particles (ca. 50% in spring, but only 10% in summer) indicated different sources and chemical composition of WSOC in summer (e.g., SOA formation from biogenic volatile organic compounds (BVOCs) via photochemistry). The increased amount of HULIS-C in the ultrafine mode in summer suggests an important contribution from new particle formation during higher BVOC emissions, owing to the vicinity of a mixed deciduous forest; the higher contribution of HULIS-C in the coarse mode indicates that, besides soil erosion, other sources such as pollen and plant fragments could also be responsible.

  4. The impact of a large penetration of intermittent sources on the power system operation and planning

    NASA Astrophysics Data System (ADS)

    Ausin, Juan Carlos

    This research investigated the impact on the power system of a large penetration of intermittent renewable sources, mainly wind and photovoltaic generation. Currently, electrical utilities treat wind and PV plants as sources of negative demand, that is to say, they have no control over the power output produced. In this way, the grid absorbs all the power fluctuations as if they came from a common load. With the level of wind penetration growing so quickly, there is increasing concern among utilities and grid operators, as they will have to deal with a much higher level of fluctuation. In the same way, the potential cost reduction of PV technologies suggests that a similar development may be expected for solar production in the mid-term. The first part of the research focused on issues that affect utility planning and reinforcement decision making. Although DG is located mainly on the distribution network, a large penetration may alter the flows not only on the distribution lines, but also on the transmission system and through the transmission-distribution interfaces. The optimal capacity and production costs for the UK transmission network were calculated for several combinations of load profiles and typical wind/PV output scenarios. A full economic analysis is developed, showing the benefits and disadvantages that a large penetration of these distributed generators may have for transmission system operator reinforcement strategies. Closely related to planning factors are institutional, regulatory, and economic considerations, such as transmission pricing, which may hamper the integration of renewable energy technologies into the electric utility industry. The second part of the research concerned the impact of intermittent renewable energy technologies on the second-by-second, minute-by-minute, and half-hour-by-half-hour operation of power systems. If a large integration of these new generators partially replaces the conventional rotating machines, the aggregate fluctuation starts to become an important factor and should be taken into account in the calculation of balancing requirements. Additional balancing requirements would increase the total balancing cost, and this could stop the future development of intermittent sources.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeze, R.A.; McWhorter, D.B.

    Many emerging remediation technologies are designed to remove contaminant mass from source zones at DNAPL sites in response to regulatory requirements. There is often concern in the regulated community as to whether mass removal actually reduces risk, or whether the small risk reductions achieved warrant the large costs incurred. This paper sets out a proposed framework for quantifying the degree to which risk is reduced as mass is removed from DNAPL source areas in shallow, saturated, low-permeability media. Risk is defined in terms of meeting an alternate concentration limit (ACL) at a compliance well in an aquifer underlying the source zone. The ACL is back-calculated from a carcinogenic health-risk characterization at a downgradient water-supply well. Source-zone mass-removal efficiencies are heavily dependent on the distribution of mass between media (fractures, matrix) and phase (aqueous, sorbed, NAPL). Due to the uncertainties in currently available technology performance data, the scope of the paper is limited to developing a framework for generic technologies rather than making specific risk-reduction calculations for individual technologies. Despite the qualitative nature of the exercise, results imply that very high total mass-removal efficiencies are required to achieve significant long-term risk reduction with technology applications of finite duration. This paper is not an argument for no action at contaminated sites. Rather, it provides support for the conclusions of Cherry et al. (1992) that the primary goal of current remediation should be short-term risk reduction through containment, with the aim to pass on to future generations site conditions that are well-suited to the future applications of emerging technologies with improved mass-removal capabilities.

  6. A methodology for efficiency optimization of betavoltaic cell design using an isotropic planar source having an energy dependent beta particle distribution.

    PubMed

    Theirrattanakul, Sirichai; Prelas, Mark

    2017-09-01

    Nuclear batteries based on silicon carbide betavoltaic cells have been studied extensively in the literature. This paper describes an analysis of design parameters which can be applied to a variety of materials but is specific to silicon carbide. In order to optimize the interface between a beta source and a silicon carbide p-n junction, it is important to account for the specific isotope, the angular distribution of the beta particles from the source, the energy distribution of the source, and the geometrical aspects of the interface between the source and the transducer. In this work, both the angular and energy distributions of the beta particles are modeled using a thin planar beta source (e.g., H-3, Ni-63, S-35, Pm-147, Sr-90, and Y-90) with GEANT4. Previous studies of betavoltaics with various source isotopes have shown that Monte Carlo based codes such as MCNPX, GEANT4 and Penelope generate similar results. GEANT4 is chosen because it has important strengths in the treatment of electron energies below one keV and is widely available. The model demonstrates the effects of the angular distribution, the maximum beta particle energy and the energy distribution of the beta source on the betavoltaic, and it is useful in determining the spatial profile of the power deposition in the cell.
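
    A minimal sketch of the source model, assuming a simplified allowed-transition spectrum shape (the Coulomb/Fermi correction applied by full transport codes is omitted): emission energies are rejection-sampled and directions drawn isotropically over the half-space facing the cell, using Ni-63's 66.9 keV endpoint.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    E_max = 66.9   # keV, Ni-63 beta endpoint energy

    def spectrum(E):
        # Simplified allowed-shape beta spectrum ~ sqrt(E) * (E_max - E)**2;
        # the Fermi (Coulomb) correction used by transport codes is omitted.
        return np.sqrt(E) * (E_max - E) ** 2

    # Rejection sampling of emission energies; this shape peaks at E_max/5.
    f_max = spectrum(E_max / 5.0)
    E = rng.uniform(0.0, E_max, size=200_000)
    energies = E[rng.uniform(0.0, f_max, size=E.size) < spectrum(E)]

    # Isotropic emission toward the cell: uniform cos(theta) and phi over
    # the half-space facing the p-n junction.
    cos_t = rng.uniform(0.0, 1.0, size=energies.size)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=energies.size)

    print(f"mean emitted energy: {energies.mean():.1f} keV")  # ~E_max/3 here
    ```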

  7. Source structure errors in radio-interferometric clock synchronization for ten measured distributions

    NASA Technical Reports Server (NTRS)

    Thomas, J. B.

    1981-01-01

    The effects of source structure on radio interferometry measurements were investigated. The brightness distribution measurements for ten extragalactic sources were analyzed. Significant results are reported.

  8. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.

  9. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-04-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
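
    For orientation, the sketch below builds the basic observable of the method, a noise correlation function, from synthetic data: two stations record a common noise source with a 12 s travel-time difference, and the correlation peak marks the coherent arrival. The source model and delay are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 86400                         # one "day" of 1 Hz samples

    # A common noise wavefield recorded at two stations; station 2 sees
    # it 12 s later (toy stand-in for the inter-station Green function).
    src = np.convolve(rng.normal(size=n), np.ones(5) / 5.0, mode="same")
    s1 = src + 0.5 * rng.normal(size=n)
    s2 = np.roll(src, 12) + 0.5 * rng.normal(size=n)

    # One-day noise correlation function over modest lags; stacking many
    # days would further suppress the incoherent noise terms.
    max_lag = 60
    lags = np.arange(-max_lag, max_lag + 1)
    ncf = np.array([np.dot(s2, np.roll(s1, lag)) for lag in lags])
    print("coherent arrival at lag:", lags[np.argmax(ncf)], "s")
    ```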

  10. Quantitative Determination of Isotope Ratios from Experimental Isotopic Distributions

    PubMed Central

    Kaur, Parminder; O’Connor, Peter B.

    2008-01-01

    Isotope variability due to natural processes provides important information for studying a variety of complex natural phenomena, from the origins of a particular sample to the traces of biochemical reaction mechanisms. These measurements require high-precision determination of the isotope ratios of the element involved. Isotope Ratio Mass Spectrometers (IRMS) are widely employed tools for such high-precision analysis, but they have some limitations. This work aims at overcoming the limitations inherent to IRMS by estimating the elemental isotopic abundance from the experimental isotopic distribution. In particular, a computational method has been derived which allows the calculation of 13C/12C ratios from whole isotopic distributions, given certain caveats, and these calculations are applied to several cases to demonstrate their utility. The limitations of the method in terms of the required number of ions and S/N ratio are discussed. For high-precision estimates of isotope ratios, this method requires very precise measurement of the experimental isotopic distribution abundances, free from any artifacts introduced by noise, sample heterogeneity, or other experimental sources. PMID:17263354
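
    A minimal sketch of the idea for the simplest possible case, assuming the A+1 peak is dominated by 13C (contributions from 2H, 15N and 17O are ignored): for n carbons the isotopic distribution is binomial, so the ratio follows from the first two peak intensities.

    ```python
    import numpy as np

    def c13_ratio_from_peaks(I0, I1, n_carbons):
        # For n carbons with 13C probability p, the distribution is
        # binomial, so I1/I0 = n*p/(1-p); dividing by n gives the
        # per-carbon ratio r = p/(1-p), i.e., 13C/12C.
        return (I1 / I0) / n_carbons

    # Hypothetical molecule with 50 carbons: simulate the monoisotopic
    # and A+1 peaks at natural abundance and recover the ratio.
    n, p = 50, 0.0107
    I0 = (1 - p) ** n
    I1 = n * p * (1 - p) ** (n - 1)
    print(f"13C/12C estimate: {c13_ratio_from_peaks(I0, I1, n):.5f}")  # ~0.0108
    ```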

  11. Community-based distributive medical education: Advantaging society

    PubMed Central

    Farnsworth, Tracy J.; Frantz, Alan C.; McCune, Ronald W.

    2012-01-01

    This paper presents a narrative summary of an increasingly important trend in medical education by addressing the merits of community-based distributive medical education (CBDME). This is a relatively new and compelling model for teaching and training physicians in a manner that may better meet societal needs and expectations. Issues and trends regarding the growing shortage and imbalanced distribution of physicians in the USA are addressed, including the role of international medical graduates. A historical overview of costs and funding sources for medical education is presented, as well as initiatives to increase the training and placement of physicians cost-effectively through new and expanded medical schools, two- and four-year regional or branch campuses, and CBDME. Our research confirms that although medical schools have responded to the Association of American Medical Colleges' calls for higher student enrollment and to societal concerns about the distribution and placement of physicians, significant opportunities for improvement remain. Finally, the authors recommend that further research be conducted to guide policy on incentives for physicians to locate in underserved communities, and to determine the cost-effectiveness of the CBDME model in both the near and long terms. PMID:22355240

  12. Keeping an eye on the ring: COMS plaque loading optimization for improved dose conformity and homogeneity.

    PubMed

    Gagne, Nolan L; Cutright, Daniel R; Rivard, Mark J

    2012-09-01

    To improve tumor dose conformity and homogeneity for COMS plaque brachytherapy by investigating the dosimetric effects of varying component source ring radionuclides and source strengths. The MCNP5 Monte Carlo (MC) radiation transport code was used to simulate plaque heterogeneity-corrected dose distributions for individually-activated source rings of 14, 16 and 18 mm diameter COMS plaques, populated with (103)Pd, (125)I and (131)Cs sources. Ellipsoidal tumors were contoured for each plaque size and MATLAB programming was developed to generate tumor dose distributions for all possible ring weighting and radionuclide permutations for a given plaque size and source strength resolution, assuming a 75 Gy apical prescription dose. These dose distributions were analyzed for conformity and homogeneity and compared to reference dose distributions from uniformly-loaded (125)I plaques. The most conformal and homogeneous dose distributions were reproduced within a reference eye environment to assess organ-at-risk (OAR) doses in the Pinnacle(3) treatment planning system (TPS). The gamma-index analysis method was used to quantitatively compare MC and TPS-generated dose distributions. Concentrating > 97% of the total source strength in a single or pair of central (103)Pd seeds produced the most conformal dose distributions, with tumor basal doses a factor of 2-3 higher and OAR doses a factor of 2-3 lower than those of corresponding uniformly-loaded (125)I plaques. Concentrating 82-86% of the total source strength in peripherally-loaded (131)Cs seeds produced the most homogeneous dose distributions, with tumor basal doses 17-25% lower and OAR doses typically 20% higher than those of corresponding uniformly-loaded (125)I plaques. Gamma-index analysis found > 99% agreement between MC and TPS dose distributions. A method was developed to select intra-plaque ring radionuclide compositions and source strengths to deliver more conformal and homogeneous tumor dose distributions than uniformly-loaded (125)I plaques. This method may support coordinated investigations of an appropriate clinical target for eye plaque brachytherapy.
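    The following toy sketch illustrates only the search strategy described above, enumerating ring weightings under an apical prescription constraint; the geometry, attenuation constants and scoring are invented stand-ins, not the paper's MCNP5 dosimetry.

```python
# Toy illustration of the ring-weighting search (not MCNP5 dosimetry):
# each ring is approximated by inverse-square falloff with a crude
# radionuclide-specific exponential attenuation. Loadings that deliver
# 75 Gy to the tumor apex are scored by basal dose, a rough homogeneity
# surrogate. All constants are invented for illustration.
import itertools
import numpy as np

mu = {"Pd103": 0.8, "I125": 0.35, "Cs131": 0.30}   # cm^-1, illustrative only
ring_radii = np.array([0.0, 0.3, 0.6])             # cm: center + two rings
apex, base = 0.55, 0.1                             # cm above plaque plane

def dose_per_unit_strength(radionuclide, z):
    r = np.sqrt(ring_radii**2 + z**2)              # source-to-point distance
    return np.exp(-mu[radionuclide] * r) / r**2    # inverse square * atten.

best = None
for nuclide in mu:
    k_apex = dose_per_unit_strength(nuclide, apex)
    k_base = dose_per_unit_strength(nuclide, base)
    # enumerate relative ring weights on a coarse grid
    for w in itertools.product(np.linspace(0, 1, 11), repeat=3):
        w = np.array(w)
        if w.sum() == 0:
            continue
        scale = 75.0 / (w @ k_apex)                # meet apical prescription
        basal = scale * (w @ k_base)
        if best is None or basal < best[0]:
            best = (basal, nuclide, w)

print("lowest basal dose %.1f Gy with %s, ring weights %s" % best)
```

    Even this crude model favors peripheral loading of the lower-attenuation radionuclide when basal dose is minimized, qualitatively echoing the homogeneity result reported in the abstract.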

  13. Time-domain diffuse optics: towards next generation devices

    NASA Astrophysics Data System (ADS)

    Contini, Davide; Dalla Mora, Alberto; Arridge, Simon; Martelli, Fabrizio; Tosi, Alberto; Boso, Gianluca; Farina, Andrea; Durduran, Turgut; Martinenghi, Edoardo; Torricelli, Alessandro; Pifferi, Antonio

    2015-07-01

    Diffuse optics is a powerful tool for clinical applications ranging from oncology to neurology, as well as for molecular imaging and the quality assessment of food, wood and pharmaceuticals. We show that, ideally, time-domain diffuse optics can give higher contrast and higher penetration depth with respect to standard technology. In order to fully exploit the advantages of a time-domain system, a distribution of sources and detectors with fast gating capabilities covering the whole sample surface is needed. Here, we present the building block for such a system. This basic component is a miniaturised source-detector pair embedded in the probe, based on pulsed Vertical-Cavity Surface-Emitting Lasers (VCSELs) as sources and Single-Photon Avalanche Diodes (SPADs) or Silicon Photomultipliers (SiPMs) as detectors. The possibility to miniaturise and dramatically increase the number of source-detector pairs opens the way to an advancement of diffuse optics, both in terms of improved performance and of the exploration of new applications. Furthermore, the availability of compact devices of reduced size and cost can boost the application of this technique.

  14. Singular values behaviour optimization in the diagnosis of feed misalignments in radioastronomical reflectors

    NASA Astrophysics Data System (ADS)

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro

    2016-07-01

    The communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimizing the number and the distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, in order to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions and simplify the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized thanks to a Principal Component Analysis. The number and the positions of the field samples are then determined by optimizing the Singular Values behaviour of the relevant operator.

  15. The Outer Disks of Herbig Stars From the UV to NIR

    NASA Technical Reports Server (NTRS)

    Grady, C.; Fukagawa, M.; Maruta, Y.; Ohta, Y.; Wisniewski, J.; Hashimoto, J.; Okamoto, Y.; Momose, M.; Currie, T.; Mcelwain, M.; et al.

    2014-01-01

    Spatially-resolved imaging of Herbig stars and related objects began with HST, but intensified with commissioning of high-contrast imagers on 8-m class telescopes. The bulk of the data taken from the ground have been polarized intensity imagery at H-band, with the majority of the sources observed as part of the Strategic Exploration of Exoplanets and Disks with Subaru (SEEDS) survey. Sufficiently many systems have been imaged that we discuss disk properties in scattered, polarized light in terms of groups defined by the IR spectral energy distribution. We find novel phenomena in many of the disks, including spiral density waves, and discuss the disks in terms of clearing mechanisms. Some of the disks have sufficient data to map the dust and gas components, including water ice dissociation products.

  16. Coupling Aggressive Mass Removal with Microbial Reductive Dechlorination for Remediation of DNAPL Source Zones: A Review and Assessment

    PubMed Central

    Christ, John A.; Ramsburg, C. Andrew; Abriola, Linda M.; Pennell, Kurt D.; Löffler, Frank E.

    2005-01-01

    The infiltration of dense non-aqueous-phase liquids (DNAPLs) into the saturated subsurface typically produces a highly contaminated zone that serves as a long-term source of dissolved-phase groundwater contamination. Applications of aggressive physical–chemical technologies to such source zones may remove > 90% of the contaminant mass under favorable conditions. The remaining contaminant mass, however, can create a rebounding of aqueous-phase concentrations within the treated zone. Stimulation of microbial reductive dechlorination within the source zone after aggressive mass removal has recently been proposed as a promising staged-treatment remediation technology for transforming the remaining contaminant mass. This article reviews available laboratory and field evidence that supports the development of a treatment strategy that combines aggressive source-zone removal technologies with subsequent promotion of sustained microbial reductive dechlorination. Physical–chemical source-zone treatment technologies compatible with posttreatment stimulation of microbial activity are identified, and studies examining the requirements and controls (i.e., limits) of reductive dechlorination of chlorinated ethenes are investigated. Illustrative calculations are presented to explore the potential effects of source-zone management alternatives. Results suggest that, for the favorable conditions assumed in these calculations (i.e., statistical homogeneity of aquifer properties, known source-zone DNAPL distribution, and successful bioenhancement in the source zone), source longevity may be reduced by as much as an order of magnitude when physical–chemical source-zone treatment is coupled with reductive dechlorination. PMID:15811838

  17. Molecular fingerprinting of particulate organic matter as a new tool for its source apportionment: changes along a headwater drainage in coarse, medium and fine particles as a function of rainfalls

    NASA Astrophysics Data System (ADS)

    Jeanneau, Laurent; Rowland, Richard; Inamdar, Shreeram

    2018-02-01

    Tracking the sources of particulate organic matter (POM) exported from catchments is important for understanding the transfer of energy from soils to oceans. The suitability of investigating the molecular composition of POM by thermally assisted hydrolysis and methylation using tetramethylammonium hydroxide, directly coupled to gas chromatography and mass spectrometry, is presented. The results of this molecular-fingerprint approach were compared with previously published elemental (% C, % N) and isotopic data (δ13C, δ15N) acquired in a nested headwater catchment in the Piedmont region, eastern United States of America (12 and 79 ha). The concordance between these results highlights the effectiveness of this molecular tool as a valuable method for source fingerprinting of POM. It emphasizes litter as the main source of exported POM at the upstream location (80 ± 14 %), with an increasing proportion of streambed (SBed) sediment remobilization downstream (42 ± 29 %), specifically during events characterized by high rainfall amounts. At the upstream location, the source of POM seems to be controlled by the maximum and median hourly rainfall intensity. An added value of this method is the ability to directly investigate chemical biomarkers and to mine their distributions in terms of the biogeochemical functioning of an ecosystem. In this catchment, the distributions of plant-derived biomarkers characterizing lignin, cutin and suberin inputs were similar in SBed and litter, while the proportion of microbial markers was 4 times higher in SBed than in litter. These results indicate that SBed OM was largely plant litter that has been processed by the aquatic microbial community.

  18. The Seismicity of the Central Apennines Region Studied by Means of a Physics-Based Earthquake Simulator

    NASA Astrophysics Data System (ADS)

    Console, R.; Vannoli, P.; Carluccio, R.

    2016-12-01

    The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated in one fault are allowed to expand into neighboring faults, even those belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model upon which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems that are recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm provides typical features in the time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) over the territory under investigation.
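    Two of the catalog diagnostics mentioned above can be illustrated with a short script on synthetic stand-in data (the simulated catalog itself is not reproduced here): a maximum-likelihood Gutenberg-Richter b-value and the coefficient of variation Cv of inter-event times.

```python
# Minimal sketch on synthetic stand-in data, not the DISS-based catalog.
import numpy as np

rng = np.random.default_rng(1)
# Gutenberg-Richter magnitudes with b = 1 above M 4.0
mags = 4.0 + rng.exponential(1.0 / np.log(10), 500_000)

# Aki (1965) maximum-likelihood b-value estimate
m_min = 4.0
b = np.log10(np.e) / (mags.mean() - m_min)
print(f"b-value: {b:.2f}")

# Pseudo-periodic recurrence: Cv = std/mean of inter-event times.
# Cv ~ 0.3-0.6 indicates quasi-periodicity; Cv = 1 is a Poisson process.
times = np.cumsum(rng.weibull(2.5, 200))   # Weibull recurrence gives Cv < 1
dt = np.diff(times)
print(f"Cv: {dt.std() / dt.mean():.2f}")
```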

  19. Effect of weak measurement on entanglement distribution over noisy channels.

    PubMed

    Wang, Xin-Wen; Yu, Sixia; Zhang, Deng-Yu; Oh, C H

    2016-03-03

    Being able to implement effective entanglement distribution in noisy environments is a key step towards practical quantum communication, and long-term efforts have been made to develop it. Recently, it has been found that the null-result weak measurement (NRWM) can be used to probabilistically enhance the entanglement of a single copy of an amplitude-damped entangled state. This paper investigates remote distributions of bipartite and multipartite entangled states in the amplitude-damping environment by combining NRWMs and entanglement distillation protocols (EDPs). We show that the NRWM has no positive effect on the distribution of bipartite maximally entangled states and multipartite Greenberger-Horne-Zeilinger states, although it is able to increase the amount of entanglement of each source state (noisy entangled state) of the EDPs with a certain probability. However, we find that the NRWM can contribute to remote distributions of multipartite W states. We demonstrate that the NRWM can not only reduce the fidelity thresholds for distillability of decohered W states, but also raise the distillation efficiencies of W states. Our results suggest a new idea for quantifying the ability of a local filtering operation to protect entanglement from decoherence.

  20. Effect of weak measurement on entanglement distribution over noisy channels

    PubMed Central

    Wang, Xin-Wen; Yu, Sixia; Zhang, Deng-Yu; Oh, C. H.

    2016-01-01

    Being able to implement effective entanglement distribution in noisy environments is a key step towards practical quantum communication, and long-term efforts have been made to develop it. Recently, it has been found that the null-result weak measurement (NRWM) can be used to probabilistically enhance the entanglement of a single copy of an amplitude-damped entangled state. This paper investigates remote distributions of bipartite and multipartite entangled states in the amplitude-damping environment by combining NRWMs and entanglement distillation protocols (EDPs). We show that the NRWM has no positive effect on the distribution of bipartite maximally entangled states and multipartite Greenberger-Horne-Zeilinger states, although it is able to increase the amount of entanglement of each source state (noisy entangled state) of the EDPs with a certain probability. However, we find that the NRWM can contribute to remote distributions of multipartite W states. We demonstrate that the NRWM can not only reduce the fidelity thresholds for distillability of decohered W states, but also raise the distillation efficiencies of W states. Our results suggest a new idea for quantifying the ability of a local filtering operation to protect entanglement from decoherence. PMID:26935775

  1. Descriptive statistics and spatial distributions of geochemical variables associated with manganese oxide-rich phases in the northern Pacific

    USGS Publications Warehouse

    Botbol, Joseph Moses; Evenden, Gerald Ian

    1989-01-01

    Tables, graphs, and maps are used to portray the frequency characteristics and spatial distribution of manganese oxide-rich phase geochemical data, to characterize the northern Pacific in terms of publicly available nodule geochemical data, and to develop data portrayal methods that will facilitate data analysis. Source data are a subset of the Scripps Institution of Oceanography's Sediment Data Bank. The study area is bounded by 0° N., 40° N., 120° E., and 100° W. and is arbitrarily subdivided into 14 20° × 20° geographic subregions. Frequency distributions of trace metals characterized in the original raw data are graphed as ogives, and salient parameters are tabulated. All variables are transformed to enrichment values relative to the median concentration within their host subregions. Scatter plots of all pairs of original variables and their enrichment transforms are provided as an aid to the interpretation of correlations between variables. Gridded spatial distributions of all variables are portrayed as gray-scale maps. The use of tables and graphs to portray frequency statistics and gray-scale maps to portray spatial distributions is an effective way to prepare for and facilitate multivariate data analysis.

  2. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  3. Analysis of the Source System of Nantun Group in Huhehu Depression of Hailar Basin

    NASA Astrophysics Data System (ADS)

    Li, Yue; Li, Junhui; Wang, Qi; Lv, Bingyang; Zhang, Guannan

    2017-10-01

    The Huhehu Depression will be a new exploration frontier in the Hailar Basin, but at present it remains at a low level of exploration. Little research has addressed the source system of the Nantun Group, so a fine depiction of the source system would be significant for reconstructing the sedimentary system, delineating the reservoir distribution and predicting favorable areas. This paper comprehensively uses several methods, such as ancient landform analysis, light and heavy mineral assemblages, and seismic reflection characteristics, to study the source system of the Nantun Group in detail from different views and at different levels. The results show that the source system of the Huhehu Depression comes from the Xilinbeir bulge to the east and the Bayan Mountain uplift to the west, which surround the basin. The slope belt is the main source, and the southern bulge is the secondary source. The distribution of the source system determines the distribution of the sedimentary system and the regularity of the distribution of sand bodies.

  4. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported on an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), that inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light of the standard DA is analogously approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and is further applied to image reconstruction in a Laminar Optical Tomography system.
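    A minimal sketch of the virtual-source idea, assuming an infinite homogeneous medium and placeholder source depths and weights (the paper instead derives and fits these): the collimated beam is replaced by isotropic point sources on the incidence axis, whose diffusion-approximation fluences are superposed.

```python
# Sketch of the idea, not the paper's fitted 2VS-DA model.
import numpy as np

mua, musp = 0.01, 1.0                  # absorption, reduced scattering (mm^-1)
D = 1.0 / (3.0 * (mua + musp))         # diffusion coefficient (mm)
mu_eff = np.sqrt(mua / D)              # effective attenuation (mm^-1)

def fluence_point(r):
    """DA fluence of an isotropic point source in an infinite medium."""
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

def fluence_2vs(x, y, z, depths=(0.5, 2.0), weights=(0.6, 0.4)):
    """Superpose two virtual sources on the z (incidence) axis.
    Depths and weights are placeholders, not the paper's values."""
    total = 0.0
    for z0, w in zip(depths, weights):
        r = np.sqrt(x**2 + y**2 + (z - z0) ** 2)
        total += w * fluence_point(r)
    return total

# fluence 1 mm off-axis at 1 mm depth, i.e. in the near field where the
# single-source standard DA is least accurate
print(fluence_2vs(1.0, 0.0, 1.0))
```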

  5. Electron Cyclotron Resonance (ECR) Ion Source Development at the Holified Radioactive Ion Beam Facility

    NASA Astrophysics Data System (ADS)

    Bilheux, Hassina; Liu, Yuan; Alton, Gerald; Cole, John; Williams, Cecil; Reed, Charles

    2004-11-01

    Performance of ECR ion sources can be significantly enhanced by increasing the physical size of their ECR zones in relation to the size of their plasma volumes (spatial and frequency domain methods).^3-5 A 6 GHz, all-permanent-magnet ECR ion source with a large resonant plasma volume has been tested at ORNL.^6 The magnetic circuit can be configured to create both flat-β (volume) and conventional minimum-β (surface) resonance conditions. Direct comparisons of the performance of the two source types can therefore be made under similar operating conditions. In this paper, we clearly demonstrate that the flat-β source outperforms its minimum-β counterpart in terms of charge state distribution and intensity within a particular charge state. ^3 G.D. Alton, D.N. Smithe, Rev. Sci. Instrum. 65 (1994) 775. ^4 G.D. Alton et al., Rev. Sci. Instrum. 69 (1998) 2305. ^5 Z.Q. Xie, C.M. Lyneis, Rev. Sci. Instrum. 66 (1995) 4218. ^6 Y. Liu et al., Rev. Sci. Instrum. 69 (1998) 1311.

  6. Detecting fission from special nuclear material sources

    DOEpatents

    Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA

    2012-06-05

    A neutron detector system for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. The system includes a graphing component that displays the plot of the neutron distribution from the unknown source over a Poisson distribution and a plot of neutrons due to background or environmental sources. The system further includes a known neutron source placed in proximity to the unknown source to actively interrogate the unknown source in order to accentuate differences in neutron emission from the unknown source from Poisson distributions and/or environmental sources.
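    A rough sketch of the discrimination principle, not of the patented system: fission emits neutrons in correlated bursts, so counts in fixed time gates are over-dispersed relative to the Poisson distribution of a random background or (alpha,n) source; the Feynman-Y excess variance flags the grouped neutrons. The rates and the burst model below are invented for illustration.

```python
# Illustrative sketch: Poisson vs. over-dispersed (burst-like) counting.
import numpy as np

rng = np.random.default_rng(2)
n_gates, rate = 100_000, 3.0

poisson_counts = rng.poisson(rate, n_gates)      # non-fissile source

# crude fission stand-in: a Poisson number of correlated bursts per gate,
# each contributing several neutrons on average (compound Poisson)
bursts = rng.poisson(1.0, n_gates)
fission_counts = rng.poisson(3.0 * bursts)

def feynman_y(counts):
    """Excess variance relative to Poisson: Y = var/mean - 1."""
    return counts.var() / counts.mean() - 1.0

print(f"random source  Y = {feynman_y(poisson_counts):+.3f}")   # ~ 0
print(f"fissile source Y = {feynman_y(fission_counts):+.3f}")   # > 0
```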

  7. Two families of astrophysical diverging lens models

    NASA Astrophysics Data System (ADS)

    Er, Xinzhong; Rogers, Adam

    2018-03-01

    In the standard gravitational lensing scenario, rays from a background source are bent in the direction of a foreground lensing mass distribution. Diverging lens behaviour produces deflections in the opposite sense to gravitational lensing, and is also of astrophysical interest. In fact, diverging lensing due to compact distributions of plasma has been proposed as an explanation for the extreme scattering events that produce frequency-dependent dimming of extragalactic radio sources, and may also be related to the refractive radio wave phenomena observed to affect the flux density of pulsars. In this work we study the behaviour of two families of astrophysical diverging lenses in the geometric optics limit: the power-law and the exponential plasma lenses. Generally, the members of these model families show distinct behaviour in terms of image formation and magnification; however, the inclusion of a finite core for certain power-law lenses can produce a caustic and critical curve morphology that is similar to that of the well-studied Gaussian plasma lens. Both model families can produce dual radial critical curves, a novel distinction from the tangential distortion usually produced by gravitational (converging) lenses. The deflection angle and magnification of a plasma lens vary with the observational frequency, producing wavelength-dependent magnifications that alter the amplitudes and the shape of the light curves. Thus, multiwavelength observations can be used to physically constrain the distribution of the electron density in such lenses.

  8. Mycobacterium avium complex--the role of potable water in disease transmission.

    PubMed

    Whiley, H; Keegan, A; Giglio, S; Bentham, R

    2012-08-01

    Mycobacterium avium complex (MAC) is a group of opportunistic pathogens of major public health concern. It is responsible for a wide spectrum of disease dependent on subspecies, route of infection and the patient's pre-existing conditions. Presently, there is limited research on the incidence of MAC infection that considers both pulmonary and other clinical manifestations. MAC has been isolated from various terrestrial and aquatic environments including natural waters, engineered water systems and soils. Identifying the specific environmental sources responsible for human infection is essential to minimizing disease prevalence. This paper reviews current literature and case studies regarding the wide spectrum of disease caused by MAC and the role of potable water in disease transmission. Potable water was recognized as a putative pathway for MAC infection. Contaminated potable water sources associated with human infection included warm water distribution systems, showers, faucets, household drinking water, swimming pools and hot tub spas. MAC can maintain long-term contamination of potable water sources through its high resistance to disinfectants, association with biofilms and intracellular parasitism of free-living protozoa. Further research is required to investigate the efficiency of water treatment processes against MAC, and into the construction and maintenance of warm water distribution systems and the role they play in MAC proliferation.

  9. Luminosity distance in ``Swiss cheese'' cosmology with randomized voids. II. Magnification probability distributions

    NASA Astrophysics Data System (ADS)

    Flanagan, Éanna É.; Kumar, Naresh; Wasserman, Ira; Vanderveld, R. Ali

    2012-01-01

    We study the fluctuations in luminosity distances due to gravitational lensing by large-scale (≳ 35 Mpc) structures, specifically voids and sheets. We use a simplified "Swiss cheese" model consisting of a ΛCDM Friedmann-Robertson-Walker background in which a number of randomly distributed nonoverlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz and Wald, which includes the effect of lensing shear. The standard deviation of this distribution is ~0.027 magnitudes and the mean is ~0.003 magnitudes for voids of radius 35 Mpc, sources at redshift z_s = 1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thickness of ~1 Mpc, the standard deviation is reduced to ~0.013 magnitudes. This standard deviation due to voids is a factor ~3 smaller than that due to galaxy-scale structures. We summarize our results in terms of a fitting formula that is accurate to ~20%, and also build a simplified analytic model that reproduces our results to within ~30%. Our model also allows us to explore the domain of validity of weak-lensing theory for voids. We find that for 35 Mpc voids, corrections to the dispersion due to lens-lens coupling are of order ~4%, and corrections due to shear are ~3%. Finally, we estimate the bias due to source-lens clustering in our model to be negligible.

  10. Widespread presence of naturally occurring perchlorate in high plains of Texas and New Mexico

    USGS Publications Warehouse

    Rajagopalan, S.; Anderson, T.A.; Fahlquist, L.; Rainwater, Ken A.; Ridley, M.; Jackson, W.A.

    2006-01-01

    Perchlorate (ClO4-) occurrence in groundwater has previously been linked to industrial releases and the historic use of Chilean nitrate fertilizers. However, recently a number of occurrences have been identified for which there is no obvious anthropogenic source. Groundwater from an area of 155 000 km2 in 56 counties in northwest Texas and eastern New Mexico is impacted by the presence of ClO4-. Concentrations were generally low (<4 ppb), although some areas are impacted by concentrations up to 200 ppb. ClO4- distribution is not related to well type (public water system, domestic, agricultural, or water-table monitoring) or aquifer (Ogallala, Edward Trinity High Plains, Edwards Trinity Plateau, Seymour, or Cenozoic). Results from vertically nested wells strongly indicate a surface source. The source of ClO4- appears most likely to be atmospheric deposition. Evidence supporting this hypothesis primarily relates to the presence of ClO4- in tritium-free older water, the lack of relation between land use and concentration distribution, the inability of potential anthropogenic sources to account for the estimated mass of ClO4-, and the positive relationship between conserved anions (e.g., IO3-, Cl-, SO4-2) and ClO4-. The ClO4- distribution appears to be mainly related to evaporative concentration and unsaturated transport. This process has led to higher ClO4- and other ion concentrations in groundwater where the water table is relatively shallow, and in areas with lower saturated thickness. Irrigation may have accelerated this process in some areas by increasing the transport of accumulated salts and by increasing the number of evaporative cycles. Results from this study highlight the potential for ClO4- to impact groundwater in arid and semiarid areas through long-term atmospheric deposition. © 2006 American Chemical Society.

  11. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-07

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated with bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm(3) calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that lets one envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
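    For context, a minimal sketch of the TG-43 1D (point-source) formalism that such validations compare against; the dose-rate constant, air-kerma strength, radial dose function table and anisotropy factor below are placeholders, not SelectSeed consensus data.

```python
# Sketch of the TG-43 1D formalism; all numerical values are placeholders.
import numpy as np

Lambda = 0.965          # dose-rate constant (cGy h^-1 U^-1), placeholder
S_k = 0.5               # air-kerma strength (U), placeholder
r0 = 1.0                # reference distance (cm)

# sparse illustrative radial dose function g(r); real tables are denser
r_tab = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
g_tab = np.array([1.04, 1.00, 0.81, 0.63, 0.48, 0.36])
phi_an = 0.94           # 1D anisotropy factor, placeholder

def dose_rate(r):
    """TG-43 1D: D(r) = S_k * Lambda * (r0/r)^2 * g(r) * phi_an."""
    g = np.interp(r, r_tab, g_tab)
    return S_k * Lambda * (r0 / r) ** 2 * g * phi_an

for r in (0.5, 1.0, 2.0):
    print(f"r = {r} cm: {dose_rate(r):.3f} cGy/h")
```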

  12. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated with bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm3 calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that lets one envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  13. Gaussian curvature directs the distribution of spontaneous curvature on bilayer membrane necks.

    PubMed

    Chabanon, Morgan; Rangamani, Padmini

    2018-03-28

    Formation of membrane necks is crucial for fission and fusion in lipid bilayers. In this work, we seek to answer the following fundamental question: what is the relationship between protein-induced spontaneous mean curvature and the Gaussian curvature at a membrane neck? Using an augmented Helfrich model for lipid bilayers to include membrane-protein interaction, we solve the shape equation on catenoids to find the field of spontaneous curvature that satisfies mechanical equilibrium of membrane necks. In this case, the shape equation reduces to a variable coefficient Helmholtz equation for spontaneous curvature, where the source term is proportional to the Gaussian curvature. We show how this latter quantity is responsible for non-uniform distribution of spontaneous curvature in minimal surfaces. We then explore the energetics of catenoids with different spontaneous curvature boundary conditions and geometric asymmetries to show how heterogeneities in spontaneous curvature distribution can couple with Gaussian curvature to result in membrane necks of different geometries.
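    A short symbolic check of the geometric ingredient named above, under the standard catenoid parametrization (not taken from the paper): the Gaussian curvature follows from the first and second fundamental forms and is everywhere negative, largest in magnitude at the neck, which is what makes the Helmholtz source term non-uniform there.

```python
# Gaussian curvature of a catenoid via K = (LN - M^2)/(EG - F^2).
# Standard parametrization with neck radius a; not the paper's notation.
import sympy as sp

u, v, a = sp.symbols("u v a", real=True, positive=True)
X = sp.Matrix([a * sp.cosh(v / a) * sp.cos(u),
               a * sp.cosh(v / a) * sp.sin(u),
               v])

Xu, Xv = X.diff(u), X.diff(v)
E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)   # first fundamental form

n = Xu.cross(Xv)
n = n / n.norm()                               # unit normal
L = X.diff(u, 2).dot(n)                        # second fundamental form
M = X.diff(u, v).dot(n)
N = X.diff(v, 2).dot(n)

K = sp.simplify((L * N - M**2) / (E * G - F**2))
print(K)   # simplifies to -1/(a**2*cosh(v/a)**4): negative, peaked at neck
```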

  14. The investigation of atmospheric deposition distribution of organochlorine pesticides (OCPs) in Turkey

    NASA Astrophysics Data System (ADS)

    Cindoruk, S. Sıddık; Tasdemir, Yücel

    2014-04-01

    Atmospheric deposition is a significant pollution pathway leading to contamination of remote and clean sites, surface waters and soils. Since persistent organic pollutants (POPs) persist in the atmosphere without degradation, they can be transported and deposited onto clean surfaces. Organochlorine pesticides are an important group of POPs with toxic and harmful effects on living organisms and the environment. Therefore, atmospheric deposition levels and characteristics are important for determining the pollution loading of water and soil surfaces in terms of POPs. This study reports the distribution of atmospheric deposition, including bulk, dry and wet deposition and air-water exchange of particle- and gas-phase OCPs, from a one-year sampling campaign. The deposition distribution showed that the main mechanism for OCP deposition is wet processes, accounting for 69% of total deposition. Deposition of individual OCP compounds varied according to atmospheric concentration and deposition mechanism. HCH compounds were the dominant pesticide species for all deposition mechanisms, and HCH deposition constituted 65% of Σ10OCP deposition.

  15. Geography of Global Forest Carbon Stocks & Dynamics

    NASA Astrophysics Data System (ADS)

    Saatchi, S. S.; Yu, Y.; Xu, L.; Yang, Y.; Fore, A.; Ganguly, S.; Nemani, R. R.; Zhang, G.; Lefsky, M. A.; Sun, G.; Woodall, C. W.; Naesset, E.; Seibt, U. H.

    2014-12-01

    Spatially explicit distribution of carbon stocks and dynamics in global forests can greatly reduce the uncertainty in the terrestrial portion of the global carbon cycle by improving estimates of emissions and uptakes from land use activities, and can help with greenhouse gas inventories at regional and national scales. Here, we produce the first global distribution of carbon stocks in living woody biomass at ~100 m (1-ha) resolution for circa 2005 from a combination of satellite observations and ground inventory data. The total carbon stored in live woody biomass is estimated to be 337 PgC, with 258 PgC aboveground and 79 PgC in roots, partitioned globally among boreal (20%), tropical evergreen (50%), temperate (12%), and woodland savanna and shrubland (15%) forests. We use a combination of satellite observations of tree height and remote sensing data on deforestation and degradation to quantify the dynamics of these forests at the biome level globally, and provide the geographical distribution of carbon storage dynamics in terms of sinks and sources globally.

  16. Small Craters and Their Diagnostic Potential

    NASA Astrophysics Data System (ADS)

    Bugiolacchi, R.

    2017-07-01

    I analysed and compared the size-frequency distributions of craters in the Apollo 17 landing region, comprising six mare terrains with varying morphologies and cratering characteristics, along with three other regions allegedly affected by the same secondary event (the Tycho secondary surge). I propose that for the smaller crater sizes (in this work 9-30 m), a] an exponential curve of power -0.18D (i.e. ∝ exp(-0.18D)) can approximate crater densities N (km^-2) in a regime of equilibrium, while b] a power function D^-3 closely describes the factorised representation of craters by size (1 m). The saturation level within the Central Area suggests that c] either the modelled rates of crater erosion on the Moon should be revised, or the Tycho event occurred much earlier in time than the current estimate. We propose that d] the size-frequency distribution of small secondary craters may bear the signature (in terms of size-frequency distribution of debris/surge) of the source impact, and that this observation should be tested further.
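    A brief sketch of the two proposed descriptions, using the constants quoted in the abstract but synthetic stand-in counts in place of the mapped crater data:

```python
# Sketch of the two functional forms; counts are synthetic placeholders.
import numpy as np

D = np.linspace(9, 30, 22)              # crater diameters (m)

def density_equilibrium(D, n0=1.0):
    """Equilibrium crater density per km^2, ~ n0 * exp(-0.18 D)."""
    return n0 * np.exp(-0.18 * D)

def sfd_power_law(D, c=1.0):
    """Factorised size-frequency representation, ~ c * D^-3."""
    return c * D**-3.0

# a log-linear fit recovers the exponential slope from noisy counts
rng = np.random.default_rng(3)
counts = density_equilibrium(D) * rng.lognormal(0.0, 0.1, D.size)
slope, _ = np.polyfit(D, np.log(counts), 1)
print(f"fitted exponential slope: {slope:.3f} (expected -0.18)")
```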

  17. Analyses of multi-color plant-growth light sources in achieving maximum photosynthesis efficiencies with enhanced color qualities.

    PubMed

    Wu, Tingzhu; Lin, Yue; Zheng, Lili; Guo, Ziquan; Xu, Jianxing; Liang, Shijie; Liu, Zhuguagn; Lu, Yijun; Shih, Tien-Mo; Chen, Zhong

    2018-02-19

    An optimal design of light-emitting diode (LED) lighting that benefits both photosynthetic performance for plants and visual health for human eyes has drawn considerable attention. In the present study, we have developed a multi-color driving algorithm that serves as a liaison between desired spectral power distributions and pulse-width-modulation duty cycles. With the aid of this algorithm, our multi-color plant-growth light sources can optimize correlated color temperature (CCT) and color rendering index (CRI) such that the photosynthetic luminous efficacy of radiation (PLER) is maximized regardless of the number of LEDs and the type of photosynthetic action spectrum (PAS). To illustrate the accuracy of the proposed algorithm and the practicality of our plant-growth light sources, we chose six color LEDs and the German PAS for the experiments. Finally, our study can provide a useful guide to improving light quality in plant factories, in which long-term co-inhabitance of plants and human beings is required.

  18. 8.76 W mid-infrared supercontinuum generation in a thulium doped fiber amplifier

    NASA Astrophysics Data System (ADS)

    Michalska, Maria; Grzes, Pawel; Swiderski, Jacek

    2018-07-01

    A stable mid-infrared supercontinuum (SC) generation with a maximum average power of 8.76 W in a spectral band of 1.9-2.65 μm is reported. To broaden the bandwidth of the SC, a 1.55 μm pulsed laser system delivering 1 ns pulses at a pulse repetition frequency of 500 kHz was used as a seed source for a one-stage thulium-doped fiber amplifier. The power conversion efficiency for wavelengths longer than 2.4 μm and 2.5 μm was determined to be 28% and 18%, respectively, which is believed to be the most efficient power distribution towards the mid-infrared in SC sources based on Tm-doped fibers. The power spectral density of the continuum was calculated to be >13 mW/nm, with potential for further power scaling. A long-term power stability test, showing power fluctuations <3%, proved the robustness and reliability of the developed SC source.

  19. Cramer-Rao bound analysis of wideband source localization and DOA estimation

    NASA Astrophysics Data System (ADS)

    Yip, Lean; Chen, Joe C.; Hudson, Ralph E.; Yao, Kung

    2002-12-01

    In this paper, we derive the Cramér-Rao Bound (CRB) for wideband source localization and DOA estimation. The resulting CRB formula can be decomposed into two terms: one that depends on the signal characteristic and one that depends on the array geometry. For a uniformly spaced circular array (UCA), a concise analytical form of the CRB can be given by using some algebraic approximation. We further define a DOA beamwidth based on the resulting CRB formula. The DOA beamwidth can be used to design the sampling angular spacing for the Maximum-likelihood (ML) algorithm. For a randomly distributed array, we use an elliptical model to determine the largest and smallest effective beamwidth. The effective beamwidth and the CRB analysis of source localization allow us to design an efficient algorithm for the ML estimator. Finally, our simulation results of the Approximated Maximum Likelihood (AML) algorithm are demonstrated to match well to the CRB analysis at high SNR.
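    As a simplified illustration of the decomposition noted above (a signal-dependent factor times an array-geometry factor), the following sketch evaluates the textbook narrowband deterministic CRB for a single source on a UCA; it is not the paper's wideband derivation, and the array parameters are placeholders.

```python
# Narrowband deterministic CRB for one source on a uniform circular array.
# Textbook formula, not the paper's wideband result; parameters invented.
import numpy as np

M, radius_wl = 8, 0.5                   # sensors; radius in wavelengths
k_r = 2 * np.pi * radius_wl             # k * r
phi = 2 * np.pi * np.arange(M) / M      # sensor angular positions

def crb_doa(theta, snr=10.0, snapshots=100):
    a = np.exp(1j * k_r * np.cos(theta - phi))        # steering vector
    da = -1j * k_r * np.sin(theta - phi) * a          # d a / d theta
    # project the derivative orthogonally to a: pure geometry factor
    proj = da - a * (a.conj() @ da) / (a.conj() @ a)
    geom = (proj.conj() @ proj).real
    # CRB = 1 / (2 * snapshots * SNR * geometry term), in rad^2
    return 1.0 / (2.0 * snapshots * snr * geom)

theta = np.deg2rad(30.0)
std_deg = np.rad2deg(np.sqrt(crb_doa(theta)))
print(f"CRB standard deviation at 30 deg: {std_deg:.4f} deg")
```

    The separation is visible in the return expression: the snapshot and SNR product carries the signal characteristics, while the projected-derivative norm carries only the array geometry.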

  20. Laboratory-based micro-X-ray fluorescence setup using a von Hamos crystal spectrometer and a focused beam X-ray tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kayser, Y., E-mail: yves.kayser@psi.ch; Paul Scherrer Institut, 5232 Villigen-PSI; Błachucki, W.

    2014-04-15

    The high-resolution von Hamos bent crystal spectrometer of the University of Fribourg was upgraded with a focused X-ray beam source with the aim of performing micro-sized X-ray fluorescence (XRF) measurements in the laboratory. The focused X-ray beam source integrates a collimating optics mounted on a low-power micro-spot X-ray tube and a focusing polycapillary half-lens placed in front of the sample. The performance of the setup was probed in terms of spatial and energy resolution. In particular, the fluorescence intensity and energy resolution of the von Hamos spectrometer equipped with the novel micro-focused X-ray source and with a standard high-power water-cooled X-ray tube were compared. The XRF analysis capability of the new setup was assessed by measuring the dopant distribution within the core of Er-doped SiO2 optical fibers.

  1. Intelligent Energy Management System for PV-Battery-based Microgrids in Future DC Homes

    NASA Astrophysics Data System (ADS)

    Chauhan, R. K.; Rajpurohit, B. S.; Gonzalez-Longatt, F. M.; Singh, S. N.

    2016-06-01

    This paper presents a novel intelligent energy management system (IEMS) for a DC microgrid connected to the public utility (PU), photovoltaics (PV) and a multi-battery bank (BB). The control objectives of the proposed IEMS are: (i) to ensure load sharing among sources according to source capacity, (ii) to reduce the power loss in the system (high efficiency), and (iii) to enhance the system reliability and power quality. The proposed IEMS is novel in that it follows the ideal characteristics of the battery (under some assumptions) for power sharing, and selects the closest source in order to minimize power losses. The IEMS allows continuous and accurate monitoring with intelligent control of distribution system operations such as the battery bank energy storage (BBES) system, the PV system and customer utilization of electric power. The proposed IEMS gives better operational performance in terms of load sharing, loss minimization and reliability enhancement of the DC microgrid.
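    A minimal sketch of capacity-proportional load sharing, the first control objective; the dispatch rule and the ratings are illustrative assumptions, not the paper's controller.

```python
# Illustrative capacity-proportional dispatch; names and kW ratings are
# placeholders, not taken from the paper.
def share_load(load_kw, capacities_kw):
    """Split demand across sources in proportion to rated capacity."""
    total = sum(capacities_kw.values())
    return {name: load_kw * cap / total
            for name, cap in capacities_kw.items()}

sources = {"PV": 5.0, "battery_bank": 3.0, "public_utility": 10.0}
print(share_load(6.0, sources))
# -> PV ~1.67 kW, battery_bank 1.0 kW, public_utility ~3.33 kW
```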

  2. Hospitals and plastics. Dioxin prevention and medical waste incinerators.

    PubMed

    Thornton, J; McCally, M; Orris, P; Weinberg, J

    1996-01-01

    CHLORINATED DIOXINS and related compounds are extremely potent toxic substances, producing effects in humans and animals at extremely low doses. Because these compounds are persistent in the environment and accumulate in the food chain, they are now distributed globally, and every member of the human population is exposed to them, primarily through the food supply and mothers' milk. An emerging body of information suggests that dioxin contamination has reached a level that may pose a large-scale, long-term public health risk. Of particular concern are dioxin's effects on reproduction, development, immune system function, and carcinogenesis. Medical waste incineration is a major source of dioxins. Polyvinyl chloride (PVC) plastic, as the dominant source of organically bound chlorine in the medical waste stream, is the primary cause of "iatrogenic" dioxin produced by the incineration of medical wastes. Health professionals have a responsibility to work to reduce dioxin exposure from medical sources. Health care institutions should implement policies to reduce the use of PVC plastics, thus achieving major reductions in medically related dioxin formation.

  3. Integral-moment analysis of the BATSE gamma-ray burst intensity distribution

    NASA Technical Reports Server (NTRS)

    Horack, John M.; Emslie, A. Gordon

    1994-01-01

    We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V_max. If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides for the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
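    A short sketch of the statistics this technique extracts, on synthetic stand-in data rather than BATSE measurements, with bootstrap resampling standing in for the analytic uncertainty propagation:

```python
# Toy data, not BATSE: mean, variance and skewness of a burst intensity
# distribution with bootstrap uncertainties.
import numpy as np

rng = np.random.default_rng(4)
intensity = rng.lognormal(0.0, 0.8, 260)      # stand-in for 260 bursts

def moments(x):
    m, v = x.mean(), x.var()
    skew = ((x - m) ** 3).mean() / v**1.5
    return m, v, skew

boot = np.array([moments(rng.choice(intensity, intensity.size))
                 for _ in range(2000)])
for name, est, err in zip(("mean", "variance", "skewness"),
                          moments(intensity), boot.std(axis=0)):
    print(f"{name}: {est:.3f} +/- {err:.3f}")
```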

  4. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access to only a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning, by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, considering robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object data.

  5. Evolution of Welding-Fume Aerosols with Time and Distance from the Source: A study was conducted on the spatiotemporal variability in welding-fume concentrations for the characterization of first- and second-hand exposure to welding fumes.

    PubMed

    Cena, L G; Chen, B T; Keane, M J

    2016-08-01

    Gas metal arc welding fumes were generated from mild-steel plates and measured near the arc (30 cm), representing first-hand exposure of the welder, and farther away from the source (200 cm), representing second-hand exposure of adjacent workers. Measurements were taken during 1-min welding runs and at subsequent 5-min intervals after the welding process was stopped. Number size distributions were measured in real time. Particle mass distributions were measured using a micro-orifice uniform deposition impactor, and total mass concentrations were measured with polytetrafluoroethylene filters. Membrane filters were used for collecting morphology samples for electron microscopy. Average mass concentrations were 45 mg/m3 near the arc and 9 mg/m3 at the farther distance. The discrepancy in concentrations at the two distances was attributed to the presence of spatter particles, which were observed only in the morphology samples near the source. As fumes aged over time, mass concentrations at the farther distance decreased by 31% (6.2 mg/m3) after 5 min and an additional 13% (5.4 mg/m3) after 10 min. Particle number and mass distributions during active welding were similar at both distances, indicating similar exposure patterns for welders and adjacent workers. Exceptions were recorded for particles smaller than 50 nm and larger than 3 μm, where concentrations were higher near the arc, indicating higher exposures for welders. These results were confirmed by microscopy analysis. As residence time increased, number concentrations decreased dramatically. In terms of particle number concentrations, second-hand exposures to welding fumes during active welding may be as high as first-hand exposures.

  6. Differences in tsunami generation between the December 26, 2004 and March 28, 2005 Sumatra earthquakes

    USGS Publications Warehouse

    Geist, E.L.; Bilek, S.L.; Arcas, D.; Titov, V.V.

    2006-01-01

    Source parameters affecting tsunami generation and propagation for the Mw > 9.0 December 26, 2004 and the Mw = 8.6 March 28, 2005 earthquakes are examined to explain the dramatic difference in tsunami observations. We evaluate both scalar measures (seismic moment, maximum slip, potential energy) and finite-source representations (distributed slip and far-field beaming from finite source dimensions) of tsunami generation potential. There is significant variability in local tsunami runup with respect to the most readily available measure, seismic moment. The local tsunami intensity for the December 2004 earthquake is similar to that of other tsunamigenic earthquakes of comparable magnitude. In contrast, the March 2005 local tsunami was deficient relative to its earthquake magnitude. Tsunami potential energy calculations more accurately reflect the difference in tsunami severity, although these calculations depend on knowledge of the slip distribution and are therefore difficult to implement in a real-time system. A significant factor affecting tsunami generation that is unaccounted for in these scalar measures is the location of the regions of seafloor displacement relative to the overlying water depth. The deficiency of the March 2005 tsunami seems to be related to the concentration of slip in the down-dip part of the rupture zone and the fact that a substantial portion of the vertical displacement field occurred in shallow water or on land. The comparison of the December 2004 and March 2005 Sumatra earthquakes presented in this study is analogous to previous studies comparing the 1952 and 2003 Tokachi-Oki earthquakes and tsunamis, in terms of the effect slip distribution has on local tsunamis. Results from these studies indicate the difficulty in rapidly assessing local tsunami runup from magnitude and epicentral location information alone.
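
    The tsunami potential energy referred to above is commonly computed from the initial sea-surface displacement field; a minimal sketch under that standard formulation (the grid, density value, and Gaussian uplift are illustrative assumptions, not the study's source model) is:

      import numpy as np

      RHO, G = 1027.0, 9.81      # seawater density (kg/m^3), gravity (m/s^2)

      def tsunami_potential_energy(eta, dx, dy):
          # E = 0.5 * rho * g * integral of eta^2 over the displaced area (J)
          return 0.5 * RHO * G * np.sum(eta ** 2) * dx * dy

      # Toy initial condition: 1 m peak Gaussian uplift, ~20 km length scale
      x = np.linspace(-50e3, 50e3, 201)
      xx, yy = np.meshgrid(x, x)
      eta = np.exp(-(xx ** 2 + yy ** 2) / (2 * 20e3 ** 2))
      print(f"E = {tsunami_potential_energy(eta, 500.0, 500.0):.3e} J")

    Because the energy depends on the square of the displacement over the ocean, slip concentrated under shallow water or on land contributes little, consistent with the March 2005 deficiency described above.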

  7. FIRRE command and control station (C2)

    NASA Astrophysics Data System (ADS)

    Laird, R. T.; Kramer, T. A.; Cruickshanks, J. R.; Curd, K. M.; Thomas, K. M.; Moneyhun, J.

    2006-05-01

    The Family of Integrated Rapid Response Equipment (FIRRE) is an advanced technology demonstration program intended to develop a family of affordable, scalable, modular, and logistically supportable unmanned systems to meet urgent operational force protection needs and requirements worldwide. The near-term goal is to provide the best available unmanned ground systems to the warfighter in Iraq and Afghanistan. The overarching long-term goal is to develop a fully-integrated, layered force protection system of systems for our forward deployed forces that is networked with the future force C4ISR systems architecture. The intent of the FIRRE program is to reduce manpower requirements, enhance force protection capabilities, and reduce casualties through the use of unmanned systems. FIRRE is sponsored by the Office of the Under Secretary of Defense, Acquisitions, Technology and Logistics (OUSD AT&L), and is managed by the Product Manager, Force Protection Systems (PM-FPS). The FIRRE Command and Control (C2) Station supports two operators, hosts the Joint Battlespace Command and Control Software for Manned and Unmanned Assets (JBC2S), and will be able to host Mission Planning and Rehearsal (MPR) software. The C2 Station consists of an M1152 HMMWV fitted with an S-788 TYPE I shelter. The C2 Station employs five 24" LCD monitors for display of JBC2S software [1], MPR software, and live video feeds from unmanned systems. An audio distribution system allows each operator to select between various audio sources including: AN/PRC-117F tactical radio (SINCGARS compatible), audio prompts from JBC2S software, audio from unmanned systems, audio from other operators, and audio from external sources such as an intercom in an adjacent Tactical Operations Center (TOC). A power distribution system provides battery backup for momentary outages. The Ethernet network, audio distribution system, and audio/video feeds are available for use outside the C2 Station.

  8. Strategic Energy Planning (Area 1) Consultants Reports to Citizen Potawatomi Nation Federally Recognized Indian Tribe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Marvin; Bose, James; Beier, Richard

    2004-12-01

    The assets that Citizen Potawatomi Nation holds were evaluated to help define the strengths and weaknesses to be used in pursuing economic prosperity. With this baseline assessment, a Planning Team will create a vision for the tribe to integrate into long-term energy and business strategies. Energy efficiency devices, systems and technologies were identified, and an estimate of the cost benefits of the more promising ideas is submitted for possible inclusion in the final energy plan. Multiple energy resources and sources were identified and their attributes assessed to determine the appropriateness of each. Methods of saving energy were evaluated and reported on, and potential revenue-generating sources that specifically fit the tribe were identified and reported. A primary goal is to create long-term energy strategies to explore development of tribal utility options and analyze renewable energy and energy efficiency options. Associated goals are to consider exploring energy efficiency and renewable economic development projects involving the following topics: (1) Home-scale projects may include construction of a home with energy efficiency or renewable energy features and retrofitting an existing home to add such features. (2) Community-scale projects may include medium- to large-scale energy-efficient building construction, retrofit projects, or installation of community renewable energy systems. (3) Small business development may include the creation of a tribal enterprise that would manufacture and distribute solar- and wind-powered equipment for ranches and farms, or a contracting business offering energy efficiency and renewable retrofits such as geothermal heat pumps. (4) Commercial-scale energy projects may include, at a larger scale, the formation of a tribal utility to sell power to the commercial grid or to transmit and distribute power throughout the tribal community, as well as hydrogen production and propane and natural-gas distribution systems.

  9. Robust optimization based upon statistical theory.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment, despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. A dose metric is evaluated on a multitude of geometry instances sampled from this model, and the resulting pdf of the dose metric is termed the outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance, in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, helping the expert find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained in which the probability of exceeding a given OAR maximum and that of falling short of a given target goal are minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery; any model that quantifies organ movement and deformation in terms of probability distributions can be used as the basis for the algorithm. Thus, it can generate dose distributions that are robust against interfraction and intrafraction motion alike, effectively removing the need for indiscriminate safety margins.
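
    As a toy illustration of optimizing the outcome distribution rather than a nominal plan value, the sketch below (the 1-D target, motion model, dose level, and weighting constant are illustrative assumptions, not the authors' implementation) samples geometry instances, evaluates a generalized EUD on each, and scores a plan parameter by the mean and variance of the resulting metric distribution:

      import numpy as np

      rng = np.random.default_rng(0)
      shifts = rng.normal(0.0, 3.0, size=200)   # mm, sampled motion model

      def eud(dose, a=8.0):
          # Generalized equivalent uniform dose of a voxel dose vector
          return np.mean(dose ** a) ** (1.0 / a)

      def outcome_distribution(margin):
          # Dose metric on every geometry instance: a toy 1-D target
          # (|x| < 20 mm) irradiated by a flat 60 Gy field widened by margin
          x = np.linspace(-20.0, 20.0, 81)
          return np.array([eud(60.0 * (np.abs(x + s) < 20.0 + margin))
                           for s in shifts])

      def objective(margin, d_goal=60.0, lam=1.0):
          # Score the *shape* of the outcome distribution (mean, variance),
          # not a nominal dose computed on a single geometry
          m = outcome_distribution(margin)
          return (m.mean() - d_goal) ** 2 + lam * m.var()

      margins = np.linspace(0.0, 15.0, 61)
      best = margins[int(np.argmin([objective(m) for m in margins]))]
      print(f"robust margin = {best:.2f} mm")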

  10. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    16 CFR Part 1512 (Requirements for Bicycles), Table 4, Relative Energy Distribution of Sources. Wavelength (nanometers) and relative energy: 380, 9.79; 390, 12.09; 400, 14.71; 410, 17.68; 420, 21…

  11. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    16 CFR Part 1512 (Requirements for Bicycles), Table 4, Relative Energy Distribution of Sources. Wavelength (nanometers) and relative energy: 380, 9.79; 390, 12.09; 400, 14.71; 410, 17.68; 420, 21…

  12. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    16 CFR Part 1512 (Requirements for Bicycles), Table 4, Relative Energy Distribution of Sources. Wavelength (nanometers) and relative energy: 380, 9.79; 390, 12.09; 400, 14.71; 410, 17.68; 420, 21…

  13. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    16 CFR Part 1512 (Requirements for Bicycles), Table 4, Relative Energy Distribution of Sources. Wavelength (nanometers) and relative energy: 380, 9.79; 390, 12.09; 400, 14.71; 410, 17.68; 420, 21…

  14. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    16 CFR Part 1512 (Requirements for Bicycles), Table 4, Relative Energy Distribution of Sources. Wavelength (nanometers) and relative energy: 380, 9.79; 390, 12.09; 400, 14.71; 410, 17.68; 420, 21…

  15. Quantification of the evolution of firm size distributions due to mergers and acquisitions

    PubMed Central

    Sornette, Didier

    2017-01-01

    The distribution of firm sizes is known to be heavy tailed. To account for this stylized fact, previous economic models have focused mainly on growth through investments in a company's own operations (internal growth). The impact of mergers and acquisitions (M&A) on firm size (external growth) is thereby often not taken into consideration, notwithstanding its potentially large impact. In this article, we make a first step toward accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A apparently has little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes. PMID:28841683
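
    The paper's integro-differential equation is not reproduced here, but a minimal Smoluchowski-type merger kernel (our stand-in, not the authors' equation; the kernel strength, time step, and initial condition are illustrative assumptions) shows numerically how pure M&A dynamics slowly fatten the tail of a firm size distribution:

      import numpy as np

      def evolve_mergers(n, kernel=1e-4, dt=0.1, steps=300):
          # Firms of sizes i and j merge at rate kernel * n_i * n_j into a
          # firm of size i + j; n[k] is the number of firms of size k.
          n = n.astype(float).copy()
          kmax = n.size
          for _ in range(steps):
              gain = np.zeros(kmax)
              for k in range(2, kmax):
                  i = np.arange(1, k)            # all pairs (i, k - i)
                  gain[k] = 0.5 * kernel * np.dot(n[i], n[k - i])
              loss = kernel * n * n.sum()        # any merger removes a firm
              n = np.clip(n + dt * (gain - loss), 0.0, None)
          return n

      n0 = np.zeros(200)
      n0[1] = 1e3                                # all firms start at size 1
      nt = evolve_mergers(n0)                    # tail fattens over time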

  16. Volatile Organic Compounds: Characteristics, distribution and sources in urban schools

    NASA Astrophysics Data System (ADS)

    Mishra, Nitika; Bartsch, Jennifer; Ayoko, Godwin A.; Salthammer, Tunga; Morawska, Lidia

    2015-04-01

    Long-term exposure to organic pollutants, both inside and outside school buildings, may affect children's health and influence their learning performance. Since children spend a significant amount of time in school, air quality, especially in classrooms, plays a key role in determining the health risks associated with exposure at schools. Within this context, the present study investigated the ambient concentrations of Volatile Organic Compounds (VOCs) in 25 primary schools in Brisbane, with the aims of quantifying indoor and outdoor VOC concentrations, identifying VOC sources and their contributions, and, based on these, proposing mitigation measures to reduce VOC exposure in schools. One of the most important findings is the presence of indoor sources, indicated by I/O ratios >1 in 19 schools. Principal Component Analysis with Varimax rotation was used to identify common sources of VOCs, and source contributions were calculated using the Absolute Principal Component Scores technique. The results showed that petrol vehicle exhaust contributed 47% of VOCs outdoors, whereas indoors cleaning products had the highest contribution (41%), followed by air fresheners and art and craft activities. These findings point to the need for a range of basic precautions during the selection, use and storage of cleaning products and materials to reduce the risk from these sources.
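
    A minimal Python sketch of the PCA/Absolute Principal Component Scores (APCS) apportionment described above (toy data; the varimax rotation step is omitted for brevity and the component count is an assumption, not the study's choice):

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      # X: samples x VOC-species concentration matrix (random stand-in)
      rng = np.random.default_rng(1)
      X = rng.lognormal(size=(100, 8))

      scaler = StandardScaler().fit(X)
      pca = PCA(n_components=3).fit(scaler.transform(X))
      scores = pca.transform(scaler.transform(X))

      # APCS: re-score an artificial "true zero" sample and subtract it,
      # so factor scores become absolute rather than mean-centred
      z0 = scaler.transform(np.zeros((1, X.shape[1])))
      apcs = scores - pca.transform(z0)

      # Regress total VOC on the APCS; coefficient * mean(APCS) estimates
      # each source's mean contribution, from which shares follow
      total = X.sum(axis=1)
      reg = LinearRegression().fit(apcs, total)
      contrib = reg.coef_ * apcs.mean(axis=0)
      share = contrib / (contrib.sum() + reg.intercept_)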

  17. 77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling… Regulatory Guide (RG) 1.82, "Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident"… regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment…

  18. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating subevents of different sizes, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R⁻². The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function with constant rise time on the subevent). The final slip of each subevent is related to its characteristic dimension using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model on a relatively coarse grid of points covering the fault plane; the Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform spatial distributions of final slip and rupture velocity. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event, and strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
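
    A minimal sketch of the subevent generator implied by this description (radii drawn from N(>R) proportional to R⁻², areas summing to the target rupture area; the minimum radius and the stress-drop constant are illustrative assumptions):

      import numpy as np

      def draw_subevents(target_area, r_min=0.5, rng=None):
          # Inverse-CDF sampling from P(R > r) = (r / r_min)^-2, i.e. a
          # density ~ r^-3, until summed areas fill the target rupture area.
          rng = rng or np.random.default_rng()
          r_max = np.sqrt(target_area / np.pi)   # cap at the target dimension
          radii, filled = [], 0.0
          while filled < target_area:
              r = min(r_min * (1.0 - rng.random()) ** -0.5, r_max)
              radii.append(r)
              filled += np.pi * r * r
          return np.array(radii)

      radii = draw_subevents(target_area=np.pi * 10.0 ** 2)  # 10 km target
      slip = 1e-4 * radii     # constant stress-drop scaling: slip ~ R (toy)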

  19. Demographic changes following mechanical removal of exotic brown trout in an Intermountain West (USA), high-elevation stream

    USGS Publications Warehouse

    Saunders, W. Carl; Budy, Phaedra E.; Thiede, Gary P.

    2015-01-01

    Exotic species present a great threat to native fish conservation; however, eradicating exotics is expensive and often impractical. Mechanical removal can be ineffective for eradication, but nonetheless may increase management effectiveness by identifying portions of a watershed that are strong sources of exotics. We used mechanical removal to understand processes driving exotic brown trout (Salmo trutta) populations in the Logan River, Utah. Our goals were to: (i) evaluate the demographic response of brown trout to mechanical removal, (ii) identify sources of brown trout recruitment at a watershed scale and (iii) evaluate whether mechanical removal can reduce brown trout densities. We removed brown trout from 2 km of the Logan River (4174 fish), and 5.6 km of Right Hand Fork (RHF, 15,245 fish), a low-elevation tributary, using single-pass electrofishing. We compared fish abundance and size distributions prior to, and after 2 years of mechanical removal. In the Logan River, immigration to the removal reach and high natural variability in fish abundances limited the response to mechanical removal. In contrast, mechanical removal in RHF resulted in a strong recruitment pulse, shifting the size distribution towards smaller fish. These results suggest that, before removal, density-dependent mortality or emigration of juvenile fish stabilised adult populations and may have provided a source of juveniles to the main stem. Overall, in sites demonstrating strong density-dependent population regulation, or near sources of exotics, short-term mechanical removal has limited effects on brown trout populations but may help identify factors governing populations and inform large-scale management of exotic species.

  20. Modeling Source Water Threshold Exceedances with Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Rajagopalan, B.; Samson, C.; Summers, R. S.

    2016-12-01

    Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. They also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular the levels of organic matter in surface waters, which are of concern. All of these effects will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. The threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as a function of climate and land surface variables, enabling the model to capture temporal nonstationarity. We apply these models to threshold exceedances of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
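
    A stationary simplification of this approach (the covariate dependence of the GPD parameters is omitted; the data, threshold choice, and return period are illustrative assumptions) can be sketched with scipy:

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(7)
      toc = rng.lognormal(mean=1.0, sigma=0.5, size=2000)  # stand-in TOC, mg/L

      u = np.quantile(toc, 0.95)                 # exceedance threshold
      excess = toc[toc > u] - u

      xi, _, sigma = genpareto.fit(excess, floc=0.0)   # GPD shape and scale

      # m-observation return level: u + (sigma/xi) * ((m * zeta_u)^xi - 1),
      # with zeta_u the empirical probability of exceeding the threshold
      zeta_u = excess.size / toc.size
      m = 1000
      return_level = u + (sigma / xi) * ((m * zeta_u) ** xi - 1.0)

    In the nonstationary version described above, the scale (and possibly shape) parameter would be regressed on climate and land surface covariates instead of being fit as a constant.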

  1. Repeat synoptic sampling reveals drivers of change in carbon and nutrient chemistry of Arctic catchments

    NASA Astrophysics Data System (ADS)

    Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.

    2017-12-01

    Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: (1) distributed, top-down permafrost degradation, and (2) discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network can reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km² and included three distinct types of Arctic landscape: mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales of 1 to 20 km², indicating a continuum of diffuse- and point-source dynamics depending on solute and catchment characteristics (e.g., reactivity, topography, vegetation, surficial geology). Spatially distributed mass balance revealed conservative transport of DOC and nitrogen and indicated that there may be strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.

  2. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    NASA Astrophysics Data System (ADS)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and ATLAS Management, for long-term trends and accounting information about ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in standardizing the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer from the visualization layer: the data layer exposes data in a predefined format, while the visualization layer is designed with the visual identity of the provided graphical elements in mind and with re-usability of the visualization components across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with this separation of data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions that correlate multiple data sources has become feasible. We also discuss the automated exclusion of degraded resources and their automated recovery in various activities.

  3. Traffic and nucleation events as main sources of ultrafine particles in high-insolation developed world cities

    NASA Astrophysics Data System (ADS)

    Brines, M.; Dall'Osto, M.; Beddows, D. C. S.; Harrison, R. M.; Gómez-Moreno, F.; Núñez, L.; Artíñano, B.; Costabile, F.; Gobbi, G. P.; Salimi, F.; Morawska, L.; Sioutas, C.; Querol, X.

    2015-05-01

    Road traffic emissions are often considered the main source of ultrafine particles (UFP, diameter smaller than 100 nm) in urban environments. However, recent studies worldwide have shown that - in high-insolation urban regions at least - new particle formation events can also contribute to UFP. In order to quantify such events we systematically studied three cities located in predominantly sunny environments: Barcelona (Spain), Madrid (Spain) and Brisbane (Australia). Three long-term data sets (1-2 years) of fine and ultrafine particle number size distributions (measured by SMPS, Scanning Mobility Particle Sizer) were analysed. Compared to total particle number concentrations, aerosol size distributions offer far more information on the type, origin and atmospheric evolution of the particles. By applying k-means clustering analysis, we categorized the collected aerosol size distributions into three main categories: "Traffic" (prevailing 44-63% of the time), "Nucleation" (14-19%) and "Background pollution and Specific cases" (7-22%). Measurements from Rome (Italy) and Los Angeles (USA) were also included to complement the study. The daily variation of the average UFP concentrations for a typical nucleation day at each site revealed a similar pattern for all cities, with three distinct particle bursts. A morning and an evening spike reflected traffic rush hours, whereas a third one at midday showed nucleation events. The photochemically nucleated particles' burst lasted 1-4 h, reaching sizes of 30-40 nm. On average, the occurrence of particle size spectra dominated by nucleation events was 16% of the time, showing the importance of this process as a source of UFP in urban environments exposed to high solar radiation. Nucleation events lasting for 2 h or more occurred on 55% of the days, this extending to > 4 h in 28% of the days, demonstrating that atmospheric conditions in urban environments are not favourable to the growth of photochemically nucleated particles. In summary, although traffic remains the main source of UFP in urban areas, in developed countries with high insolation urban nucleation events are also a main source of UFP. If traffic-related particle concentrations are reduced in the future, nucleation events will likely increase in urban areas, due to the reduced urban condensation sinks.
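
    The clustering step can be sketched as follows (toy spectra; the bin count and normalisation are our assumptions, and k = 3 mirrors the three categories above but the code is an illustration, not the study's pipeline):

      import numpy as np
      from sklearn.preprocessing import normalize
      from sklearn.cluster import KMeans

      # spectra: hours x size-bins matrix of dN/dlogDp from an SMPS
      # (random stand-in here)
      rng = np.random.default_rng(3)
      spectra = rng.lognormal(size=(500, 32))

      # Normalise each spectrum so clusters reflect shape, not total number
      X = normalize(spectra, norm="l1")

      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
      labels = km.labels_             # e.g. traffic / nucleation / background
      occurrence = np.bincount(labels) / labels.size  # fraction of hours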

  4. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study proposes an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra than existing approaches provide. We designed the primary and secondary photon sources from the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons according to their path length in the filter; the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows the two sources to share the free parameters of the filter shape and to be related to each other through photon interactions in the filter. We introduced two further parameters of the primary photon source to describe the particle fluence in penumbral regions. All parameters are optimized against dose curves calculated in water using a pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of beam characteristics and dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and the reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm² fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for the 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
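
    The primary/secondary split described above follows from simple attenuation bookkeeping; a schematic sketch (the Beer-Lambert form, the filter profile, and the attenuation coefficient are illustrative assumptions, not the paper's parameterization):

      import numpy as np

      def primary_secondary(phi0, thickness, mu):
          # Photons surviving the filter form the primary source; the
          # decrement (what interacted) feeds the secondary source.
          transmitted = phi0 * np.exp(-mu * thickness)
          interacted = phi0 - transmitted
          return transmitted, interacted

      r = np.linspace(0.0, 10.0, 101)                 # off-axis distance, cm
      thickness = np.clip(2.0 - 0.15 * r, 0.0, None)  # assumed cone profile, cm
      primary, secondary = primary_secondary(1.0, thickness, mu=0.5)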

  5. Source Identification Of Airborne Antimony On The Basis Of The Field Monitoring And The Source Profiling

    NASA Astrophysics Data System (ADS)

    Iijima, A.; Sato, K.; Fujitani, Y.; Fujimori, E.; Tanabe, K.; Ohara, T.; Shimoda, M.; Kozawa, K.; Furuta, N.

    2008-12-01

    The results of long-term monitoring of airborne particulate matter (APM) in Tokyo indicate that APM is extremely enriched in antimony (Sb) compared to crustal composition, suggesting that airborne Sb derives distinctly from human activities. According to material flow analysis, automotive brake abrasion dust and fly ash from waste incinerators were suspected as the significant Sb sources. To clarify the emission sources of airborne Sb, the elemental composition, particle size distribution, and morphological profiles of dust particles collected from these two candidate sources were characterized and compared to field observation data. Brake abrasion dust samples were generated using a brake dynamometer. During the abrasion test, the particle size distribution was measured by an aerodynamic particle sizer spectrometer, and size-classified dust particles were concurrently collected by an Andersen-type air sampler. Fly ash samples were collected from several municipal waste incinerators and re-dispersed into an enclosed chamber; the particle size distribution measurement and the collection of size-classified ash particles followed the same methodologies as described above. Field observations of APM were performed at a roadside site and a residential site using an Andersen-type air sampler. Chemical analyses of metallic elements were performed by inductively coupled plasma atomic emission spectrometry and inductively coupled plasma mass spectrometry. Morphological profiling of individual particles was conducted by a scanning electron microscope equipped with an energy-dispersive X-ray spectrometer. High concentrations of Sb were detected in both candidate sources. In particular, Sb concentrations in brake abrasion dust were extremely high compared to those in ambient APM, suggesting that airborne Sb observed at the roadside may be largely derived from mechanical abrasion of automotive brake pads. The peak of the mass-based particle size distribution of brake abrasion dust occurred at a diameter of 2-3 μm. Morphologically, brake abrasion dust particles were typically edge-shaped, and high concentrations of Sb and sulfur were detected together in brake abrasion dust particles because Sb2S3 is used as a solid lubricant in automotive brake pads. Indeed, at the roadside site the total concentration of airborne Sb was twice that observed at the residential site. Moreover, the most concentrated Sb in roadside APM was found in the 2.1-3.6 μm diameter range, and among the particles collected in this size range we found many whose morphological profiles were similar to those of brake abrasion dust. Consequently, automotive brake abrasion dust is expected to be the predominant source of airborne Sb in the roadside atmosphere.

  6. Problem solving as intelligent retrieval from distributed knowledge sources

    NASA Technical Reports Server (NTRS)

    Chen, Zhengxin

    1987-01-01

    Distributed computing in intelligent systems is investigated from a different perspective. Viewing problem solving as intelligent knowledge retrieval, the use of distributed knowledge sources in intelligent systems is proposed.

  7. Understanding the Laminar Distribution of Tropospheric Ozone from Ground-Based, Airborne, Spaceborne, and Modeling Perspectives

    NASA Technical Reports Server (NTRS)

    Newchurch, Mike; Johnson, Matthew S.; Huang, Guanyu; Kuang, Shi; Wang, Lihua; Chance, Kelly; Liu, Xiong

    2016-01-01

    Laminar ozone structure is a ubiquitous feature of tropospheric-ozone distributions, resulting from dynamic and chemical atmospheric processes. Understanding the characteristics of these ozone laminae and the mechanisms that produce them is important for outlining the transport pathways of trace gases and quantifying the impact of different sources on tropospheric background ozone. In this study, we present a new method to detect ozone laminae and characterize the climatology of their occurrence frequency in terms of thickness and altitude. We employ ground-based and airborne ozone lidar measurements together with other synergistic observations and modeling to investigate the sources and mechanisms, such as biomass burning transport, stratospheric intrusion, lightning-generated NOx, and nocturnal low-level jets, that are responsible for depleted or enhanced tropospheric ozone layers. Spaceborne instruments (e.g., OMI (Ozone Monitoring Instrument), TROPOMI (Tropospheric Monitoring Instrument), and TEMPO (Tropospheric Emissions: Monitoring of Pollution)) will observe these laminae over a greater horizontal extent, but at lower vertical resolution, than balloon-borne or lidar measurements. Using integrated ground-based, airborne, and spaceborne observations in a modeling framework affords insight into both the vertical and horizontal evolution of these ubiquitous ozone laminae.

  8. Experimental study of gaseous and particulate contaminants distribution in an aircraft cabin

    NASA Astrophysics Data System (ADS)

    Li, Fei; Liu, Junjie; Pei, Jingjing; Lin, Chao-Hsin; Chen, Qingyan

    2014-03-01

    The environment of the aircraft cabin greatly influences the comfort and health of passengers and crew members, and contaminant transport strongly affects disease spreading in the cabin environment. To obtain complex cabin contaminant distribution fields accurately and completely, which is also essential for providing solid and precise data for computational fluid dynamics (CFD) model validation, this paper investigates and improves a method for simultaneous measurement of particulate and gaseous contaminant fields. The experiment was conducted in a functional MD-82 aircraft. Sulfur hexafluoride (SF6) was used as the tracer gas, and Di-Ethyl-Hexyl-Sebacat (DEHS) was used as the particulate contaminant. Measurements were completed in a section of the economy-class cabin, both empty and occupied with heated manikins. The experimental method was investigated in terms of pollutant source placement, sampling points, and sampling schedule. Statistical analysis showed that an appropriately modified sampling grid was able to provide reasonable data. A small difference in source location can lead to a significant difference in cabin contaminant fields. The relationship between gaseous and particulate pollutant transport was also examined through tracking-behavior analysis.

  9. Modeling integrated water user decisions in intermittent supply systems

    NASA Astrophysics Data System (ADS)

    Rosenberg, David E.; Tarawneh, Tarek; Abdel-Khaleq, Rania; Lund, Jay R.

    2007-07-01

    We apply systems analysis to estimate household water use in an intermittent supply system considering numerous interdependent water user behaviors. Some 39 household actions include conservation; improving local storage or water quality; and accessing sources having variable costs, availabilities, reliabilities, and qualities. A stochastic optimization program with recourse decisions identifies the infrastructure investments and short-term coping actions a customer can adopt to cost-effectively respond to a probability distribution of piped water availability. Monte Carlo simulations show effects for a population of customers. Model calibration reproduces the distribution of billed residential water use in Amman, Jordan. Parametric analyses suggest economic and demand responses to increased availability and alternative pricing. It also suggests potential market penetration for conservation actions, associated water savings, and subsidies to entice further adoption. We discuss new insights to size, target, and finance conservation.
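
    A stripped-down, two-stage version of such a stochastic program with recourse (one investment decision, one coping action; all prices, the demand, and the supply distribution are illustrative assumptions, not the Amman calibration) can be written as:

      import numpy as np

      rng = np.random.default_rng(5)
      piped = rng.choice([0.0, 2.0, 4.0, 6.0], size=1000,
                         p=[0.1, 0.3, 0.4, 0.2])  # weekly piped supply, m^3
      DEMAND = 5.0          # household demand, m^3/week
      TANK_COST = 20.0      # annualized cost per m^3 of storage capacity
      TANKER_PRICE = 4.0    # price per m^3 of vended/tanker water

      def expected_cost(tank_m3):
          # First stage: invest in storage. Recourse: buy tanker water to
          # cover whatever the storable piped supply cannot.
          usable = np.minimum(piped, tank_m3)
          shortfall = np.clip(DEMAND - usable, 0.0, None)
          return TANK_COST * tank_m3 + 52.0 * TANKER_PRICE * shortfall.mean()

      sizes = np.linspace(0.0, 8.0, 81)
      best = sizes[int(np.argmin([expected_cost(s) for s in sizes]))]
      print(f"cost-minimizing tank size = {best:.1f} m^3")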

  10. Inference of relativistic electron spectra from measurements of inverse Compton radiation

    NASA Astrophysics Data System (ADS)

    Craig, I. J. D.; Brown, J. C.

    1980-07-01

    The inference of relativistic electron spectra from spectral measurement of inverse Compton radiation is discussed for the case where the background photon spectrum is a Planck function. The problem is formulated in terms of an integral transform that relates the measured spectrum to the unknown electron distribution. A general inversion formula is used to provide a quantitative assessment of the information content of the spectral data. It is shown that the observations must generally be augmented by additional information if anything other than a rudimentary two or three parameter model of the source function is to be derived. It is also pointed out that since a similar equation governs the continuum spectra emitted by a distribution of black-body radiators, the analysis is relevant to the problem of stellar population synthesis from galactic spectra.
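
    Schematically (our notation, not the paper's), the forward problem is a Fredholm integral equation of the first kind,

        g(\epsilon) = \int_{1}^{\infty} K(\epsilon, \gamma)\, f(\gamma)\, d\gamma,

    where g(\epsilon) is the measured inverse-Compton photon spectrum, f(\gamma) the electron distribution over Lorentz factor \gamma, and K a kernel fixed by Compton scattering off the Planck photon field; the ill-conditioning of inverting this relation is what limits the recoverable detail in f.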

  11. On the dynamic toroidal multipoles from localized electric current distributions.

    PubMed

    Fernandez-Corbaton, Ivan; Nanz, Stefan; Rockstuhl, Carsten

    2017-08-08

    We analyze the dynamic toroidal multipoles and prove that they do not have an independent physical meaning with respect to their interaction with electromagnetic waves. We analytically show how the split into electric and toroidal parts causes the appearance of non-radiative components in each of the two parts. These non-radiative components, which cancel each other when both parts are summed, preclude the separate determination of each part by means of measurements of the radiation from the source or of its coupling to external electromagnetic waves. In other words, there is no toroidal radiation or independent toroidal electromagnetic coupling. The formal meaning of the toroidal multipoles is clear in our derivations. They are the higher order terms of an expansion of the multipolar coefficients of electric parity with respect to the electromagnetic size of the source.

  12. Water on Mars: Inventory, distribution, and possible sources of polar ice

    NASA Technical Reports Server (NTRS)

    Clifford, S. M.

    1992-01-01

    Theoretical considerations and various lines of morphologic evidence suggest that, in addition to the normal seasonal and climatic exchange of H2O that occurs between the Martian polar caps, atmosphere, and mid- to high-latitude regolith, large volumes of water have been introduced into the planet's long-term hydrologic cycle by the sublimation of equatorial ground ice, impacts, catastrophic flooding, and volcanism. Under the climatic conditions thought to have prevailed on Mars throughout the past 3 to 4 b.y., much of this water is expected to have been cold-trapped at the poles. The amount of polar ice contributed by each of the planet's potential crustal sources is discussed and estimated. The final analysis suggests that only 5 to 15 percent of this potential inventory is now in residence at the poles.

  13. Bayesian inverse modeling and source location of an unintended 131I release in Europe in the fall of 2011

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas

    2017-10-01

    In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by IAEA and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.
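
    The grid-cell scan can be caricatured as a set of per-cell regularized inversions scored against the data; in the sketch below, toy SRS matrices and a nonnegative least-squares residual stand in for the real dispersion-model SRS fields, the LS-APC posterior, and the Bayesian model selection:

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(11)
      n_obs, n_times, n_cells = 60, 24, 50
      # srs[c]: toy SRS matrix of candidate cell c, mapping its emission
      # time series (Bq/h) to the n_obs concentration measurements
      srs = rng.random((n_cells, n_obs, n_times)) * 1e-3
      true_x = np.zeros(n_times)
      true_x[6:12] = 1e9                      # hypothetical emission pulse
      y = srs[17] @ true_x + rng.normal(0.0, 1e4, n_obs)  # synthetic data

      def fit_cell(c):
          # Nonnegative least-squares source term for cell c; the residual
          # norm stands in for the model-evidence score used in the paper.
          x, rnorm = nnls(srs[c], y)
          return rnorm, x

      residuals = [fit_cell(c)[0] for c in range(n_cells)]
      best_cell = int(np.argmin(residuals))   # should recover cell 17
      emissions = fit_cell(best_cell)[1]      # reconstructed time series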

  14. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling the particle transport, calculating the scintillation photons induced by charged particles, simulating the scintillation photon transport, and applying the light resolution obtained from experiment. The developed code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero-crossing method. As a case study, a 241Am-9Be source was considered, and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from simulation and experiment.
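
    A toy version of the zero-crossing discrimination idea (the pulse shapes, shaping filter, and time constants are illustrative assumptions; real systems use dedicated analog or digital bipolar shaping):

      import numpy as np

      def zero_crossing_time(pulse, dt=1.0):
          # Bipolar-shape the pulse (smooth, then differentiate); the time at
          # which the shaped signal crosses zero after its positive lobe
          # moves later as the slow scintillation component grows.
          smooth = np.convolve(pulse, np.ones(8) / 8.0, mode="same")
          bipolar = np.gradient(smooth, dt)
          peak = int(np.argmax(bipolar))
          below = np.where(bipolar[peak:] < 0.0)[0]
          return (peak + below[0]) * dt if below.size else np.nan

      t = np.arange(0.0, 200.0, 1.0)                  # ns
      rise = 1.0 - np.exp(-t / 2.0)
      gamma = rise * (np.exp(-t / 5.0) + 0.05 * np.exp(-t / 30.0))
      neutron = rise * (np.exp(-t / 5.0) + 0.30 * np.exp(-t / 30.0))
      print(zero_crossing_time(gamma), zero_crossing_time(neutron))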

  15. Term Coverage of Dietary Supplements Ingredients in Product Labels.

    PubMed

    Wang, Yefeng; Adam, Terrence J; Zhang, Rui

    2016-01-01

    As the clinical application and consumption of dietary supplements have grown, their side effects and possible interactions with prescribed medications have become a serious issue. Extraction of dietary supplement information is a critical need to support dietary supplement research. However, no terminology currently exists for dietary supplements, posing a barrier to informatics research in this field. Terms related to dietary supplement ingredients need to be collected and normalized before a terminology can be established that facilitates convenient searching of safety information and helps control possible adverse effects of dietary supplements. In this study, the Dietary Supplement Label Database (DSLD) was chosen as the data source from which ingredient information was extracted and normalized. Distributions based on the product type and the ingredient type of the dietary supplements were analyzed. The ingredient terms were then mapped to existing terminologies, including UMLS, RxNorm, and NDF-RT, using MetaMap and RxMix. A large gap between existing terminologies and the ingredient terms was found: only 14.67%, 19.65%, and 12.88% of ingredient terms were covered by UMLS, RxNorm, and NDF-RT, respectively.

  16. Defense Energy Support Center Fact Book, Fiscal Year 1999, Twenty-Second Edition

    DTIC Science & Technology

    1999-01-01

    [Chart and table residue from the Facilities and Distribution Management Commodity Business Unit section: OCONUS facility counts and volumes (COCO: 10 facilities, 8,717,850; GOCO: 7 facilities, 1,518,905), DLA-managed storage, FY 95-FY 99.]

  17. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. The challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being transported simultaneously, and (b) estimating source grain sizes and supply rates, both at network scales. Here, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km²) of the Mekong. To do so, we apply the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes at the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could in the future be coupled to more detailed models of hillslope processes to derive integrated models of hillslope production and fluvial transport, which would be particularly useful for identifying sediment provenance in poorly monitored river basins.
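
    The inverse Monte Carlo loop can be sketched as follows (the network topology, capacity law, acceptance tolerance, and sediment record value are illustrative assumptions, not CASCADE's actual hydraulics):

      import numpy as np

      rng = np.random.default_rng(2)
      n_sources, n_reaches = 20, 200
      # Toy mapping of each network reach to the source it conveys
      # (every source keeps at least one reach)
      downstream = np.concatenate([np.arange(n_sources),
                                   rng.integers(0, n_sources,
                                                n_reaches - n_sources)])

      def capacity(n, d50):
          # Toy transport capacity (kt/yr) of n downstream reaches for grain
          # size d50 (mm); in CASCADE this comes from modelled hydrology
          # and remotely sensed morphology.
          return 50.0 / d50 * (1.0 + 0.1 * rng.random(n))

      record = 420.0            # hypothetical basin sediment flux, kt/yr
      accepted = []
      for _ in range(7500):     # inverse Monte Carlo over source properties
          d50 = rng.uniform(0.1, 50.0, n_sources)
          # Supply limited by the minimum downstream capacity for that size
          supply = np.array([capacity((downstream == s).sum(), d50[s]).min()
                             for s in range(n_sources)])
          if abs(supply.sum() - record) / record < 0.05:
              accepted.append((d50, supply))   # consistent with the record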

  18. Point and Compact Hα Sources in the Interior of M33

    NASA Astrophysics Data System (ADS)

    Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.

    2017-12-01

    A variety of interesting objects, such as Wolf-Rayet stars, tight OB associations, planetary nebulae, and X-ray binaries, can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5′ × 6.5′ of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10⁻¹⁵ erg cm⁻² s⁻¹. We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a spectral energy distribution (SED). Using the SED, flux values, Hα flux value, and compactness, we classified 67 of these sources.

  19. Radiological analysis of plutonium glass batches with natural/enriched boron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainisch, R.

    2000-06-22

    The disposition of surplus plutonium inventories by the US Department of Energy (DOE) includes the immobilization of certain plutonium materials in a borosilicate glass matrix, also referred to as vitrification. This paper addresses source terms of plutonium masses immobilized in a borosilicate glass matrix where the glass components include both natural boron and enriched boron. The calculated source terms pertain to neutron and gamma source strength (particles per second) and source spectrum changes. The source terms corresponding to natural boron and enriched boron are compared to determine the benefit, a decrease in radiation source terms, of using enriched boron. The analysis of plutonium glass source terms shows that a large component of the neutron source term is due to (α, n) reactions. The americium-241 and plutonium present in the glass emit alpha particles (α), which interact with low-Z nuclides such as B-11, B-10, and O-17 in the glass to produce neutrons; the low-Z nuclides are referred to as target particles. The reference glass contains 9.4 wt% B2O3. Boron-11, with a natural abundance of over 80 percent, was found to strongly support the (α, n) reactions in the glass matrix. The (α, n) reaction rates for B-10 are lower than for B-11, and the analysis shows that the plutonium glass neutron source terms can be reduced by artificially enriching natural boron in B-10. The natural abundance of B-10 is 19.9 percent, and boron enriched to 96 wt% B-10 or above can be obtained commercially. Since lower source terms imply lower dose rates to radiation workers handling the plutonium glass materials, it is important to know the achievable decrease in source terms as a result of boron enrichment. Plutonium materials are normally handled in glove boxes with shielded glass windows, and the work entails both extremity and whole-body exposures. Lowering the source terms of the plutonium batches will make the handling of these materials less difficult and will reduce radiation exposure to operating workers.

  20. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in assigning a location to the equivalent dipole of a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt, and the sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem is to find the single equivalent point dipole that best reproduces the electrode potentials due to the distributed source, implemented by minimising the χ² per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes, and that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
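
    A minimal numerical version of this experiment (an unbounded homogeneous medium rather than the bounded sphere of the study, with illustrative belt geometry, electrode layout, and moment scale) fits a single equivalent dipole to the summed belt potentials by least squares:

      import numpy as np
      from scipy.optimize import minimize

      def dipole_potential(r_obs, r_dip, p):
          # Point-dipole potential in an unbounded homogeneous medium
          # (conductivity constant folded into the units)
          d = r_obs - r_dip
          return (d @ p) / np.linalg.norm(d) ** 3

      rng = np.random.default_rng(4)
      electrodes = rng.normal(size=(32, 3))
      electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)  # r = 1

      # Distributed source: N dipoles along a belt of radius 0.3 at z = 0.2,
      # moments directed along the (radial) propagation direction
      theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
      belt = np.c_[0.3 * np.cos(theta), 0.3 * np.sin(theta),
                   np.full_like(theta, 0.2)]
      moments = 0.01 * np.c_[np.cos(theta), np.sin(theta),
                             np.zeros_like(theta)]
      v_meas = np.array([sum(dipole_potential(e, r, p)
                             for r, p in zip(belt, moments))
                         for e in electrodes])

      def misfit(params):
          # Single equivalent dipole: 3 position + 3 moment parameters
          r_dip, p = params[:3], params[3:]
          v = np.array([dipole_potential(e, r_dip, p) for e in electrodes])
          return np.sum((v - v_meas) ** 2)

      fit = minimize(misfit, x0=np.array([0, 0, 0.1, 0, 0, 0.01]),
                     method="Nelder-Mead")
      print("equivalent dipole position:", fit.x[:3])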

  1. Climatology of Aerosol Optical Properties in Southern Africa

    NASA Technical Reports Server (NTRS)

    Queface, Antonio J.; Piketh, Stuart J.; Eck, Thomas F.; Tsay, Si-Chee

    2011-01-01

    A thorough, regionally dependent understanding of the optical properties of aerosols and their spatial and temporal distribution is required before we can accurately evaluate aerosol effects in the climate system. Long-term measurements of aerosol optical depth and Ångström exponent, together with retrieved single-scattering albedo and size distribution, were analyzed and compiled into an aerosol optical properties climatology for southern Africa. Aerosol parameters have been monitored by the AERONET program in southern Africa since the middle of the last decade. This valuable information provided an opportunity to understand how aerosols of different types influence the regional radiation budget. Two long-term sites, Mongu in Zambia and Skukuza in South Africa, formed the core sources of data in this study. Results show that the seasonal variation of aerosol optical thickness at 500 nm in southern Africa is characterized by low multi-month mean values (0.11 to 0.17) from December to May, medium values (0.20 to 0.27) between June and August, and high to very high values (0.30 to 0.46) during September to November. The spatial distribution of aerosol loadings shows higher magnitudes in the north than in the south during the biomass burning season, and the opposite outside it. In the present aerosol data, no discernible long-term trends in aerosol concentration are observable in this region. This study also reveals that biomass burning aerosols contribute the bulk of the aerosol loading in August-October; if biomass burning could be controlled, southern Africa would therefore experience a significant reduction in total atmospheric aerosol loading. In addition, the aerosol volume size distribution is characterized by low concentrations in the non-biomass-burning period, with well-balanced particle size contributions from both coarse and fine modes, whereas high concentrations, with a significant dominance of fine-mode particles, are characteristic of the biomass burning period.

  2. Assessing modelled spatial distributions of ice water path using satellite data

    NASA Astrophysics Data System (ADS)

    Eliasson, S.; Buehler, S. A.; Milz, M.; Eriksson, P.; John, V. O.

    2010-05-01

    The climate models used in the IPCC AR4 show large differences in monthly mean cloud ice. The most valuable source of information that can be used to potentially constrain the models is global satellite data. For this, the data sets must be long enough to capture the inter-annual variability of Ice Water Path (IWP). A clear distinction between ice categories in satellite retrievals, as desired from a model point of view, is currently impossible. However, long-term satellite data sets may still be used to indicate the climatology of the IWP spatial distribution. We evaluated satellite data sets from CloudSat, PATMOS-x, ISCCP, MODIS and MSPPS in terms of monthly mean IWP, to determine which data sets can be used to evaluate the climate models. IWP data from the CloudSat cloud profiling radar provide the most advanced data set on clouds. As the CloudSat record is too short to evaluate the model data directly, it was mainly used here to evaluate IWP from the other satellite data sets. ISCCP and MSPPS were shown to have comparatively low IWP values: ISCCP shows particularly low values in the tropics, while MSPPS has particularly low values outside the tropics. MODIS and PATMOS-x were in closest agreement with CloudSat in terms of magnitude and spatial distribution, with MODIS being the better of the two. As PATMOS-x extends over more than 25 years and is in fairly close agreement with CloudSat, it was chosen as the reference data set for the model evaluation. In general there are large discrepancies between the individual climate models, and all of the models show problems in reproducing the observed spatial distribution of cloud ice. Comparisons consistently showed that ECHAM-5 is the GCM from IPCC AR4 closest to satellite observations.

  3. Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis

    2015-01-01

    The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, of the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitude ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and arrest, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.
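
    The kind of magnitude statistics such synthetic catalogs aim to reproduce can be illustrated with a toy sampler (not the authors' physics-based simulator): a truncated Gutenberg-Richter background plus a small, hypothetical "characteristic" component that produces the departure from Gutenberg-Richter at high magnitudes. All rates and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_gr(n, b=1.0, m_min=4.0, m_max=7.0):
    """Sample magnitudes from a truncated Gutenberg-Richter distribution (inverse CDF)."""
    u = rng.random(n)
    beta = b * np.log(10)
    return m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta

# Hypothetical mixture: mostly G-R background plus a small 'characteristic' bump
# near M 6.5, mimicking a departure from G-R in the higher-magnitude range.
n = 500_000
mags = sample_gr(n)
char = rng.random(n) < 0.002                  # small fraction of characteristic events
mags[char] = rng.normal(6.5, 0.2, char.sum())

counts, edges = np.histogram(mags, bins=np.arange(4.0, 7.2, 0.1))
cum = counts[::-1].cumsum()[::-1]             # cumulative counts N(>= M)
for m, c in zip(edges[:-1][::5], cum[::5]):
    print(f"M >= {m:.1f}: {c}")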

  4. Winter Distribution of On-road NO2 Concentration in Hong Kong

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Chan, K. L.; Boll, J.; Schütt, A. M. N.; Lipkowitsch, I.; Wenig, M.

    2017-12-01

    In this study, we investigated the spatial distribution of on-road NO2 concentration using Cavity-Enhanced Differential Optical Absorption Spectroscopy (DOAS). We performed two measurement campaigns, in winter 2010 and 2017. Air pollution is a severe problem for many big cities, especially in Asia. Traffic emission is the primary source of urban pollutants. As Hong Kong is one of the most densely populated cities in the world, many inhabitants are exposed to accumulated pollutants in street canyons. Our mobile measurements were performed for a week in December 2010 and in March 2017. Additionally, long term air pollution data measured by a long-path DOAS (LP-DOAS) and the Environment Protection Department (EPD) air quality monitoring network were used to investigate the long term trend and seasonal variations of atmospheric NO2 in Hong Kong. The experimental setup and preliminary results of the mobile measurements are presented. The measurements were performed along a fixed route which covers most of the urban area. We assembled an NO2 concentration map 2 to 3 times per day in order to cover both morning and evening rush hours. In order to construct a consistent map, we use coinciding LP-DOAS NO2 data to correct for the diurnal cycle. Furthermore, the spatial and temporal distribution of NO2 changes with the day of the week. Traffic load is highly dependent on human activities, which typically fall into a 7-day cycle. Therefore, we have analyzed the weekly pattern of on-road NO2 distribution to see the differences between anthropogenic emissions during weekdays and weekends.

  5. Portrayal of caesarean section in Brazilian women’s magazines: 20 year review

    PubMed Central

    Daher, Silvia; Betrán, Ana Pilar; Widmer, Mariana; Montilla, Pilar; Souza, Joao Paulo; Merialdi, Mario

    2011-01-01

    Objective To assess the quality and comprehensiveness of the information on caesarean section provided in Brazilian women’s magazines. Design Review of articles published during 1988-2008 in top selling women’s magazines. Setting Brazil, one of the countries with the highest caesarean section rates in the world. Data sources Women’s magazines with the largest distribution during the study period, identified through the official national media indexing organisations. Selection criteria Articles with objective scientific information or advice, comments, opinions, or the experience of ordinary women or celebrities on delivery by caesarean section. Main outcome measures Sources of information mentioned by the author of the article, the accuracy and completeness of data presented on caesarean section, and alleged reasons why women would prefer to deliver through caesarean section. Results 118 articles were included. The main cited sources of information were health professionals (78% (n=92) of the articles). 71% (n=84) of the articles reported at least one benefit of caesarean section, and 82% (n=97) reported at least one short term maternal risk of caesarean section. The benefits most often attributed to delivery by caesarean section were reduction of pain and convenience for family or health professionals. The most frequently reported short term maternal risks of caesarean section were an increased recovery time and that it is a less natural way of giving birth. Only one third of the articles mentioned any long term maternal risks or perinatal complications associated with caesarean section. Fear of pain was the main reported reason why women would prefer to deliver by caesarean section. Conclusions Most of the articles published in Brazilian women’s magazines do not use optimal sources of information. The portrayal of caesarean section is mostly balanced, not explicitly in favour of one or another route of delivery, but it is incomplete and may lead women to underestimate the maternal/perinatal risks associated with this route of delivery. PMID:21266421

  6. History of dose specification in Brachytherapy: From Threshold Erythema Dose to Computational Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Jeffrey F.

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980- ) arose in response to the increasing utilization of low energy K-capture radionuclides such as I-125 and Pd-103, for which classical approaches could not be expected to estimate accurate doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques, along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.

  7. History of dose specification in Brachytherapy: From Threshold Erythema Dose to Computational Dosimetry

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey F.

    2006-09-01

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980- ) arose in response to the increasing utilization of low energy K-capture radionuclides such as I-125 and Pd-103, for which classical approaches could not be expected to estimate accurate doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques, along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.

  8. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  9. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  10. Fission Product Appearance Rate Coefficients in Design Basis Source Term Determinations - Past and Present

    NASA Astrophysics Data System (ADS)

    Perez, Pedro B.; Hamawi, John N.

    2017-09-01

    Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory assumption, as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that is over 50 years old. No doubt the source terms are conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading, for example, to over-designed shielding for normal operations. Design basis source term methodologies for normal operations had not advanced until EPRI published an updated ANSI/ANS 18.1 source term basis document in 2015. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results, which demonstrate the conservatism in nuclear power plant shielding design.

  11. 137Cs activities and 135Cs/137Cs isotopic ratios from soils at Idaho National Laboratory: a case study for contaminant source attribution in the vicinity of nuclear facilities.

    PubMed

    Snow, Mathew S; Snyder, Darin C; Clark, Sue B; Kelley, Morgan; Delmore, James E

    2015-03-03

    Radiometric and mass spectrometric analyses of Cs contamination in the environment can reveal the location of Cs emission sources, release mechanisms, modes of transport, prediction of future contamination migration, and attribution of contamination to specific generator(s) and/or process(es). The Subsurface Disposal Area (SDA) at Idaho National Laboratory (INL) represents a complicated case study for demonstrating the current capabilities and limitations to environmental Cs analyses. (137)Cs distribution patterns, (135)Cs/(137)Cs isotope ratios, known Cs chemistry at this site, and historical records enable narrowing the list of possible emission sources and release events to a single source and event, with the SDA identified as the emission source and flood transport of material from within Pit 9 and Trench 48 as the primary release event. These data combined allow refining the possible number of waste generators from dozens to a single generator, with INL on-site research and reactor programs identified as the most likely waste generator. A discussion on the ultimate limitations to the information that (135)Cs/(137)Cs ratios alone can provide is presented and includes (1) uncertainties in the exact date of the fission event and (2) possibility of mixing between different Cs source terms (including nuclear weapons fallout and a source of interest).

  12. 137 Cs Activities and 135 Cs/ 137 Cs Isotopic Ratios from Soils at Idaho National Laboratory: A Case Study for Contaminant Source Attribution in the Vicinity of Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snow, Mathew S.; Snyder, Darin C.; Clark, Sue B.

    2015-03-03

    Radiometric and mass spectrometric analyses of Cs contamination in the environment can reveal the location of Cs emission sources, release mechanisms, modes of transport, prediction of future contamination migration, and attribution of contamination to specific generator(s) and/or process(es). The Subsurface Disposal Area (SDA) at Idaho National Laboratory (INL) represents a complicated case study for demonstrating the current capabilities and limitations to environmental Cs analyses. 137Cs distribution patterns, 135Cs/137Cs isotope ratios, known Cs chemistry at this site, and historical records enable narrowing the list of possible emission sources and release events to a single source and event, with the SDA identified as the emission source and flood transport of material from within Pit 9 and Trench 48 as the primary release event. These data combined allow refining the possible number of waste generators from dozens to a single generator, with INL on-site research and reactor programs identified as the most likely waste generator. A discussion on the ultimate limitations to the information that 135Cs/137Cs ratios alone can provide is presented and includes (1) uncertainties in the exact date of the fission event and (2) possibility of mixing between different Cs source terms (including nuclear weapons fallout and a source of interest).

  13. Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In Bavaria, operational flood forecasting has been established since the disastrous flood of 1999. Nowadays, forecasts based on rainfall information from about 700 raingauges and 600 rivergauges are calculated and issued for nearly 100 rivergauges. With the added experience of the 2002 and 2005 floods, awareness grew that the standard deterministic forecast, neglecting the uncertainty associated with each forecast, is misleading, creating a false feeling of unambiguousness. As a consequence, a system to identify, quantify and communicate the sources and magnitude of forecast uncertainty has been developed, which is presented in part I of this study. In this system, the use of ensemble meteorological forecasts plays a key role, which is presented in part II. In developing the system, several constraints stemming from the range of hydrological regimes and operational requirements had to be met. Firstly, operational time constraints obviate varying all components of the modeling chain, as would be done in a full Monte Carlo simulation. Therefore, an approach was chosen where only the most relevant sources of uncertainty were dynamically considered, while the others were jointly accounted for by static error distributions from offline analysis. Secondly, the dominant sources of uncertainty vary over the wide range of forecasted catchments: in alpine headwater catchments, typically a few hundred square kilometers in size, rainfall forecast uncertainty is the key factor for forecast uncertainty, with a magnitude dynamically changing with the prevailing predictability of the atmosphere. In lowland catchments encompassing several thousands of square kilometers, forecast uncertainty in the desired range (usually up to two days) is mainly dependent on upstream gauge observation quality, routing and unpredictable human impact such as reservoir operation. The determination of forecast uncertainty comprised the following steps: a) From comparison of gauge observations and several years of archived forecasts, overall empirical error distributions, termed 'overall error', were derived for each gauge for a range of relevant forecast lead times. b) The error distributions vary strongly with the hydrometeorological situation; therefore a subdivision into the hydrological cases 'low flow', 'rising flood', 'flood' and 'flood recession' was introduced. c) For the sake of numerical compression, theoretical distributions were fitted to the empirical distributions using the method of moments. Here, the normal distribution was generally best suited. d) Further data compression was achieved by representing the distribution parameters as a function (second-order polynomial) of lead time. In general, the 'overall error' obtained from the above procedure is most useful in regions where large human impact occurs and where the influence of the meteorological forecast is limited. In upstream regions, however, forecast uncertainty is strongly dependent on the current predictability of the atmosphere, which is contained in the spread of an ensemble forecast. Including this dynamically in the hydrological forecast uncertainty estimation requires prior elimination of the contribution of the weather forecast to the 'overall error'. This was achieved by calculating long series of hydrometeorological forecast tests, where rainfall observations were used instead of forecasts.
The resulting error distribution is termed 'model error' and can be applied to hydrological ensemble forecasts, where ensemble rainfall forecasts are used as forcing. The concept will be illustrated by examples (good and bad ones) covering a wide range of catchment sizes, hydrometeorological regimes and qualities of hydrological model calibration. The methodology to combine the static and dynamic shares of uncertainty will be presented in part II of this study.
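
    Steps c) and d) above compress the empirical error distributions into a handful of coefficients. The sketch below illustrates that compression on synthetic data: normal distributions are fitted per lead time by the method of moments, and their parameters are then represented as second-order polynomials of lead time. The gauge data, noise model and lead times are hypothetical stand-ins, not the Bavarian archive.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical archive: forecast errors (m) for one gauge, per lead time (h),
# already split into one hydrological case (e.g. 'flood').
lead_times = np.arange(6, 54, 6)
errors = {lt: rng.normal(0.01 * lt, 0.02 * np.sqrt(lt), 800) for lt in lead_times}

# Step c) Method of moments: fit a normal distribution per lead time.
mu = np.array([errors[lt].mean() for lt in lead_times])
sigma = np.array([errors[lt].std(ddof=1) for lt in lead_times])

# Step d) Compress further: distribution parameters as second-order polynomials of lead time.
mu_poly = np.polyfit(lead_times, mu, 2)
sigma_poly = np.polyfit(lead_times, sigma, 2)

def error_quantiles(lead_time, probs=(0.05, 0.5, 0.95)):
    """Reconstruct forecast-error quantiles for any lead time from the fitted polynomials."""
    m = np.polyval(mu_poly, lead_time)
    s = np.polyval(sigma_poly, lead_time)
    return {p: norm.ppf(p, loc=m, scale=s) for p in probs}

print(error_quantiles(24))
```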

  14. Shifts in Summertime Precipitation Accumulation Distributions over the US

    NASA Astrophysics Data System (ADS)

    Martinez-Villalobos, C.; Neelin, J. D.

    2016-12-01

    Precipitation accumulation, i.e., the amount of precipitation integrated over the course of an event, is a variable with both important physical and societal implications. Previous observational studies show that accumulation distributions have a characteristic shape, with an approximately power-law decrease at first, followed by a sharp decrease at a characteristic large-event cutoff scale. This cutoff scale is important as it limits the biggest accumulation events. Stochastic prototypes show that the resulting distributions, and importantly the large-event cutoff scale, can be understood as a result of the interplay between moisture loss by precipitation and changes in moisture sinks/sources due to fluctuations in moisture divergence over the course of a precipitation event. The strength of this fluctuating moisture sink/source term is expected to increase under global warming, with both theory and climate model simulations predicting a concomitant increase in the large-event cutoff scale. This increase has important consequences, as it implies an approximately exponential increase in the frequency of the largest accumulation events. Given its importance, in this study we characterize and track changes in the distribution of precipitation event accumulations over the contiguous US. Accumulation distributions are calculated using hourly precipitation data from 1700 stations, covering the 1974-2013 period over May-October. The resulting distributions largely follow the aforementioned shape, with individual cutoff scales depending on the local climate. An increase in the large-event cutoff scale over this period is observed over several regions of the US, most notably the eastern third. In agreement with the increase in the cutoff, almost exponential increases in the highest accumulation percentiles occur over these regions, with increases in the 99.9th percentile of 70% in the Northeast, for example. The relationships to previously noted changes in daily precipitation and to changes in the moisture budget over this period are examined.
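
    A minimal stochastic prototype of the kind referred to above can be simulated directly: precipitation removes moisture at a fixed rate while fluctuating sinks/sources act as a random walk, and the event ends when moisture is depleted. The sketch below, with entirely illustrative parameters, produces accumulation distributions with a power-law range and a cutoff that grows with the fluctuation strength.

```python
import numpy as np

rng = np.random.default_rng(7)

def event_accumulation(rate=1.0, dt=0.01, sigma=1.5, w0=1.0):
    """One precipitation event: moisture w evolves as a drifted random walk
    (fluctuating sinks/sources) while raining at a fixed rate; the event ends
    when w crosses zero. Returns the accumulated precipitation."""
    w, acc = w0, 0.0
    for _ in range(10_000_000):           # safety cap; absorption is almost sure
        if w <= 0:
            return acc
        acc += rate * dt                  # moisture loss by precipitation
        w += -rate * dt + sigma * np.sqrt(dt) * rng.normal()
    return acc

accs = np.array([event_accumulation() for _ in range(5000)])

# The resulting distribution is approximately a power law with an exponential
# cutoff; the cutoff scale grows with sigma, the fluctuation strength ('warming' knob).
hist, edges = np.histogram(accs, bins=np.logspace(-2, 3, 30), density=True)
for lo, h in zip(edges[:-1], hist):
    if h > 0:
        print(f"s = {lo:8.3f}  p(s) = {h:.3e}")
```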

  15. Shifts in Summertime Precipitation Accumulation Distributions over the US

    NASA Astrophysics Data System (ADS)

    Martinez-Villalobos, C.; Neelin, J. D.

    2017-12-01

    Precipitation accumulation, i.e., the amount of precipitation integrated over the course of an event, is a variable with both important physical and societal implications. Previous observational studies show that accumulation distributions have a characteristic shape, with an approximately power-law decrease at first, followed by a sharp decrease at a characteristic large-event cutoff scale. This cutoff scale is important as it limits the biggest accumulation events. Stochastic prototypes show that the resulting distributions, and importantly the large-event cutoff scale, can be understood as a result of the interplay between moisture loss by precipitation and changes in moisture sinks/sources due to fluctuations in moisture divergence over the course of a precipitation event. The strength of this fluctuating moisture sink/source term is expected to increase under global warming, with both theory and climate model simulations predicting a concomitant increase in the large-event cutoff scale. This increase has important consequences, as it implies an approximately exponential increase in the frequency of the largest accumulation events. Given its importance, in this study we characterize and track changes in the distribution of precipitation event accumulations over the contiguous US. Accumulation distributions are calculated using hourly precipitation data from 1700 stations, covering the 1974-2013 period over May-October. The resulting distributions largely follow the aforementioned shape, with individual cutoff scales depending on the local climate. An increase in the large-event cutoff scale over this period is observed over several regions of the US, most notably the eastern third. In agreement with the increase in the cutoff, almost exponential increases in the highest accumulation percentiles occur over these regions, with increases in the 99.9th percentile of 70% in the Northeast, for example. The relationships to previously noted changes in daily precipitation and to changes in the moisture budget over this period are examined.

  16. Dependence of Microlensing on Source Size and Lens Mass

    NASA Astrophysics Data System (ADS)

    Congdon, A. B.; Keeton, C. R.

    2007-11-01

    In gravitationally lensed quasars, the magnification of an image depends on the configuration of stars in the lensing galaxy. We study the statistics of the magnification distribution for random star fields. The width of the distribution characterizes the amount by which the observed magnification is likely to differ from models in which the mass is smoothly distributed. We use numerical simulations to explore how the width of the magnification distribution depends on the mass function of stars and on the size of the source quasar. We then propose a semi-analytic model to describe the distribution width for different source sizes and stellar mass functions.
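
    One standard numerical approach to such simulations is inverse ray shooting: rays are propagated from the image plane through a random star field, ray counts per source-plane pixel give the magnification map, and smoothing the map over a finite source size narrows the magnification distribution. The sketch below is a low-resolution illustration with equal-mass stars and arbitrary field parameters, not necessarily the authors' setup.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)

# Random star field: equal-mass point lenses (Einstein radius = 1 in these units).
n_stars = 20
stars = rng.uniform(-10, 10, size=(n_stars, 2))

# Shoot a dense grid of rays from the image plane to the source plane.
n = 1200
x = np.linspace(-8, 8, n)
X, Y = np.meshgrid(x, x)
theta = np.stack([X.ravel(), Y.ravel()], axis=1)

beta = theta.copy()
for s in stars:
    d = theta - s
    beta -= d / (d**2).sum(axis=1, keepdims=True)   # deflection by a unit point mass

# Bin ray positions in the source plane: counts per pixel trace the magnification.
H, xe, ye = np.histogram2d(beta[:, 0], beta[:, 1], bins=200, range=[[-4, 4], [-4, 4]])
rays_per_pixel_unlensed = len(theta) * (xe[1] - xe[0]) * (ye[1] - ye[0]) / (16 * 16)
mag_map = H / rays_per_pixel_unlensed

# A finite source corresponds to smoothing the map; larger sources narrow
# the magnification distribution, as studied in the abstract above.
for src_pixels in (1, 5, 15):
    smoothed = uniform_filter(mag_map, size=src_pixels)
    print(f"source size {src_pixels:2d} px: distribution width (std) = {smoothed.std():.3f}")
```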

  17. Multiobjective optimization of cluster-scale urban water systems investigating alternative water sources and level of decentralization

    NASA Astrophysics Data System (ADS)

    Newman, J. P.; Dandy, G. C.; Maier, H. R.

    2014-10-01

    In many regions, conventional water supplies are unable to meet projected consumer demand. Consequently, interest has arisen in integrated urban water systems, which involve the reclamation or harvesting of alternative, localized water sources. However, this makes the planning and design of water infrastructure more difficult, as multiple objectives need to be considered, water sources need to be selected from a number of alternatives, and end uses of these sources need to be specified. In addition, the scale at which each treatment, collection, and distribution network should operate needs to be investigated. In order to deal with this complexity, a framework for planning and designing water infrastructure taking into account integrated urban water management principles is presented in this paper and applied to a rural greenfield development. Various options for water supply, and the scale at which they operate were investigated in order to determine the life-cycle trade-offs between water savings, cost, and GHG emissions as calculated from models calibrated using Australian data. The decision space includes the choice of water sources, storage tanks, treatment facilities, and pipes for water conveyance. For each water system analyzed, infrastructure components were sized using multiobjective genetic algorithms. The results indicate that local water sources are competitive in terms of cost and GHG emissions, and can reduce demand on the potable system by as much as 54%. Economies of scale in treatment dominated the diseconomies of scale in collection and distribution of water. Therefore, water systems that connect large clusters of households tend to be more cost efficient and have lower GHG emissions. In addition, water systems that recycle wastewater tended to perform better than systems that captured roof-runoff. Through these results, the framework was shown to be effective at identifying near optimal trade-offs between competing objectives, thereby enabling informed decisions to be made when planning water systems for greenfield developments.

  18. Assessment and application of clustering techniques to atmospheric particle number size distribution for the purpose of source apportionment

    NASA Astrophysics Data System (ADS)

    Salimi, F.; Ristovski, Z.; Mazaheri, M.; Laiman, R.; Crilley, L. R.; He, C.; Clifford, S.; Morawska, L.

    2014-06-01

    Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations, and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods that have been recently employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of the four clustering techniques were compared using the Dunn index and silhouette width validation values, and the K-means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were found within the data using the K-means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable to parameterise the PNSD data. These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
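
    The clustering step itself is straightforward to reproduce. The sketch below applies K-means to synthetic stand-in spectra (real PNSD data would replace the generated array), normalising each spectrum to unit total so that clusters reflect spectral shape rather than total number, and scans the number of clusters with the silhouette width, one of the two validation measures used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for daily-averaged PNSD data: rows are days, columns are
# log-spaced size bins (9-915 nm); real measurements would replace this array.
bins = np.logspace(np.log10(9), np.log10(915), 50)
days = 1000
modes = rng.choice([30, 70, 150], size=days)        # nucleation/Aitken/accumulation modes
pnsd = np.exp(-0.5 * ((np.log(bins) - np.log(modes[:, None])) / 0.5)**2)
pnsd *= rng.lognormal(0, 0.3, (days, 1))

# Normalise each spectrum so clustering responds to shape rather than magnitude.
pnsd_norm = pnsd / pnsd.sum(axis=1, keepdims=True)

# Choose the number of clusters by silhouette width, as in the validation step above.
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pnsd_norm)
    print(f"k={k}: silhouette width = {silhouette_score(pnsd_norm, labels):.3f}")
```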

  19. Assessment and application of clustering techniques to atmospheric particle number size distribution for the purpose of source apportionment

    NASA Astrophysics Data System (ADS)

    Salimi, F.; Ristovski, Z.; Mazaheri, M.; Laiman, R.; Crilley, L. R.; He, C.; Clifford, S.; Morawska, L.

    2014-11-01

    Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods that have been recently employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of four clustering techniques were compared using the Dunn index and Silhouette width validation values and the K means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were found within the data using the K means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source and the GAM was suitable to parameterise the PNSD data. These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.

  20. A multi-period distribution network design model under demand uncertainty

    NASA Astrophysics Data System (ADS)

    Tabrizi, Babak H.; Razmi, Jafar

    2013-05-01

    Supply chain management is an inseparable component of satisfying customers' requirements. This paper deals with the distribution network design (DND) problem, a critical issue in achieving supply chain accomplishments: a capable DND can guarantee the success of the entire network performance. However, on the one hand, many factors can cause fluctuations in the input data determining market behaviour, with respect to short-term planning. On the other hand, network performance may be threatened by changes that take place within practicing periods, with respect to long-term planning. Thus, in order to bring both kinds of changes under control, we consider a new multi-period, multi-commodity, multi-source DND problem in circumstances where the network encounters uncertain demands. Fuzzy logic is applied here as an efficient tool for controlling the risk arising from potential customers' demand. The defuzzifying framework allows practitioners and decision-makers to interact with the solution procedure continuously. The fuzzy model is then validated by a sensitivity analysis test, and a typical problem is solved in order to illustrate the implementation steps. Finally, the formulation is tested on several problems of different sizes to show its overall performance.

  1. General formulation of characteristic time for persistent chemicals in a multimedia environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.

    1999-02-01

    A simple yet representative method for determining the characteristic time a persistent organic pollutant remains in a multimedia environment is presented. The characteristic time is an important attribute for assessing long-term health and ecological impacts of a chemical. Calculating the characteristic time requires information on decay rates in multiple environmental media as well as the proportion of mass in each environmental medium. The authors explore the premise that using a steady-state distribution of the mass in the environment provides a means to calculate a representative estimate of the characteristic time while maintaining a simple formulation. Calculating the steady-state mass distribution incorporates the effect of advective transport and nonequilibrium effects resulting from the source terms. Using several chemicals, they calculate and compare the characteristic time in a representative multimedia environment for dynamic, steady-state, and equilibrium multimedia models, and also for a single medium model. They demonstrate that formulating the characteristic time based on the steady-state mass distribution in the environment closely approximates the dynamic characteristic time for a range of chemicals and thus can be used in decisions regarding chemical use in the environment.
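
    The steady-state formulation reduces to linear algebra: with first-order losses and inter-media transfers collected in a matrix A and a constant source vector s, the steady-state mass is m = -A^(-1) s, and the characteristic time is total mass divided by total throughput. The three-compartment example below uses entirely hypothetical rate constants for illustration.

```python
import numpy as np

# Minimal 3-compartment model (air, soil, water) with hypothetical rates (1/day).
# Off-diagonal entries move mass between media; diagonals collect all losses
# (degradation + advective export + transfers into the other media).
k_deg = np.array([0.05, 0.002, 0.01])      # degradation in air, soil, water
k_adv = np.array([0.10, 0.0, 0.02])        # advective export out of the region
T = np.array([[0.00, 0.01, 0.005],         # T[i, j]: transfer rate from medium j into i
              [0.03, 0.00, 0.000],
              [0.02, 0.00, 0.000]])

A = T - np.diag(k_deg + k_adv + T.sum(axis=0))   # dm/dt = A m + s
s = np.array([1.0, 0.0, 0.0])                    # source emits to air (kg/day)

m_ss = np.linalg.solve(-A, s)                    # steady-state mass distribution
tau = m_ss.sum() / s.sum()                       # characteristic time = mass / throughput
print("steady-state masses (kg):", m_ss)
print(f"characteristic time: {tau:.1f} days")
```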

  2. Global threat to agriculture from invasive species.

    PubMed

    Paini, Dean R; Sheppard, Andy W; Cook, David C; De Barro, Paul J; Worner, Susan P; Thomas, Matthew B

    2016-07-05

    Invasive species present significant threats to global agriculture, although how the magnitude and distribution of the threats vary between countries and regions remains unclear. Here, we present an analysis of almost 1,300 known invasive insect pests and pathogens, calculating the total potential cost of these species invading each of 124 countries of the world, as well as determining which countries present the greatest threat to the rest of the world given their trading partners and incumbent pool of invasive species. We find that countries vary in terms of potential threat from invasive species and also their role as potential sources, with apparently similar countries sometimes varying markedly depending on specifics of agricultural commodities and trade patterns. Overall, the biggest agricultural producers (China and the United States) could experience the greatest absolute cost from further species invasions. However, developing countries, in particular, Sub-Saharan African countries, appear most vulnerable in relative terms. Furthermore, China and the United States represent the greatest potential sources of invasive species for the rest of the world. The analysis reveals considerable scope for ongoing redistribution of known invasive pests and highlights the need for international cooperation to slow their spread.

  3. pyBadlands: A framework to simulate sediment transport, landscape dynamics and basin stratigraphic evolution through space and time

    PubMed Central

    2018-01-01

    Understanding Earth surface responses, in terms of sediment dynamics, to climatic variability and tectonic forcing is hindered by the limited ability of current models to simulate the long-term evolution of sediment transfer and associated morphological changes. This paper presents pyBadlands, an open-source python-based framework which computes over geological time (1) sediment transport from landmasses to coasts, (2) reworking of marine sediments by longshore currents and (3) development of coral reef systems. pyBadlands is cross-platform, distributed under the GPLv3 license and available on GitHub (http://github.com/badlands-model). Here, we describe the underlying physical assumptions behind the simulated processes and the main options already available in the numerical framework. Along with the source code, a list of hands-on examples is provided that illustrates the model capabilities. In addition, pre- and post-processing classes have been built and are accessible as a companion toolbox which comprises a series of workflows to efficiently build, quantify and explore simulation input and output files. While the framework has been primarily designed for research, its simplicity of use and portability makes it a great tool for teaching purposes. PMID:29649301

  4. Global threat to agriculture from invasive species

    PubMed Central

    Paini, Dean R.; Sheppard, Andy W.; Cook, David C.; De Barro, Paul J.; Worner, Susan P.; Thomas, Matthew B.

    2016-01-01

    Invasive species present significant threats to global agriculture, although how the magnitude and distribution of the threats vary between countries and regions remains unclear. Here, we present an analysis of almost 1,300 known invasive insect pests and pathogens, calculating the total potential cost of these species invading each of 124 countries of the world, as well as determining which countries present the greatest threat to the rest of the world given their trading partners and incumbent pool of invasive species. We find that countries vary in terms of potential threat from invasive species and also their role as potential sources, with apparently similar countries sometimes varying markedly depending on specifics of agricultural commodities and trade patterns. Overall, the biggest agricultural producers (China and the United States) could experience the greatest absolute cost from further species invasions. However, developing countries, in particular, Sub-Saharan African countries, appear most vulnerable in relative terms. Furthermore, China and the United States represent the greatest potential sources of invasive species for the rest of the world. The analysis reveals considerable scope for ongoing redistribution of known invasive pests and highlights the need for international cooperation to slow their spread. PMID:27325781

  5. Noniterative three-dimensional grid generation using parabolic partial differential equations

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1985-01-01

    A new algorithm for generating three-dimensional grids has been developed and implemented which numerically solves a parabolic partial differential equation (PDE). The solution procedure marches outward in two coordinate directions, and requires inversion of a scalar tridiagonal system in the third. Source terms have been introduced to control the spacing and angle of grid lines near the grid boundaries, and to control the outer boundary point distribution. The method has been found to generate grids about 100 times faster than comparable grids generated via solution of elliptic PDEs, and produces smooth grids for finite-difference flow calculations.
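
    The computational core of such a marching scheme is the inversion of a scalar tridiagonal system at each station, typically done with the Thomas algorithm. The sketch below shows one implicit marching step for a 1-D model problem with a source term that attracts grid lines toward a boundary; the coefficients and spacing-control source are simplified stand-ins for the actual three-dimensional formulation.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a sub-, b main, c super-diagonal, d right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# March one parabolic step: implicit in eta (tridiagonal), explicit marching in zeta.
n, d_eta, d_zeta, nu = 64, 1.0 / 63, 0.05, 1.0
grid = np.linspace(0.0, 1.0, n)                  # current grid line (1-D for brevity)
source = 0.3 * np.exp(-50 * (grid - 0.1)**2)     # source term clustering lines near a boundary

r = nu * d_zeta / d_eta**2
a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
a[0] = c[-1] = 0.0; b[0] = b[-1] = 1.0           # Dirichlet ends pin the boundaries
d = grid + d_zeta * source; d[0], d[-1] = grid[0], grid[-1]

next_line = thomas(a, b, c, d)
print(next_line[:5])
```

    Each marching step costs only one O(n) tridiagonal solve per line, which is the reason such parabolic generators run orders of magnitude faster than iterative elliptic solvers.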

  6. Three-dimensional calculations of rotor-airframe interaction in forward flight

    NASA Technical Reports Server (NTRS)

    Zori, Laith A. J.; Mathur, Sanjay R.; Rajagopalan, R. G.

    1992-01-01

    A method for analyzing the mutual aerodynamic interaction between a rotor and an airframe model has been developed. This technique models the rotor implicitly through the source terms of the momentum equations. A three-dimensional, incompressible, laminar, Navier-Stokes solver in cylindrical coordinates was developed for analyzing the rotor/airframe problem. The calculations are performed on a simplified model at an advance ratio of 0.1. The airframe surface pressure predictions are found to be in good agreement with wind tunnel test data. Results are presented for velocity and pressure field distributions in the wake of the rotor.

  7. Modeling Scramjet Flows with Variable Turbulent Prandtl and Schmidt Numbers

    NASA Technical Reports Server (NTRS)

    Xiao, X.; Hassan, H. A.; Baurle, R. A.

    2006-01-01

    A complete turbulence model, where the turbulent Prandtl and Schmidt numbers are calculated as part of the solution and where averages involving chemical source terms are modeled, is presented. The ability to avoid the use of assumed or evolution Probability Distribution Functions (PDFs) results in a highly efficient algorithm for reacting flows. The predictions of the model are compared with two sets of experiments involving supersonic mixing and one involving supersonic combustion. The results demonstrate the need to consider turbulence/chemistry interactions in supersonic combustion. In general, good agreement with experiment is indicated.

  8. Algae Biofuels Co-Location Assessment Tool for Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-11-29

    The Algae Biofuels Co-Location Assessment Tool for Canada uses chemical stoichiometry to estimate Nitrogen, Phosphorus, and Carbon atom availability from waste water and carbon dioxide emissions streams, and the requirements for those same elements to produce a unit of algae. This information is then combined to find limiting-nutrient information and estimate the potential productivity associated with waste water and carbon dioxide sources. Output is visualized in terms of distributions or spatial locations. Distances are calculated between points of interest in the model using the great circle distance equation, and the smallest distances are found by an exhaustive search and sort algorithm.
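
    The distance step of the tool is simple to sketch: the haversine form of the great circle distance between candidate facility pairs, followed by an exhaustive search and sort. The coordinates below are hypothetical points of interest, not data from the tool itself.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    """Great circle distance (km) between two points given in decimal degrees (haversine form)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * r_earth * math.asin(math.sqrt(a))

# Hypothetical points of interest: wastewater plants and CO2 emitters (lat, lon).
plants = {"WWTP-A": (43.7, -79.4), "WWTP-B": (45.5, -73.6)}
emitters = {"CO2-1": (43.9, -78.9), "CO2-2": (46.8, -71.2)}

# Exhaustive search and sort: every plant-emitter pair, ordered by distance.
pairs = sorted(
    (great_circle_km(*p, *e), pn, en)
    for pn, p in plants.items() for en, e in emitters.items()
)
for dist, pn, en in pairs:
    print(f"{pn} <-> {en}: {dist:7.1f} km")
```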

  9. Towards an accurate real-time locator of infrasonic sources

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source via media with stochastic and fast space-varying conditions. Hence, their travel time, their amplitude at sensor recordings and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem for the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a reduced computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate, provided adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, although this leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One, previously suggested but not yet properly used, is the so-called celerity-range histogram (CRH). The other is the outcome of previous findings of a linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability distributions of the phase arrival time picks. To illustrate the improvements in both computation time and location accuracy achieved, we compare location results for the new algorithms, previously published BISL-type algorithms and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and a statistical experiment using a database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR) recorded during the US summer season at USArray transportable seismic stations when they were near the site between 2006 and 2008.
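
    At its core, a BISL-type locator evaluates, over a grid of candidate locations and origin times, the likelihood of the observed arrival times under a probabilistic propagation model. The sketch below uses a single Gaussian celerity model as a stand-in for the APDF (a celerity-range histogram would replace it); stations, uncertainties and grids are illustrative, and the synthetic arrivals are constructed to be consistent with a source near (100, 100) km at origin time 50 s.

```python
import numpy as np

# Stations (x, y in km) and synthetic arrival times (s) of one infrasonic phase.
stations = np.array([[0, 0], [250, 40], [120, 300], [-180, 220]], float)
t_obs = np.array([521.4, 588.5, 720.0, 1065.4])

# APDF stand-in: Gaussian celerity model (km/s) plus picking uncertainty (s).
c_mean, c_std, t_pick_std = 0.30, 0.04, 5.0

# Grid of candidate source locations and origin times.
xs = np.linspace(-400, 400, 161)
ys = np.linspace(-400, 400, 161)
t0s = np.linspace(-600, 600, 121)

X, Y = np.meshgrid(xs, ys, indexing="ij")
log_post = np.zeros((len(xs), len(ys), len(t0s)))
for (sx, sy), t in zip(stations, t_obs):
    rng_km = np.hypot(X - sx, Y - sy)                       # source-station range
    for k, t0 in enumerate(t0s):
        tt = t - t0                                         # candidate travel time
        pred = rng_km / c_mean
        var = t_pick_std**2 + (rng_km * c_std / c_mean**2)**2
        log_post[:, :, k] += -0.5 * (tt - pred)**2 / var - 0.5 * np.log(var)

best = np.unravel_index(np.argmax(log_post), log_post.shape)
print(f"MAP source: x={xs[best[0]]:.0f} km, y={ys[best[1]]:.0f} km, t0={t0s[best[2]]:.0f} s")
```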

  10. Characterization of distinct Arctic aerosol accumulation modes and their sources

    NASA Astrophysics Data System (ADS)

    Lange, R.; Dall'Osto, M.; Skov, H.; Nøjgaard, J. K.; Nielsen, I. E.; Beddows, D. C. S.; Simo, R.; Harrison, R. M.; Massling, A.

    2018-06-01

    In this work we use cluster analysis of long-term particle size distribution data to expand an array of different shorter-term atmospheric measurements, thereby gaining insights into longer-term patterns and properties of Arctic aerosol. Measurements of aerosol number size distributions (9-915 nm) were conducted at Villum Research Station (VRS), Station Nord in North Greenland during a 5-year record (2012-2016). Alongside this, measurements of aerosol composition, meteorological parameters, gaseous compounds and cloud condensation nuclei (CCN) activity were performed during different shorter occasions. K-means clustering analysis of particle number size distributions on a daily basis identified several clusters. Clusters of accumulation mode aerosols (main size modes > 100 nm) accounted for 56% of the total aerosol during the sampling period (89-91% during February-April, 1-3% during June-August). By association with chemical composition, cloud condensation nuclei properties and meteorological variables, three typical accumulation mode aerosol clusters were identified: Haze (32% of the time), Bimodal (14%) and Aged (6%). In brief: (1) the Haze accumulation mode aerosol shows a single mode at 150 nm, peaking in February-April, with the highest loadings of sulfate and black carbon. (2) The Bimodal accumulation mode aerosol shows two modes, at 38 nm and 150 nm, peaking in June-August, with the highest ratio of organics to sulfate concentrations. (3) The Aged accumulation mode aerosol shows a single mode at 213 nm, peaking in September-October, and is associated with cloudy and humid weather conditions during autumn. The three aerosol clusters were considered alongside CCN concentrations. We suggest that organic compounds, likely marine biogenic in nature, greatly influence the Bimodal cluster and contribute significantly to its CCN activity. This stresses the importance of better characterizing the marine ecosystem and the aerosol-mediated climate effects in the Arctic.

  11. Modeling TAE Response To Nonlinear Drives

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Berk, Herbert; Breizman, Boris; Zheng, Linjin

    2012-10-01

    Experiments have detected Toroidal Alfven Eigenmodes (TAE) with signals at twice the eigenfrequency. These harmonic modes arise from the second-order perturbation in amplitude of the MHD equations for the linear modes that are driven by the energetic particle free energy. The structure of TAE in realistic geometry can be calculated by generalizing the linear numerical solver (AEGIS package). We have inserted all the nonlinear MHD source terms, which are quadratic in the linear amplitudes, into the AEGIS code. We then invert the linear MHD equation at the second harmonic frequency. The ratio of the amplitudes of the first and second harmonic terms is used to determine the internal field amplitude. The spatial structures of the energy and density distributions are investigated. The results can be directly employed to compare with experiments and to determine the Alfven wave amplitude in the plasma region.

  12. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.

  13. The combined effects of a long-term experimental drought and an extreme drought on the use of plant-water sources in a Mediterranean forest.

    PubMed

    Barbeta, Adrià; Mejía-Chang, Monica; Ogaya, Romà; Voltas, Jordi; Dawson, Todd E; Peñuelas, Josep

    2015-03-01

    Vegetation in water-limited ecosystems relies strongly on access to deep water reserves to withstand dry periods. Most of these ecosystems have shallow soils over deep groundwater reserves. Understanding the functioning and functional plasticity of species-specific root systems and the patterns of or differences in the use of water sources under more frequent or intense droughts is therefore necessary to properly predict the responses of seasonally dry ecosystems to future climate. We used stable isotopes to investigate the seasonal patterns of water uptake by a sclerophyll forest on sloped terrain with shallow soils. We assessed the effect of a long-term experimental drought (12 years) and the added impact of an extreme natural drought that produced widespread tree mortality and crown defoliation. The dominant species, Quercus ilex, Arbutus unedo and Phillyrea latifolia, all have dimorphic root systems enabling them to access different water sources in space and time. The plants extracted water mainly from the soil in the cold and wet seasons but increased their use of groundwater during the summer drought. Interestingly, the plants subjected to the long-term experimental drought shifted water uptake toward deeper (10-35 cm) soil layers during the wet season and reduced groundwater uptake in summer, indicating plasticity in the functional distribution of fine roots that dampened the effect of our experimental drought over the long term. An extreme drought in 2011, however, further reduced the contribution of deep soil layers and groundwater to transpiration, which resulted in greater crown defoliation in the drought-affected plants. This study suggests that extreme droughts aggravate moderate but persistent drier conditions (simulated by our manipulation) and may lead to the depletion of water from groundwater reservoirs and weathered bedrock, threatening the preservation of these Mediterranean ecosystems in their current structures and compositions. © 2014 John Wiley & Sons Ltd.

  14. Nonuniformity correction of imaging systems with a spatially nonhomogeneous radiation source.

    PubMed

    Gutschwager, Berndt; Hollandt, Jörg

    2015-12-20

    We present a novel method for the nonuniformity correction of imaging systems in a wide optical spectral range by applying a radiation source with an unknown and spatially nonhomogeneous radiance or radiance temperature distribution. The benefit of this method is that it can be applied with radiation sources of arbitrary spatial radiance or radiance temperature distribution and requires only sufficient temporal stability of this distribution during the measurement process. The method is based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these subsequent images in relation to the first primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance distribution and a thermal imager of a predefined nonuniform focal plane array responsivity is presented.

  15. Distributed XQuery-Based Integration and Visualization of Multimodality Brain Mapping Data

    PubMed Central

    Detwiler, Landon T.; Suciu, Dan; Franklin, Joshua D.; Moore, Eider B.; Poliakov, Andrew V.; Lee, Eunjung S.; Corina, David P.; Ojemann, George A.; Brinkley, James F.

    2008-01-01

    This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too “heavyweight” for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a “lightweight” distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP) accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts. PMID:19198662
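
    The integration pattern described here (ship a subquery to each web-service source, receive XML, join the fragments on shared identifiers) can be illustrated without DXQP itself. A toy Python sketch with stand-in sources; all names and XML shapes below are hypothetical, not the DXBrain schemata:

        import xml.etree.ElementTree as ET

        # Stand-ins for remote data sources that would normally receive a
        # shipped subquery over HTTP and return XML (hypothetical endpoints).
        def query_image_source(subquery):
            return "<results><site id='17' x='12' y='40' z='9'/></results>"

        def query_language_source(subquery):
            return "<results><site id='17' effect='naming error'/></results>"

        # Integration step: merge per-site fragments from both sources on the
        # shared id attribute, as a distributed query processor would.
        sites = {}
        for fetch in (query_image_source, query_language_source):
            for site in ET.fromstring(fetch("...")).iter("site"):
                sites.setdefault(site.get("id"), {}).update(site.attrib)

        merged = ET.Element("integrated")
        for attrs in sites.values():
            ET.SubElement(merged, "site", attrs)
        print(ET.tostring(merged, encoding="unicode"))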

  16. ALMA observations of lensed Herschel sources: testing the dark matter halo paradigm

    NASA Astrophysics Data System (ADS)

    Amvrosiadis, A.; Eales, S. A.; Negrello, M.; Marchetti, L.; Smith, M. W. L.; Bourne, N.; Clements, D. L.; De Zotti, G.; Dunne, L.; Dye, S.; Furlanetto, C.; Ivison, R. J.; Maddox, S. J.; Valiante, E.; Baes, M.; Baker, A. J.; Cooray, A.; Crawford, S. M.; Frayer, D.; Harris, A.; Michałowski, M. J.; Nayyeri, H.; Oliver, S.; Riechers, D. A.; Serjeant, S.; Vaccari, M.

    2018-04-01

    With the advent of wide-area submillimetre surveys, a large number of high-redshift gravitationally lensed dusty star-forming galaxies have been revealed. Because of the simplicity of the selection criteria for candidate lensed sources in such surveys, identified as those with S500 μm > 100 mJy, uncertainties associated with the modelling of the selection function are expunged. The combination of these attributes makes submillimetre surveys ideal for the study of strong lens statistics. We carried out a pilot study of the lensing statistics of submillimetre-selected sources by making observations with the Atacama Large Millimeter Array (ALMA) of a sample of strongly lensed sources selected from surveys carried out with the Herschel Space Observatory. We attempted to reproduce the distribution of image separations for the lensed sources using a halo mass function taken from a numerical simulation that contains both dark matter and baryons. We used three different density distributions, one based on analytical fits to the haloes formed in the EAGLE simulation and two density distributions [Singular Isothermal Sphere (SIS) and SISSA] that have been used before in lensing studies. We found that we could reproduce the observed distribution with all three density distributions, as long as we imposed an upper mass transition of ˜1013 M⊙ for the SIS and SISSA models, above which we assumed that the density distribution could be represented by a Navarro-Frenk-White profile. We show that we would need a sample of ˜500 lensed sources to distinguish between the density distributions, which is practical given the predicted number of lensed sources in the Herschel surveys.
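
    For reference, the image separation produced by a Singular Isothermal Sphere follows from its Einstein radius, theta_E = 4*pi*(sigma/c)^2 * D_ls/D_s, a standard lensing result rather than anything specific to this paper. A small Python sketch with illustrative values:

        import math

        def sis_separation_arcsec(sigma_km_s, dls_over_ds):
            """Image separation 2*theta_E for an SIS lens, in arcseconds."""
            c = 299792.458  # speed of light, km/s
            theta_e = 4 * math.pi * (sigma_km_s / c) ** 2 * dls_over_ds
            return 2 * theta_e * 206265.0  # radians -> arcsec

        # A 250 km/s velocity-dispersion halo with D_ls/D_s = 0.5
        # (illustrative numbers, not values from the survey):
        print(f"{sis_separation_arcsec(250.0, 0.5):.2f} arcsec")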

  17. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  18. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  19. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  20. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  1. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  2. Accident Source Terms for Pressurized Water Reactors with High-Burnup Cores Calculated using MELCOR 1.8.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.

    2016-12-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup in contrast with low burnup on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU analyses. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2 in terms of release fractions and that release durations for in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. It is currently planned that a subsequent report will further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels, with the most important finding being that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses. ACKNOWLEDGEMENTS This work was supported by the United States Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. The authors would like to thank Dr. Ian Gauld and Dr. Germina Ilas, of Oak Ridge National Laboratory, for their contributions to this work. In addition to development of core fission product inventory and decay heat information for use in MELCOR models, their insights related to fuel management practices and resulting effects on spatial distribution of fission products in the core were instrumental in completion of our work.

  3. Long-term particulate matter modeling for health effect studies in California - Part 2: Concentrations and sources of ultrafine organic aerosols

    NASA Astrophysics Data System (ADS)

    Hu, Jianlin; Jathar, Shantanu; Zhang, Hongliang; Ying, Qi; Chen, Shu-Hua; Cappa, Christopher D.; Kleeman, Michael J.

    2017-04-01

    Organic aerosol (OA) is a major constituent of ultrafine particulate matter (PM0.1). Recent epidemiological studies have identified associations between PM0.1 OA and premature mortality and low birth weight. In this study, the source-oriented UCD/CIT model was used to simulate the concentrations and sources of primary organic aerosols (POA) and secondary organic aerosols (SOA) in PM0.1 in California for a 9-year (2000-2008) modeling period with 4 km horizontal resolution to provide more insights about PM0.1 OA for health effect studies. As a related quality control, predicted monthly average concentrations of fine particulate matter (PM2.5) total organic carbon at six major urban sites had mean fractional bias of -0.31 to 0.19 and mean fractional errors of 0.4 to 0.59. The predicted ratio of PM2.5 SOA/OA was lower than estimates derived from chemical mass balance (CMB) calculations by a factor of 2-3, which suggests the potential effects of processes such as POA volatility, additional SOA formation mechanisms, and missing sources. OA in PM0.1, the focus size fraction of this study, is dominated by POA. Wood smoke is found to be the single biggest source of PM0.1 OA in winter in California, while meat cooking, mobile emissions (gasoline and diesel engines), and other anthropogenic sources (mainly solvent usage and waste disposal) are the most important sources in summer. Biogenic emissions are predicted to be the largest PM0.1 SOA source, followed by mobile sources and other anthropogenic sources, but these rankings are sensitive to the SOA model used in the calculation. Air pollution control programs aiming to reduce PM0.1 OA concentrations should consider controlling solvent usage, waste disposal, and mobile emissions in California, but these findings should be revisited after the latest science is incorporated into the SOA exposure calculations. The spatial distributions of SOA associated with different sources are not sensitive to the choice of SOA model, although the absolute amount of SOA can change significantly. Therefore, the spatial distributions of PM0.1 POA and SOA over the 9-year study period provide useful information for epidemiological studies to further investigate the associations with health outcomes.
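
    The mean fractional bias and mean fractional error quoted above have standard definitions in air-quality model evaluation; a small Python sketch with hypothetical monthly values (the abstract's numbers come from the authors' full evaluation):

        import numpy as np

        def mfb(model, obs):
            """Mean fractional bias: mean of 2*(M-O)/(M+O)."""
            return float(np.mean(2 * (model - obs) / (model + obs)))

        def mfe(model, obs):
            """Mean fractional error: mean of 2*|M-O|/(M+O)."""
            return float(np.mean(2 * np.abs(model - obs) / (model + obs)))

        obs = np.array([3.1, 4.0, 2.5, 5.2])    # hypothetical monthly OC, ug/m3
        model = np.array([2.6, 4.4, 2.1, 4.0])
        print(f"MFB = {mfb(model, obs):+.2f}, MFE = {mfe(model, obs):.2f}")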

  4. Modelling larval dispersal of the king scallop (Pecten maximus) in the English Channel: examples from the bay of Saint-Brieuc and the bay of Seine

    NASA Astrophysics Data System (ADS)

    Nicolle, Amandine; Dumas, Franck; Foveau, Aurélie; Foucher, Eric; Thiébaut, Eric

    2013-06-01

    The king scallop (Pecten maximus) is one of the most important benthic species of the English Channel as it constitutes the first fishery in terms of landings in this area. To support strategies of spatial fishery management, we develop a high-resolution biophysical model to study scallop dispersal in two bays along the French coasts of the English Channel (i.e. the bay of Saint-Brieuc and the bay of Seine) and to quantify the relative roles of local hydrodynamic processes, temperature-dependent planktonic larval duration (PLD) and active swimming behaviour (SB). The two bays are chosen for three reasons: (1) the distribution of the scallop stocks in these areas is well known from annual scallop stock surveys, (2) these two bays harbour important fisheries and (3) scallops in these two areas present some differences in terms of reproductive cycle and spawning duration. The English Channel currents and temperature are simulated for 10 years (2000-2010) with the MARS-3D code and then used by the Lagrangian module of MARS-3D to model the transport. Results were analysed in terms of larval distribution at settlement and connectivity rates. While larval transport in the two bays depended both on the tidal residual circulation and the wind-induced currents, the relative role of these two hydrodynamic processes varied among bays. In the bay of Saint-Brieuc, the main patterns of larval dispersal were due to tides, the wind being only a source of variability in the extent of the larval patch and the local retention rate. Conversely, in the bay of Seine, wind-induced currents altered both the direction and the extent of larval transport. The main effect of a variable PLD in relation to the thermal history of each larva was to reduce the spread of dispersal and consequently increase the local retention by about 10% on average. Although swimming behaviour could influence larval dispersal during the first days of the PLD when larvae are mainly located in surface waters, it plays a minor role in larval distribution at settlement and retention rates. The analysis of the connectivity between subpopulations within each bay allows identification of the main sources of larvae, which depend on both the characteristics of local hydrodynamics and the spatial heterogeneity in the reproductive outputs.
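
    Connectivity rates of this kind are conventionally tabulated as a matrix over subpopulations; a minimal Python sketch of that bookkeeping from particle release and settlement labels (illustrative data, not output of the MARS-3D runs):

        import numpy as np

        # Source and settlement subpopulation of each simulated larva.
        release = np.array([0, 0, 1, 1, 2, 2, 2])
        settle = np.array([0, 1, 1, 1, 2, 0, 2])

        n = 3
        connectivity = np.zeros((n, n))
        for i, j in zip(release, settle):
            connectivity[i, j] += 1
        connectivity /= connectivity.sum(axis=1, keepdims=True)

        print(connectivity)           # row i: dispersal kernel of subpopulation i
        print(np.diag(connectivity))  # diagonal: local retention rates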

  5. Quantifying the uncertainty of nonpoint source attribution in distributed water quality models: A Bayesian assessment of SWAT's sediment export predictions

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Long, Tanya; Boyd, Duncan

    2014-11-01

    Spatially distributed nonpoint source watershed models are essential tools to estimate the magnitude and sources of diffuse pollution. However, little work has been undertaken to understand the sources and ramifications of the uncertainty involved in their use. In this study we conduct the first Bayesian uncertainty analysis of the water quality components of the SWAT model, one of the most commonly used distributed nonpoint source models. Working in Southern Ontario, we apply three Bayesian configurations for calibrating SWAT to Redhill Creek, an urban catchment, and Grindstone Creek, an agricultural one. We answer four interrelated questions: can SWAT determine suspended sediment sources with confidence when end of basin data is used for calibration? How does uncertainty propagate from the discharge submodel to the suspended sediment submodels? Do the estimated sediment sources vary when different calibration approaches are used? Can we combine the knowledge gained from different calibration approaches? We show that: (i) despite reasonable fit at the basin outlet, the simulated sediment sources are subject to uncertainty sufficient to undermine the typical approach of reliance on a single, best fit simulation; (ii) more than a third of the uncertainty of sediment load predictions may stem from the discharge submodel; (iii) estimated sediment sources do vary significantly across the three statistical configurations of model calibration despite end-of-basin predictions being virtually identical; and (iv) Bayesian model averaging is an approach that can synthesize predictions when a number of adequate distributed models make divergent source apportionments. We conclude with recommendations for future research to reduce the uncertainty encountered when using distributed nonpoint source models for source apportionment.
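
    Point (iv) amounts to weighting each calibrated model's source apportionment by a posterior model weight; a minimal Python sketch with illustrative numbers (in the study the weights come from the Bayesian calibration itself):

        import numpy as np

        # Rows: per-model fractions of sediment load attributed to three
        # hypothetical source classes (urban, channel bank, agricultural).
        apportionments = np.array([
            [0.60, 0.30, 0.10],   # calibration 1
            [0.35, 0.50, 0.15],   # calibration 2
            [0.45, 0.25, 0.30],   # calibration 3
        ])
        weights = np.array([0.5, 0.3, 0.2])  # posterior model weights, sum to 1

        bma = weights @ apportionments
        print(bma)  # averaged apportionment that carries model uncertainty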

  6. Black Carbon and Sulfate Aerosols in the Arctic: Long-term Trends, Radiative Impacts, and Source Attributions

    NASA Astrophysics Data System (ADS)

    Wang, H.; Zhang, R.; Yang, Y.; Smith, S.; Rasch, P. J.

    2017-12-01

    The Arctic has warmed dramatically in recent decades. As one of the important short-lived climate forcers, aerosols affect the Arctic radiative budget directly by interacting with radiation and indirectly by modifying clouds. Light-absorbing particles (e.g., black carbon) in snow/ice can reduce the surface albedo. The direct radiative impact of aerosols on the Arctic climate can be either warming or cooling, depending on their composition and location, which can further alter the poleward heat transport. Anthropogenic emissions, especially BC and SO2, have changed drastically in low/mid-latitude source regions in the past few decades. Arctic surface observations at some locations show that BC and sulfate aerosols had a decreasing trend in the recent decades. In order to understand the impact of long-term emission changes on aerosols and their radiative effects, we use the Community Earth System Model (CESM) equipped with an explicit BC and sulfur source-tagging technique to quantify the source-receptor relationships and decadal trends of Arctic sulfate and BC and to identify variations in their atmospheric transport pathways from lower latitudes. The simulation was conducted for 36 years (1979-2014) with prescribed sea surface temperatures and sea ice concentrations. To minimize potential biases in modeled large-scale circulations, wind fields in the simulation are nudged toward an atmospheric reanalysis dataset, while atmospheric constituents including water vapor, clouds, and aerosols are allowed to evolve according to the model physics. Both anthropogenic and open fire emissions came from the newly released CMIP6 datasets, which show strong regional trends in BC and SO2 emissions during the simulation time period. Results show that emissions from East Asia and South Asia together have the largest contributions to Arctic sulfate and BC concentrations in the upper troposphere, which have an increasing trend. The strong decrease in emissions from Europe, Russia and North America contributed significantly to the overall decreasing trend in Arctic BC and sulfate, especially in the lower troposphere. The long-term changes in the spatial distributions of aerosols, their radiative impacts and source attributions, along with implications for the Arctic warming trend, will be discussed.

  7. Contributions of solar wind and micrometeoroids to molecular hydrogen in the lunar exosphere

    NASA Astrophysics Data System (ADS)

    Hurley, Dana M.; Cook, Jason C.; Retherford, Kurt D.; Greathouse, Thomas; Gladstone, G. Randall; Mandt, Kathleen; Grava, Cesare; Kaufmann, David; Hendrix, Amanda; Feldman, Paul D.; Pryor, Wayne; Stickle, Angela; Killen, Rosemary M.; Stern, S. Alan

    2017-02-01

    We investigate the density and spatial distribution of the H2 exosphere of the Moon assuming various source mechanisms. Owing to its low mass, escape is non-negligible for H2. For high-energy source mechanisms, a high percentage of the released molecules escape lunar gravity. Thus, the H2 spatial distribution for high-energy release processes reflects the spatial distribution of the source. For low energy release mechanisms, the escape rate decreases and the H2 redistributes itself predominantly to reflect a thermally accommodated exosphere. However, a small dependence on the spatial distribution of the source is superimposed on the thermally accommodated distribution in model simulations, where density is locally enhanced near regions of higher source rate. For an exosphere accommodated to the local surface temperature, a source rate of 2.2 g s^-1 is required to produce a steady state density at high latitude of 1200 cm^-3. Greater source rates are required to produce the same density for more energetic release mechanisms. Physical sputtering by solar wind and direct delivery of H2 through micrometeoroid bombardment can be ruled out as mechanisms for producing and liberating H2 into the lunar exosphere. Chemical sputtering by the solar wind is the most plausible as a source mechanism and would require 10-50% of the solar wind H+ inventory to be converted to H2 to account for the observations.

  8. Contributions of Solar Wind and Micrometeoroids to Molecular Hydrogen in the Lunar Exosphere

    NASA Technical Reports Server (NTRS)

    Hurley, Dana M.; Cook, Jason C.; Retherford, Kurt D.; Greathouse, Thomas; Gladstone, G. Randall; Mandt, Kathleen; Grava, Cesare; Kaufmann, David; Hendrix, Amanda; Feldman, Paul D.; Pryor, Wayne; Stickle, Angela; Killen, Rosemary M.; Stern, S. Alan

    2016-01-01

    We investigate the density and spatial distribution of the H2 exosphere of the Moon assuming various source mechanisms. Owing to its low mass, escape is non-negligible for H2. For high-energy source mechanisms, a high percentage of the released molecules escape lunar gravity. Thus, the H2 spatial distribution for high-energy release processes reflects the spatial distribution of the source. For low energy release mechanisms, the escape rate decreases and the H2 redistributes itself predominantly to reflect a thermally accommodated exosphere. However, a small dependence on the spatial distribution of the source is superimposed on the thermally accommodated distribution in model simulations, where density is locally enhanced near regions of higher source rate. For an exosphere accommodated to the local surface temperature, a source rate of 2.2 g s^-1 is required to produce a steady state density at high latitude of 1200 cm^-3. Greater source rates are required to produce the same density for more energetic release mechanisms. Physical sputtering by solar wind and direct delivery of H2 through micrometeoroid bombardment can be ruled out as mechanisms for producing and liberating H2 into the lunar exosphere. Chemical sputtering by the solar wind is the most plausible as a source mechanism and would require 10-50% of the solar wind H+ inventory to be converted to H2 to account for the observations.

  9. 75 FR 20399 - Notice of Issuance of Regulatory Guide

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-19

    ..., Revision 1, ``Establishing Quality Assurance Programs for the Manufacture and Distribution of Sealed... Manufacture and Distribution of Sealed Sources and Devices Containing Byproduct Material,'' was issued with a... during the review of an application to manufacture or distribute sealed sources and devices containing...

  10. Electron temperature profiles in axial field 2.45 GHz ECR ion source with a ceramic chamber

    NASA Astrophysics Data System (ADS)

    Abe, K.; Tamura, R.; Kasuya, T.; Wada, M.

    2017-08-01

    An array of electrostatic probes was arranged on the plasma electrode of a 2.45 GHz microwave driven axial magnetic filter field type negative hydrogen (H-) ion source to clarify the spatial plasma distribution near the electrode. The measured spatial distribution of electron temperature indicated a lower temperature near the extraction hole of the plasma electrode, consistent with the effectiveness of the axial magnetic filter field geometry. When the ratio of electron saturation current to ion saturation current was plotted as a function of position, the obtained distribution showed a higher ratio near the hydrogen gas inlet through which ground state hydrogen molecules are injected into the source. Though the efficiency in producing H- ions is smaller for a 2.45 GHz source than for a source operated at 14 GHz, it offers a larger volume in which to measure spatial distributions of various plasma parameters and thus to understand fundamental processes that influence H- production in this type of ion source.

  11. Simulation of a Lunar Surface Base Power Distribution Network for the Constellation Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Mintz, Toby; Maslowski, Edward A.; Colozza, Anthony; McFarland, Willard; Prokopius, Kevin P.; George, Patrick J.; Hussey, Sam W.

    2010-01-01

    The Lunar Surface Power Distribution Network Study team worked to define, breadboard, build and test an electrical power distribution system consistent with NASA's goal of providing electrical power to sustain life and power equipment used to explore the lunar surface. A testbed was set up to simulate connecting different power sources and loads together to form a mini-grid and gain an understanding of how the power systems would interact. Within the power distribution scheme, each power source contributes to the grid in an independent manner without communication among the power sources and without a master-slave scenario. The grid consisted of four separate power sources and the accompanying power conditioning equipment. Overall system design and testing were performed. The tests were performed to observe the output and interaction of the different power sources as some sources are added and others are removed from the grid connection. The loads on the system were also varied from no load to maximum load to observe the power source interactions.

  12. A decentralized mechanism for improving the functional robustness of distribution networks.

    PubMed

    Shi, Benyun; Liu, Jiming

    2012-10-01

    Most real-world distribution systems can be modeled as distribution networks, where a commodity can flow from source nodes to sink nodes through junction nodes. One of the fundamental characteristics of distribution networks is the functional robustness, which reflects the ability of maintaining its function in the face of internal or external disruptions. In view of the fact that most distribution networks do not have any centralized control mechanisms, we consider the problem of how to improve the functional robustness in a decentralized way. To achieve this goal, we study two important problems: 1) how to formally measure the functional robustness, and 2) how to improve the functional robustness of a network based on the local interaction of its nodes. First, we derive a utility function in terms of network entropy to characterize the functional robustness of a distribution network. Second, we propose a decentralized network pricing mechanism, where each node need only communicate with its distribution neighbors by sending a "price" signal to its upstream neighbors and receiving "price" signals from its downstream neighbors. By doing so, each node can determine its outflows by maximizing its own payoff function. Our mathematical analysis shows that the decentralized pricing mechanism can produce results equivalent to those of an ideal centralized maximization with complete information. Finally, to demonstrate the properties of our mechanism, we carry out a case study on the U.S. natural gas distribution network. The results validate the convergence and effectiveness of our mechanism when comparing it with an existing algorithm.
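
    The intuition behind an entropy-based robustness utility can be sketched with the generic Shannon form over a node's normalized outflow shares (the paper derives its own network-entropy utility; this Python sketch shows only the generic idea):

        import numpy as np

        def flow_entropy(outflows):
            """Shannon entropy of a node's normalized outflow shares."""
            p = outflows / outflows.sum()
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        balanced = np.array([5.0, 5.0, 5.0])  # flow spread over three links
        skewed = np.array([13.0, 1.0, 1.0])   # flow concentrated on one link

        # Higher entropy = less dependence on any single downstream link,
        # hence more room to reroute around a disruption.
        print(flow_entropy(balanced), flow_entropy(skewed))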

  13. 76 FR 15004 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-18

    ... company (``fund'') distributions of long-term capital gains made more frequently than once every twelve months. Rule 19b-1 under the Act \\1\\ prohibits funds from distributing long-term capital gains more than... fixed-income securities to distribute long-term capital gains more than once every twelve months, if: (i...

  14. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  15. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  16. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  17. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  18. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  19. Radiation Source Mapping with Bayesian Inverse Methods

    DOE PAGES

    Hykes, Joshua M.; Azmy, Yousry Y.

    2017-03-22

    In this work, we present a method to map the spectral and spatial distributions of radioactive sources using a limited number of detectors. Locating and identifying radioactive materials is important for border monitoring, in accounting for special nuclear material in processing facilities, and in cleanup operations following a radioactive material spill. Most methods to analyze these types of problems make restrictive assumptions about the distribution of the source. In contrast, the source mapping method presented here allows an arbitrary three-dimensional distribution in space and a gamma peak distribution in energy. To apply the method, the problem is cast as an inverse problem where the system’s geometry and material composition are known and fixed, while the radiation source distribution is sought. A probabilistic Bayesian approach is used to solve the resulting inverse problem since the system of equations is ill-posed. The posterior is maximized with a Newton optimization method. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint, discrete ordinates flux solutions, obtained in this work by the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes form the linear mapping from the state space to the response space. The test of the method’s success is simultaneously locating a set of 137Cs and 60Co gamma sources in a room. This test problem is solved using experimental measurements that we collected for this purpose. Because of the weak sources available for use in the experiment, some of the expected photopeaks were not distinguishable from the Compton continuum. However, by supplanting 14 flawed measurements (out of a total of 69) with synthetic responses computed by MCNP, the proof-of-principle source mapping was successful. The locations of the sources were predicted within 25 cm for two of the sources and 90 cm for the third, in a room with a ~4 m x 4 m floor plan. Finally, the predicted source intensities were within a factor of ten of their true value.
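
    The core linear-inverse structure (detector responses as a linear map of the discretized source, regularized by a prior) can be sketched generically in Python. This is a plain ridge/MAP estimate under stated assumptions, not the paper's Newton solver or its Denovo-computed adjoint fluxes:

        import numpy as np

        # d = A @ s: responses of n_detectors detectors to a source field s
        # over n_voxels voxels; rows of A play the role of adjoint fluxes.
        rng = np.random.default_rng(1)
        n_voxels, n_detectors = 30, 40

        A = rng.uniform(0.0, 1e-3, size=(n_detectors, n_voxels))
        s_true = np.zeros(n_voxels)
        s_true[[7, 21]] = 5e4                                # two point sources
        d = A @ s_true + rng.normal(0.0, 0.05, n_detectors)  # noisy responses

        # MAP estimate with a zero-mean Gaussian prior (ridge regularization);
        # real problems have far fewer detectors than voxels, which is why
        # the Bayesian prior and confidence estimates matter.
        tau = 1e-8
        s_map = np.linalg.solve(A.T @ A + tau * np.eye(n_voxels), A.T @ d)
        print(np.argsort(s_map)[-2:])  # the two strongest voxels: 7 and 21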

  20. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming Gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect on TDE of modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian-distributed source has been replaced by that of a generalized Gaussian distribution, which allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
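
    The univariate closed form is compact enough to state here: for the generalized Gaussian with shape beta and scale alpha, the differential entropy is H = 1/beta + ln(2*alpha*Gamma(1/beta)/beta), with beta = 2 the Gaussian case and beta = 1 the Laplacian. A small Python sketch using this standard expression (assumed here, not copied from the paper):

        import math

        def ggd_entropy(alpha, beta):
            """Differential entropy of the generalized Gaussian distribution."""
            return 1.0 / beta + math.log(2.0 * alpha * math.gamma(1.0 / beta) / beta)

        # Sanity check against the Gaussian formula 0.5*ln(2*pi*e*sigma^2):
        sigma = 1.0
        alpha_gauss = sigma * math.sqrt(2.0)  # GGD scale for unit variance, beta=2
        print(ggd_entropy(alpha_gauss, 2.0), 0.5 * math.log(2 * math.pi * math.e))

        # Unit-scale entropies for Gaussian (2.0), Laplacian (1.0), and a
        # heavier-tailed, more speech-like shape (0.5):
        for beta in (2.0, 1.0, 0.5):
            print(beta, ggd_entropy(1.0, beta))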
