Sample records for distributed point source

  1. STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu

    2011-09-10

    An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
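
    A minimal Monte Carlo sketch of the premise behind this record: the histogram of photon counts per pixel carries information about the source-count distribution dN/dS of sources below the detection limit. This is not the authors' analytic relation; the power-law index, flux range, exposure factor and numbers of pixels and sources below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix = 100_000               # number of sky pixels (assumption)
index = 2.0                   # assumed differential source counts dN/dS ~ S^-index
s_min, s_max = 1e-11, 1e-8    # flux range, arbitrary units (assumption)
n_src = 50_000                # number of sources drawn (assumption)
exposure = 1e11               # converts flux into expected photon counts (assumption)

# Draw source fluxes from the power law by inverse-CDF sampling (valid for index != 1).
a = 1.0 - index
u = rng.random(n_src)
flux = (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

# Scatter the sources uniformly over pixels and accumulate expected counts per pixel.
pix = rng.integers(0, n_pix, n_src)
mu = np.bincount(pix, weights=flux * exposure, minlength=n_pix)

# Poisson-fluctuate to get observed photon counts, then form the pixel histogram P(k).
counts = rng.poisson(mu)
k, n_k = np.unique(counts, return_counts=True)
for ki, pk in list(zip(k, n_k / n_pix))[:8]:
    print(ki, pk)
```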

  2. Point and Compact Hα Sources in the Interior of M33

    NASA Astrophysics Data System (ADS)

    Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.

    2017-12-01

    A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc., can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' × 6.5' of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10^-15 erg cm^-2 s^-1. We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a spectral energy distribution (SED). Using the SED, flux values, Hα flux value, and compactness, we classified 67 of these sources.

  3. Point and Condensed Hα Sources in the Interior of M33

    NASA Astrophysics Data System (ADS)

    Moody, J. Ward; Hintz, Eric G.; Roming, Peter; Joner, Michael D.; Bucklein, Brian

    2017-01-01

    A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc., can be discovered as point or condensed sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than 1 × 10^-15 erg cm^-2 s^-1. We have identified 152 unresolved point sources and 122 marginally resolved condensed sources, 38 of which have not been previously cataloged. We present a map of these sources and discuss their probable identifications.

  4. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    PubMed

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing a scientific basis for environmental planning for non-point source pollution prevention and control, and improving pollution regulation efficiency, this paper established the Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index according to the "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could be worked out by utilizing high-resolution remote sensing images. The results showed that the area of the "source" of nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km² in 2008, and the "sink" was 172.06 km². The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of the "source" weakens as the distance from the sea boundary increases, while that of the "sink" strengthens.

  5. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  6. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  7. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  8. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    NASA Astrophysics Data System (ADS)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.

  9. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the χ² per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
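
    A minimal sketch, under simplified assumptions, of the inverse problem described above: electrode potentials are generated by a belt of N dipoles and a single equivalent point dipole is fitted by least squares (equivalent to minimising χ² with uniform measurement errors). An infinite homogeneous medium is assumed instead of the bounded spherical torso of the study, and the geometry, conductivity and dipole moments are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def dipole_potential(r_obs, r_dip, p, sigma=0.2):
    """Potential of a point dipole in an infinite homogeneous medium of conductivity sigma."""
    d = r_obs - r_dip
    dist = np.linalg.norm(d, axis=-1)
    return (d @ p) / (4.0 * np.pi * sigma * dist**3)

rng = np.random.default_rng(1)
electrodes = rng.normal(size=(64, 3))
electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)   # electrodes on a unit sphere

# Distributed source: N dipoles on a small belt, moments along +x (illustrative values).
n = 20
phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
belt = 0.1 * np.column_stack([np.cos(phi), np.sin(phi), np.zeros(n)]) + np.array([0.0, 0.0, 0.2])
moments = np.tile([1e-3 / n, 0.0, 0.0], (n, 1))

# "Measured" potentials are the superposition of all belt dipoles.
v_meas = sum(dipole_potential(electrodes, belt[i], moments[i]) for i in range(n))

def residuals(x):
    r_dip, p = x[:3], x[3:]
    return dipole_potential(electrodes, r_dip, p) - v_meas

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.1, 1e-3, 0.0, 0.0]))
print("equivalent dipole position:", fit.x[:3])
```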

  10. Distributed Sensing for Quickest Change Detection of Point Radiation Sources

    DTIC Science & Technology

    2017-02-01

    Addresses quickest change detection of point nuclear-radiation sources using a distributed sensor network, under the usual assumption that the change point occurs simultaneously at all sensor nodes (neglecting signal propagation delays). Authors: Gene T. Whipps, Emre Ertin, Randolph L. Moses (The Ohio State University); related work includes detection of nuclear radiation using a sensor network, IEEE Conference on Technologies for Homeland Security (HST), 2012, pp. 648-653.

  11. Mapping algorithm for freeform construction using non-ideal light sources

    NASA Astrophysics Data System (ADS)

    Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.

    2015-09-01

    Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g. an LED with a small source-freeform distance, the image points are blurred and the blurred patterns may differ between points. Besides, due to Fresnel losses and vignetting, the relationship between the light intensity of image points and the area of freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in a mapping algorithm. To this end, steepest-descent procedures are used to adapt the mapping goal. A structured target pattern for an optics system with an ideal source is computed by applying corresponding linear optimization matrices. Weighting and smoothing factors are included in the procedures to achieve certain edge conditions and to ensure the manufacturability of the freeform surface. The corresponding linear optimization matrices, which are the light distribution patterns of each of the freeform surface elements, are obtained by conventional ray tracing with a realistic source. Nontrivial source geometries, like LED irregularities due to bonding or source fine structures, and complex ray divergence behavior can be easily considered. Additionally, Fresnel losses, vignetting and even stray light are taken into account. After optimization iterations with a realistic source, the initial mapping goal can be achieved by an optics system providing a structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the abilities and limitations of this method. A homogeneous LED-illumination system design is also presented in which, with a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and short working distance using a relatively large LED source. It is shown that the light distribution patterns from the freeform surface elements can be significantly different from one another. The generation of a structured target pattern, applying weighting and smoothing factors, is discussed. Finally, freeform designs for much more complex sources, like clusters of LED sources, are presented.

  12. Design of TIR collimating lens for ordinary differential equation of extended light source

    NASA Astrophysics Data System (ADS)

    Zhan, Qianjing; Liu, Xiaoqin; Hou, Zaihong; Wu, Yi

    2017-10-01

    LED sources are widely used in daily life. The angular intensity distribution of a single LED is Lambertian, which does not satisfy many application requirements; the light must therefore be redistributed to change the LED's intensity distribution. The most common method to change the intensity distribution is a freeform surface. Generally, using ordinary differential equations to calculate the freeform surface applies only to a point source, and leads to large errors for an extended source. This paper proposes an LED collimating lens based on an ordinary differential equation, combined with the LED's light distribution curve, and adopts the center of gravity of the extended light to obtain the normal vector. According to Snell's law, the ordinary differential equations are constructed. The ordinary differential equation is solved with the Runge-Kutta method to obtain the coordinates of points on the curve. Meanwhile, the edge point data of the lens are imported into the optical simulation software TracePro. For a 1 mm × 1 mm single Lambertian emitter, the collimation angle can be close to ±3°, and the energy utilization rate is higher than 85%. A lens designed with the point-light-source ordinary differential equation method is also simulated for comparison; the proposed design improves collimation by about 1°.
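
    A minimal sketch of the point-source special case that the record's extended-source construction generalizes: for an ideal point source at the origin inside a medium of refractive index n, a surface r(θ) that refracts every radial ray into a beam parallel to the axis must keep the optical path n·r − r·cosθ constant, which gives the ordinary differential equation dr/dθ = −r·sinθ/(n − cosθ). The sketch integrates it with classical fourth-order Runge-Kutta and checks the result against the closed form; the refractive index, on-axis radius and angular range are illustrative assumptions, and the paper's center-of-gravity correction for an extended source is not included.

```python
import numpy as np

def rk4(f, r0, thetas):
    """Integrate dr/dtheta = f(theta, r) over the grid `thetas` with classical RK4."""
    r = np.empty_like(thetas)
    r[0] = r0
    for k in range(len(thetas) - 1):
        h = thetas[k + 1] - thetas[k]
        t, y = thetas[k], r[k]
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        r[k + 1] = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return r

n_lens = 1.49                                     # assumed PMMA-like refractive index
f = lambda theta, r: -r * np.sin(theta) / (n_lens - np.cos(theta))

thetas = np.linspace(0.0, np.radians(60.0), 601)
r_num = rk4(f, r0=5.0, thetas=thetas)             # 5 mm on-axis surface distance (illustrative)

# Constant optical path length gives the closed form r = r0*(n - 1)/(n - cos(theta)).
r_exact = 5.0 * (n_lens - 1.0) / (n_lens - np.cos(thetas))
print("max |RK4 - exact|:", np.max(np.abs(r_num - r_exact)))
```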

  13. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

    Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

    The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.

  14. Calculation and analysis of the non-point source pollution in the upstream watershed of the Panjiakou Reservoir, People's Republic of China

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Tang, L.

    2007-05-01

    Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, the water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point pollution and their distribution on the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transportation of the non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are all derived from the DEM by ArcGIS software. The soil and land use data are reclassified and the soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The results of the calibration show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year respectively. The time and space distribution of flow, sediment and non-point source pollution were analyzed based on the simulated results. The calculated results differ markedly between hydrologic years: the loading of non-point source pollution is relatively large in the wet year and small in the dry year, since the non-point source pollutants are mainly transported through runoff. The pollution loading within a year is mainly produced in the flood season. Because SWAT is a distributed model, it is possible to view model output as it varies across the basin, so the critical areas and reaches can be identified in the study area. According to the simulation results, it is found that different land uses yield different results and that fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for the Panjiakou Reservoir are presented according to the analysis of the model calculation results.

  15. Aeroacoustic catastrophes: upstream cusp beaming in Lilley's equation.

    PubMed

    Stone, J T; Self, R H; Howls, C J

    2017-05-01

    The downstream propagation of high-frequency acoustic waves from a point source in a subsonic jet obeying Lilley's equation is well known to be organized around the so-called 'cone of silence', a fold catastrophe across which the amplitude may be modelled uniformly using Airy functions. Here we show that acoustic waves not only unexpectedly propagate upstream, but also are organized at constant distance from the point source around a cusp catastrophe with amplitude modelled locally by the Pearcey function. Furthermore, the cone of silence is revealed to be a cross-section of a swallowtail catastrophe. One consequence of these discoveries is that the peak acoustic field upstream is not only structurally stable but also at a similar level to the known downstream field. The fine structure of the upstream cusp is blurred out by distributions of symmetric acoustic sources, but peak upstream acoustic beaming persists when asymmetries are introduced, from either arrays of discrete point sources or perturbed continuum ring source distributions. These results may pose interesting questions for future novel jet-aircraft engine designs where asymmetric source distributions arise.

  16. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

    Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
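
    A much-simplified sketch of the "shrinking permissible region" idea described above: solve a non-negative linear source-reconstruction problem restricted to the current permissible region, keep only the points most likely to contribute, and iterate. The real algorithm uses normalized fluence data, Green's functions from a DOT reconstruction and an L1-norm objective; here a surrogate random system matrix and a non-negative least-squares step stand in for those ingredients.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_meas, n_vox = 60, 200
G = rng.random((n_meas, n_vox))            # surrogate Green's-function matrix (assumption)
x_true = np.zeros(n_vox)
x_true[[40, 41, 120]] = [2.0, 1.5, 3.0]    # a small distributed source plus a point source
y = G @ x_true                             # noiseless synthetic measurements

permissible = np.arange(n_vox)             # start with the whole object as permissible region
for _ in range(6):
    # Non-negative reconstruction restricted to the current permissible region.
    x_sub, _ = nnls(G[:, permissible], y)
    # Shrink: keep the voxels carrying most of the reconstructed source power.
    keep = np.argsort(x_sub)[-max(3, len(permissible) // 2):]
    permissible = permissible[np.sort(keep)]

print("final permissible voxels:", permissible)
```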

  17. GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, M.

    1959-06-01

    GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source buildup factor inside the integral over the point elements. Between the main shield and the line source additional shields can be introduced, which are either plane slabs, parallel to the main shield, or cylindrical rings, coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited to the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
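
    A minimal point-kernel sketch of the quantity GARLIC integrates: the flux at an isotropic point detector from a uniform line source behind a plane slab shield, with a point-source buildup factor applied to each line element inside the integral. The attenuation coefficient, the simple linear buildup factor, and all dimensions are illustrative assumptions, and the additional shields handled by GARLIC are omitted.

```python
import numpy as np

mu = 0.06            # shield attenuation coefficient, 1/cm (assumed)
slab = 30.0          # slab thickness along its normal, cm (assumed)
S_L = 1.0e6          # line source strength, photons / (cm * s) (assumed)
half_len = 100.0     # half-length of the line source, cm (assumed)
det = np.array([0.0, 0.0, 80.0])   # detector position; slab normal along z (assumed)

def flux():
    # Discretise the line source (along x at z = 0) into point elements.
    xs = np.linspace(-half_len, half_len, 2001)
    dx = xs[1] - xs[0]
    r_vec = det - np.column_stack([xs, np.zeros_like(xs), np.zeros_like(xs)])
    r = np.linalg.norm(r_vec, axis=1)
    # Slant path through the slab for each element (slab perpendicular to z).
    mfp = mu * slab * (r / det[2])
    buildup = 1.0 + mfp                      # simple linear buildup factor (assumption)
    kernel = S_L * dx * np.exp(-mfp) * buildup / (4.0 * np.pi * r**2)
    return kernel.sum()

print(f"flux at detector: {flux():.3e} photons / (cm^2 s)")
```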

  18. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high-quality initial grids that enhance the performance of many adaptation methods.

  19. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high-quality initial grids that enhance the performance of many adaptation methods.

  20. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
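
    A minimal flat-earth Cartesian sketch (not the authors' spherical-earth formulation) of the core idea: represent the source body by equivalent point masses placed at Gauss-Legendre quadrature nodes and sum their contributions to the vertical gravity anomaly at observation points. The prism geometry, density contrast and node count are illustrative assumptions.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gl_nodes(a, b, n):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

# Rectangular prism source body (illustrative limits, metres; z positive downward)
# with a constant density contrast.
(x0, x1), (y0, y1), (z0, z1) = (-500, 500), (-500, 500), (1000, 2000)
rho = 300.0  # density contrast, kg/m^3 (assumed)
n = 8
xs, wx = gl_nodes(x0, x1, n)
ys, wy = gl_nodes(y0, y1, n)
zs, wz = gl_nodes(z0, z1, n)

def gz(px, py, pz=0.0):
    """Vertical gravity anomaly (m/s^2) at observation point (px, py, pz)."""
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    W = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]
    dx, dy, dz = X - px, Y - py, Z - pz
    r3 = (dx**2 + dy**2 + dz**2) ** 1.5
    return G * rho * np.sum(W * dz / r3)

profile = [gz(px, 0.0) * 1e5 for px in np.linspace(-3000, 3000, 7)]  # anomaly in mGal
print(profile)
```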

  1. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. 
Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.

  2. Radial Distribution of X-Ray Point Sources Near the Galactic Center

    NASA Astrophysics Data System (ADS)

    Hong, Jae Sub; van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas

    2009-11-01

    We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° from the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low-extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources coupled with field- and distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The result indicates that the GB X-ray population is highly concentrated at the center, more heavily than the stellar distribution models. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the X-ray spectra appear intrinsically harder the closer they are to the GC, but adding an iron emission line at 6.7 keV to the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources can be of the same or a similar type. Their X-ray luminosity and spectral properties support the idea that the most likely candidate is magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs). Their observed number density is also consistent with the majority being IPs, provided the relative CV-to-star density in the GB is not smaller than the value in the local solar neighborhood.

  3. Probing dim point sources in the inner Milky Way using PCAT

    NASA Astrophysics Data System (ADS)

    Daylan, Tansu; Portillo, Stephen K. N.; Finkbeiner, Douglas P.

    2017-01-01

    Poisson regression of the Fermi-LAT data in the inner Milky Way reveals an extended gamma-ray excess. An important question is whether the signal is coming from a collection of unresolved point sources, possibly old recycled pulsars, or constitutes a truly diffuse emission component. Previous analyses have relied on non-Poissonian template fits or wavelet decomposition of the Fermi-LAT data, which find evidence for a population of dim point sources just below the 3FGL flux limit. In order to be able to draw conclusions about the flux distribution of point sources at the dim end, we employ a Bayesian trans-dimensional MCMC framework by taking samples from the space of catalogs consistent with the observed gamma-ray emission in the inner Milky Way. The software implementation, PCAT (Probabilistic Cataloger), is designed to efficiently explore that catalog space in the crowded field limit such as in the galactic plane, where the model PSF, point source positions and fluxes are highly degenerate. We thus generate fair realizations of the underlying MSP population in the inner galaxy and constrain the population characteristics such as the radial and flux distribution of such sources.

  4. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    NASA Astrophysics Data System (ADS)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity Lν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  5. Identifying and characterizing major emission point sources as a basis for geospatial distribution of mercury emissions inventories

    NASA Astrophysics Data System (ADS)

    Steenhuisen, Frits; Wilson, Simon J.

    2015-07-01

    Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emissions areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emission is required. This paper describes the methodology applied, and the results of work that was conducted to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially-distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geo-spatially distribute emissions, such as use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point-sources are discussed. These include improvements to both national (geo-referenced) emission inventories and also to other resources that can be employed when such national inventories are lacking.

  6. Modeling deep brain stimulation: point source approximation versus realistic representation of the electrode

    NASA Astrophysics Data System (ADS)

    Zhang, Tianhe C.; Grill, Warren M.

    2010-12-01

    Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
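
    A minimal sketch of the point-source side of the comparison: the extracellular potential along a model axon produced by a monopolar point current source in a homogeneous isotropic medium, V(r) = I/(4πσr), together with its second spatial difference (the activating function) as a rough predictor of excitation. Stimulus amplitude, conductivity and geometry are illustrative; the finite element model of the DBS 3389 lead is not reproduced here.

```python
import numpy as np

sigma = 0.2      # S/m, assumed bulk tissue conductivity
I = -1.0e-3      # A, monopolar cathodic stimulus amplitude (assumed)

def point_source_potential(r):
    """Extracellular potential (V) at distance r (m) from a point current source."""
    return I / (4.0 * np.pi * sigma * r)

# Axon running parallel to the electrode axis at 2 mm lateral distance (assumed geometry).
z = np.linspace(-10e-3, 10e-3, 201)           # node positions along the axon, m
r = np.sqrt((2e-3) ** 2 + z**2)
v_ext = point_source_potential(r)

# The "activating function" (second spatial difference of v_ext) is a common predictor
# of where an axon is depolarised by the extracellular field.
af = np.diff(v_ext, 2)
print("peak activating function at node:", int(np.argmax(af)) + 1)
```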

  7. The distribution of infrared point sources in nearby elliptical galaxies

    NASA Astrophysics Data System (ADS)

    Gogoi, Rupjyoti; Shalima, P.; Misra, Ranjeev

    2018-02-01

    Infrared (IR) point sources, as observed by Spitzer in nearby early-type galaxies, should either be bright sources in the galaxy, such as globular clusters, or background sources such as AGNs. These objects are often counterparts of sources in other wavebands such as optical and X-rays, and the IR information provides crucial information regarding their nature. However, many of the IR sources may be background objects and it is important to identify them or at least quantify the level of background contamination. Moreover, the distribution of these IR point sources in flux, distance from the centre and colour would be useful in understanding their origin. Archival Spitzer IRAC images provide a unique opportunity for such a study and here we present the results of such an analysis for four nearby galaxies, NGC 1399, NGC 2768, NGC 4365 and NGC 4649. We estimate the background contamination using several blank fields. Our results suggest that IR colours can be effectively used to differentiate between sources in the galaxy and background ones. In particular we find that sources having AGN-like colours are indeed consistent with being background AGNs. For sources with non-AGN-like colours we compute the distribution of flux and normalised distance from the centre, which is found to be of a power-law form. Although our sample size is small, the power-law indices for the galaxies differ, indicating perhaps that the galaxy environment may play a part in their origin and nature.

  8. An improved DPSM technique for modelling ultrasonic fields in cracked solids

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique

    2007-04-01

    In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow-region problem to some extent. Complete removal of the shadow-region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which does not give any special consideration to the point sources in the shadow region, and by the proposed modified technique, which nullifies the contributions of the point sources in the shadow region, are compared. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
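
    A minimal sketch of the conventional DPSM superposition step described above: the field at a target point is the sum of spherical-wave contributions from point sources distributed just behind the transducer face. The frequency, medium, source spacing and uniform source strengths are illustrative assumptions, and no shadow-region or CSR correction is applied.

```python
import numpy as np

k = 2 * np.pi * 2.0e6 / 1500.0          # wavenumber for 2 MHz in water (1500 m/s), assumed

# Point sources on a 10 mm x 10 mm grid, slightly behind the transducer face (z = -0.2 mm).
xs = np.linspace(-5e-3, 5e-3, 21)
src = np.array([(x, y, -0.2e-3) for x in xs for y in xs])
strength = np.ones(len(src))            # uniform source strengths (assumption)

def pressure(r_target):
    """Complex field at r_target as a superposition of point-source spherical waves."""
    d = np.linalg.norm(src - r_target, axis=1)
    return np.sum(strength * np.exp(1j * k * d) / (4 * np.pi * d))

# On-axis field magnitude at a few distances from the transducer face.
for z in (5e-3, 10e-3, 20e-3):
    print(z, abs(pressure(np.array([0.0, 0.0, z]))))
```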

  9. Interferometry with flexible point source array for measuring complex freeform surface and its design algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo

    2018-06-01

    The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve the measurement efficiency by obtaining the optimum point source array for different workpieces before TWI measurements. For the purpose of forming a point source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array according to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be switched on and off independently. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.

  10. Evaluation of the AnnAGNPS model for predicting runoff and sediment yield in a small Mediterranean agricultural watershed in Navarre (Spain)

    USDA-ARS?s Scientific Manuscript database

    AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...

  11. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza

    The source-count distribution as a function of their flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/-1.3) × 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1 (+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to be at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.

  12. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-26

    The source-count distribution as a function of their flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/-1.3) × 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1 (+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to be at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
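
    A short sketch of the broken power law dN/dS quoted above, using the best-fit break flux and indices from the abstract (n1 = 3.1 above the break, n2 = 1.97 below, break at 2.1 × 10^-8 cm^-2 s^-1); the overall normalisation is arbitrary here.

```python
import numpy as np

S_b, n1, n2 = 2.1e-8, 3.1, 1.97   # break flux (cm^-2 s^-1) and power-law indices from the abstract

def dnds(S, N0=1.0):
    """Differential source counts dN/dS for a broken power law, continuous at S_b."""
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b,
                    N0 * (S / S_b) ** (-n1),    # steep branch above the break
                    N0 * (S / S_b) ** (-n2))    # harder branch below the break

print(dnds([1e-10, 1e-9, 1e-8, 1e-7]))
```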

  13. A Direction Finding Method with A 3-D Array Based on Aperture Synthesis

    NASA Astrophysics Data System (ADS)

    Li, Shiwen; Chen, Liangbing; Gao, Zhaozhao; Ma, Wenfeng

    2018-01-01

    Direction finding for electronic warfare applications should provide as wide a field of view as possible. However, the maximum unambiguous field of view for conventional direction finding methods is a hemisphere; they cannot distinguish the direction of arrival of signals from the back lobe of the array. In this paper, a full 3-D direction finding method based on aperture synthesis radiometry is proposed. The model of the direction finding system is illustrated, and the fundamentals are presented. The relationship between the outputs of the measurements of a 3-D array and the 3-D power distribution of the point sources can be represented by a 3-D Fourier transform, and the 3-D power distribution of the point sources can then be reconstructed by an inverse 3-D Fourier transform. In order to display the 3-D power distribution of the point sources conveniently, the whole spherical distribution is represented by two 2-D circular distribution images, one for the upper hemisphere and the other for the lower hemisphere. A numerical simulation is designed and conducted to demonstrate the feasibility of the method. The results show that the method can correctly estimate an arbitrary direction of arrival of signals in 3-D space.
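
    A toy sketch of the Fourier relation the method relies on: the correlation measurements of a 3-D array sample the 3-D Fourier transform of the source power distribution, so an inverse 3-D Fourier transform recovers that distribution. Complete sampling on a regular grid is assumed here, which a real 3-D array would only approximate, and the two injected point-source directions are illustrative.

```python
import numpy as np

N = 32
power = np.zeros((N, N, N))
power[5, 20, 9] = 1.0            # two illustrative point sources on the direction grid
power[25, 7, 16] = 0.5

visibilities = np.fft.fftn(power)             # idealised, fully sampled measurement set
recovered = np.fft.ifftn(visibilities).real   # reconstruction by inverse 3-D Fourier transform

peaks = np.argwhere(recovered > 0.25)
print(peaks)                                  # recovers the two injected source directions
```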

  14. Contaminant transport from point source on water surface in open channel flow with bed absorption

    NASA Astrophysics Data System (ADS)

    Guo, Jinlan; Wu, Xudong; Jiang, Weiquan; Chen, Guoqian

    2018-06-01

    Studying solute dispersion in channel flows is of significance for environmental and industrial applications. The two-dimensional concentration distribution for a most typical case of a point source release on the free water surface in a channel flow with bed absorption is presented by means of Chatwin's long-time asymptotic technique. Five basic characteristics of Taylor dispersion and the vertical mean concentration distribution with skewness and kurtosis modifications are also analyzed. The results reveal that bed absorption affects both the longitudinal and vertical concentration distributions and causes the contaminant cloud to concentrate in the upper layer. Additionally, the cross-sectional concentration distribution shows an asymptotic Gaussian distribution at large time which is unaffected by the bed absorption. The vertical concentration distribution is found to be nonuniform even at large time. The obtained results are essential for practical implementations with strict environmental standards.

  15. What are single photons good for?

    NASA Astrophysics Data System (ADS)

    Sangouard, Nicolas; Zbinden, Hugo

    2012-10-01

    According to a long-held preconception, photons play a central role in present-day quantum technologies. But what precisely are sources producing photons one by one good for? Contrary to what is often suggested, we show that single-photon sources are not helpful for point-to-point quantum key distribution, because faint laser pulses do the job comfortably. However, there is no doubt about the usefulness of sources producing single photons for future quantum technologies. In particular, we show how single-photon sources could become the seed of a revolution in the framework of quantum communication, making the security of quantum key distribution device-independent or extending quantum communication over many hundreds of kilometers. Hopefully, these promising applications will provide a guideline for researchers to develop more and more efficient sources, producing narrowband, pure, and indistinguishable photons at appropriate wavelengths.

  16. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity L_ν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  17. High Resolution Geological Site Characterization Utilizing Ground Motion Data

    DTIC Science & Technology

    1992-06-26

    Hayward, 1992). Acquisition: The source characterization array was composed of 28 stations evenly distributed on the circumference of a ... of analog anti-alias filters; no prefiltering was applied during acquisition. Results: We deployed 9 different sources within the source ... calculated using a 1024-point Hamming window applied to the original 1000-point detrended and padded time series. These are then contoured as a ...
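
    The spectral-estimation step named in the excerpt (a 1024-point Hamming window applied to a detrended, padded 1000-point record) can be sketched as follows. The synthetic trace and the sampling rate are hypothetical placeholders; only the windowing recipe follows the report.

    ```python
    import numpy as np
    from scipy.signal import detrend

    fs = 250.0                                   # hypothetical sampling rate, Hz
    t = np.arange(1000) / fs
    # purely illustrative trace: a tone plus a linear trend plus noise
    trace = np.sin(2 * np.pi * 12.0 * t) + 0.1 * t + 0.2 * np.random.randn(t.size)

    x = detrend(trace)                           # remove the linear trend
    x = np.pad(x, (0, 1024 - x.size))            # zero-pad the 1000 points to 1024
    x *= np.hamming(x.size)                      # 1024-point Hamming taper

    spectrum = np.abs(np.fft.rfft(x)) ** 2       # power spectrum of one record
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    print(f"spectral peak near {freqs[spectrum.argmax()]:.1f} Hz")
    ```

    Repeating this for each source or station and stacking the spectra would give the kind of contoured spectral sections the excerpt refers to.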

  18. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particle sizes up to 500 µm were highly variable both within and between source areas. Results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.

  19. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

    The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  20. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir.

    PubMed

    Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-02-01

    The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem-and-ovoid applicator on the dose distribution inside the phantom using MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult female patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and the sources. The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.
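
    The superposition idea itself — accumulating the dose at each dosimetry point as a dwell-time-weighted sum over source positions — is easy to sketch. In the toy below, an inverse-square kernel stands in for the per-source dose rates that the study takes from MC simulation; the dwell positions, dwell times, point coordinates, and reference dose rate are all hypothetical.

    ```python
    import numpy as np

    REF_DOSE_RATE = 0.4   # hypothetical dose rate at 1 cm, cGy per second

    def dose_kernel(point, dwell_pos):
        """Stand-in per-source kernel: point source with inverse-square fall-off."""
        r = np.linalg.norm(np.asarray(point) - np.asarray(dwell_pos))  # cm
        return REF_DOSE_RATE / r**2

    # Hypothetical tandem dwell positions (cm) and dwell times (s).
    dwell_positions = [(0.0, 0.0, z) for z in np.arange(0.0, 5.0, 0.5)]
    dwell_times = [10.0] * len(dwell_positions)

    dosimetry_points = {
        "A_left": (2.0, 0.0, 2.0), "A_right": (-2.0, 0.0, 2.0),
        "bladder": (0.0, 2.5, 1.0), "rectum": (0.0, -2.5, 1.0),
    }

    for name, p in dosimetry_points.items():
        dose = sum(t * dose_kernel(p, s)
                   for s, t in zip(dwell_positions, dwell_times))
        print(f"{name}: {dose:.1f} cGy from {len(dwell_positions)} dwell positions")
    ```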

  1. Development of watershed and reference loads for a TMDL in Charleston Harbor System, SC.

    Treesearch

    Silong Lu; Devenra Amatya; Jamie Miller

    2005-01-01

    It is essential to determine point and non-point source loads and their distribution for development of a dissolved oxygen (DO) Total Maximum Daily Load (TMDL). A series of models were developed to assess sources of oxygen-demand loadings in Charleston Harbor, South Carolina. These oxygen-demand loadings included nutrients and BOD. Stream flow and nutrient...

  2. 7 CFR 1730.62 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.62 Definitions. “Distributed resources” as used in this subpart means sources of electric power that are not directly connected... to the borrower's electric power system through a point of common coupling. Distributed resources...

  3. 7 CFR 1730.62 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.62 Definitions. “Distributed resources” as used in this subpart means sources of electric power that are not directly connected... to the borrower's electric power system through a point of common coupling. Distributed resources...

  4. 7 CFR 1730.62 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.62 Definitions. “Distributed resources” as used in this subpart means sources of electric power that are not directly connected... to the borrower's electric power system through a point of common coupling. Distributed resources...

  5. 7 CFR 1730.62 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.62 Definitions. “Distributed resources” as used in this subpart means sources of electric power that are not directly connected... to the borrower's electric power system through a point of common coupling. Distributed resources...

  6. 7 CFR 1730.62 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.62 Definitions. “Distributed resources” as used in this subpart means sources of electric power that are not directly connected... to the borrower's electric power system through a point of common coupling. Distributed resources...

  7. Stabilizing operation point technique based on the tunable distributed feedback laser for interferometric sensors

    NASA Astrophysics Data System (ADS)

    Mao, Xuefeng; Zhou, Xinlei; Yu, Qingxu

    2016-02-01

    We describe a stabilizing operation point technique based on a tunable Distributed Feedback (DFB) laser for quadrature demodulation of interferometric sensors. By introducing automatic quadrature-point locking and periodic wavelength-tuning compensation into the interferometric system, the operation point is stabilized even when the system suffers various environmental perturbations. To demonstrate the feasibility of this technique, experiments were performed using a tunable DFB laser as the light source to interrogate an extrinsic Fabry-Perot interferometric vibration sensor and a diaphragm-based acoustic sensor. Experimental results show that effective tracking of the quadrature point (Q-point) was realized.

  8. A model for jet-noise analysis using pressure-gradient correlations on an imaginary cone

    NASA Technical Reports Server (NTRS)

    Norum, T. D.

    1974-01-01

    The technique for determining the near and far acoustic field of a jet through measurements of pressure-gradient correlations on an imaginary conical surface surrounding the jet is discussed. The necessary analytical developments are presented, and their feasibility is checked by using a point source as the sound generator. The distribution of the apparent sources on the cone, equivalent to the point source, is determined in terms of the pressure-gradient correlations.

  9. Assessment of Groundwater Susceptibility to Non-Point Source Contaminants Using Three-Dimensional Transient Indexes.

    PubMed

    Zhang, Yong; Weissmann, Gary S; Fogg, Graham E; Lu, Bingqing; Sun, HongGuang; Zheng, Chunmiao

    2018-06-05

    Groundwater susceptibility to non-point source contamination is typically quantified by stable indexes, while groundwater quality evolution (or deterioration globally) can be a long-term process that may last for decades and exhibit strong temporal variations. This study proposes a three-dimensional (3D), transient index map built upon physical models to characterize the complete temporal evolution of deep aquifer susceptibility. For illustration purposes, the backward travel time probability density (BTTPD) approach is extended to assess the 3D deep groundwater susceptibility to non-point source contamination within a sequence stratigraphic framework observed in the Kings River fluvial fan (KRFF) aquifer. The BTTPD, which represents complete age distributions underlying a single groundwater sample in a regional-scale aquifer, is used as a quantitative, transient measure of aquifer susceptibility. The resultant 3D imaging of susceptibility using the simulated BTTPDs in KRFF reveals the strong influence of regional-scale heterogeneity on susceptibility. The regional-scale incised-valley fill deposits increase the susceptibility of aquifers by enhancing rapid downward solute movement and displaying relatively narrow and young age distributions. In contrast, the regional-scale sequence-boundary paleosols within the open-fan deposits "protect" deep aquifers by slowing downward solute movement and displaying relatively broad and old age distributions. Further comparison of the simulated susceptibility index maps to known contaminant distributions shows that these maps are generally consistent with the high concentration and rapid evolution of 1,2-dibromo-3-chloropropane (DBCP) in groundwater around the incised-valley fill since the 1970s. This application demonstrates that the BTTPDs can be used as quantitative and transient measures of deep aquifer susceptibility to non-point source contamination.

  10. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw ~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.

  11. Strategies for lidar characterization of particulates from point and area sources

    NASA Astrophysics Data System (ADS)

    Wojcik, Michael D.; Moore, Kori D.; Martin, Randal S.; Hatfield, Jerry

    2010-10-01

    Use of ground-based remote sensing technologies such as scanning lidar (light detection and ranging) systems has gained traction in characterizing ambient aerosols due to key advantages such as a wide area of regard (10 km²), fast response time, high spatial resolution (<10 m), and high sensitivity. Energy Dynamics Laboratory and Utah State University, in conjunction with the USDA-ARS, have developed a three-wavelength scanning lidar system called Aglite that has been successfully deployed to characterize particle motion, concentration, and size distribution at both point and diffuse area sources in agricultural and industrial settings. A suite of mass-based and size-distribution point sensors is used to locally calibrate the lidar. Generating meaningful particle size distribution, mass concentration, and emission rate results based on lidar data depends on strategic on-site deployment of these point sensors together with successful local meteorological measurements. Deployment strategies learned from field use of this measurement system over five years include the characterization of local meteorology and its predictability prior to deployment, the placement of point sensors to prevent contamination and overloading, the positioning of the lidar and beam plane to avoid hard-target interferences, and the usefulness of photographic and written observational data.

  12. Statistical interpretation of pollution data from satellites. [for levels distribution over metropolitan area

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Green, R. N.; Young, G. R.

    1974-01-01

    The NIMBUS-G environmental monitoring satellite has onboard an instrument (a gas correlation spectrometer) for measuring the mass of a given pollutant within a gas volume. The present paper treats the problem of how this type of measurement can be used to estimate the distribution of pollutant levels in a metropolitan area. Estimation methods are used to develop this distribution. The pollution concentration caused by a point source is modeled as a Gaussian plume. The uncertainty in the measurements is used to determine the accuracy with which the source strength, wind velocity, diffusion coefficients, and source location can be estimated.
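
    The Gaussian-plume forward model named above has a standard closed form for the concentration downwind of a continuous point source; a minimal sketch is given below. The power-law dispersion coefficients and all numerical parameters are hypothetical placeholders, not values from the paper.

    ```python
    import numpy as np

    def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=50.0, a=0.08, b=0.06):
        """Concentration at (x, y, z) m downwind of a point source of strength Q.

        Q: emission rate, u: wind speed, H: effective release height;
        a, b: hypothetical coefficients of the sigma_y, sigma_z growth laws.
        """
        sigma_y = a * x**0.9           # lateral spread vs downwind distance
        sigma_z = b * x**0.85          # vertical spread vs downwind distance
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
        return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Ground-level centerline concentration from 0.5 to 10 km downwind.
    x = np.linspace(500.0, 10000.0, 20)
    c = gaussian_plume(x, y=0.0, z=0.0)
    print(f"peak centerline concentration: {c.max():.3e} (arbitrary units)")
    ```

    In an estimation setting such as the one described, this forward model would be evaluated for trial values of the source strength, wind, diffusion coefficients, and location, and compared against the column-mass measurements.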

  13. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  14. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  15. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir

    PubMed Central

    Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-01-01

    Purpose The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult female patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and the sources. Results The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061

  16. Theoretical evaluation of accuracy in position and size of brain activity obtained by near-infrared topography

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji

    2004-06-01

    Near-infrared (NIR) topography can obtain a topographic distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points on the accuracy of the topographic image. Light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional topographic image reconstructed with a larger interval of measurement points. Near-infrared topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.

  17. Time-frequency approach to underdetermined blind source separation.

    PubMed

    Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong

    2012-02-01

    This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, in which the negative values of the auto WVD of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD values of the sources at every auto-term TF point can be found exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. The extraction of auto-term TF points is discussed further, and numerical simulation results are presented to show the superiority of the proposed algorithm in comparison with existing ones.
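
    For reference, the Wigner-Ville distribution on which the approach is built can be computed directly from the instantaneous autocorrelation of an analytic signal. The sketch below is a plain discrete pseudo-WVD with no smoothing or cross-term suppression, and it is not the authors' algorithm; the frequency-bin scaling noted in the docstring reflects the two-sample effective lag step of the WVD.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def wigner_ville(x):
        """Discrete pseudo Wigner-Ville distribution of an analytic signal x.

        Returns an (N, N) real array: rows are time indices, columns frequency
        bins; bin k corresponds to normalized frequency k / (2N) because the
        effective lag step of the WVD kernel is two samples.
        """
        x = np.asarray(x, dtype=complex)
        N = x.size
        W = np.zeros((N, N))
        for n in range(N):
            m_max = min(n, N - 1 - n)           # lag limited by the record ends
            kernel = np.zeros(N, dtype=complex)
            for m in range(-m_max, m_max + 1):
                kernel[m % N] += x[n + m] * np.conj(x[n - m])
            W[n] = np.fft.fft(kernel).real
        return W

    # Illustrative use: a single tone at 0.1 cycles/sample.
    t = np.arange(256)
    x = hilbert(np.cos(2 * np.pi * 0.1 * t))
    W = wigner_ville(x)
    print(W[128].argmax())   # energy concentrated near bin 2 * 256 * 0.1 ≈ 51
    ```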

  18. Thermal power systems, point-focusing distributed receiver technology project. Volume 2: Detailed report

    NASA Technical Reports Server (NTRS)

    Lucas, J.

    1979-01-01

    Thermal or electrical power from the sun's radiated energy through Point-Focusing Distributed Receiver technology is the goal of this Project. The energy thus produced must be economically competitive with other sources. The Project supports the industrial development of technology and hardware for extracting energy from solar power to achieve the stated goal. Present studies aim to concentrate the solar energy, through mirrors or lenses, onto a working fluid or gas and, through a power converter, change it into a form of energy useful to man. Rankine-cycle and Brayton-cycle engines are currently being developed as the most promising energy converters for near-future needs.

  19. Analysis of an ultrasonically rotating droplet by moving particle semi-implicit and distributed point source method in a rotational coordinate

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2017-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet in a centrifugal (rotating) coordinate system is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. The centrifugal coordinate system is adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in the global-coordinate calculation. Consequently, the duration of stable calculation is 30 times longer than that in the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.

  20. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source.

    PubMed

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K

    2015-10-15

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.

  1. Flows and Stratification of an Enclosure Containing Both Localised and Vertically Distributed Sources of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2013-11-01

    We examine the flows and stratification established in a naturally ventilated enclosure containing both a localised and a vertically distributed source of buoyancy. The enclosure is ventilated through upper and lower openings which connect the space to an external ambient. Small-scale laboratory experiments were carried out with water as the working medium and buoyancy driven directly by temperature differences. A point-source plume gave localised heating, while the distributed source was driven by a controllable heater mat located in the side wall of the enclosure. The transient temperatures, as well as steady-state temperature profiles, were recorded and are reported here. The temperature profiles inside the enclosure were found to depend on the effective opening area A*, a combination of the upper and lower openings, and on the ratio of buoyancy fluxes from the distributed and localised sources, Ψ = B_w/B_p. Industrial CASE award with ARUP.

  2. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    PubMed

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on simulation of the total nitrogen (TN) loss intensity of all 38 subbasins, spatial distribution characteristics of nitrogen loss and critical source areas were analyzed at three time scales: yearly average, monthly average, and rainstorm-flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences of TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order monthly average > yearly average > rainstorm-flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At different time scales, land use types (such as farmland and forest) were always the dominant factor affecting the spatial distribution of nitrogen loss, while precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm-flood events occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  3. Myocardial Drug Distribution Generated from Local Epicardial Application: Potential Impact of Cardiac Capillary Perfusion in a Swine Model Using Epinephrine

    PubMed Central

    Maslov, Mikhail Y.; Edelman, Elazer R.; Pezone, Matthew J.; Wei, Abraham E.; Wakim, Matthew G.; Murray, Michael R.; Tsukada, Hisashi; Gerogiannis, Iraklis S.; Groothuis, Adam; Lovich, Mark A.

    2014-01-01

    Prior studies in small mammals have shown that local epicardial application of inotropic compounds drives myocardial contractility without systemic side effects. Myocardial capillary blood flow, however, may be more significant in larger species than in small animals. We hypothesized that bulk perfusion in capillary beds of the large mammalian heart enhances drug distribution after local release, but also clears more drug from the tissue target than in small animals. Epicardial (EC) drug releasing systems were used to apply epinephrine to the anterior surface of the left heart of swine in either point-sourced or distributed configurations. Following local application or intravenous (IV) infusion at the same dose rates, hemodynamic responses, epinephrine levels in the coronary sinus and systemic circulation, and drug deposition across the ventricular wall, around the circumference and down the axis, were measured. EC delivery via point-source release generated transmural epinephrine gradients directly beneath the site of application extending into the middle third of the myocardial thickness. Gradients in drug deposition were also observed down the length of the heart and around the circumference toward the lateral wall, but not the interventricular septum. These gradients extended further than might be predicted from simple diffusion. The circumferential distribution following local epinephrine delivery from a distributed source to the entire anterior wall drove drug toward the inferior wall, further than with point-source release, but again, not to the septum. This augmented drug distribution away from the release source, down the axis of the left ventricle, and selectively towards the left heart follows the direction of capillary perfusion away from the anterior descending and circumflex arteries, suggesting a role for the coronary circulation in determining local drug deposition and clearance. The dominant role of the coronary vasculature is further suggested by the elevated drug levels in the coronary sinus effluent. Indeed, plasma levels, hemodynamic responses, and myocardial deposition remote from the point of release were similar following local EC or IV delivery. Therefore, the coronary vasculature shapes the pharmacokinetics of local myocardial delivery of small catecholamine drugs in large animal models. Optimal design of epicardial drug delivery systems must consider the underlying bulk capillary perfusion currents within the tissue to deliver drug to tissue targets and may favor therapeutic molecules with better potential retention in myocardial tissue. PMID:25234821

  4. Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco

    2016-08-01

    Statistical properties of photon count maps have recently been proven to be a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  5. GRIS observations of Al-26 gamma-ray line emission from two points in the Galactic plane

    NASA Technical Reports Server (NTRS)

    Teegarden, B. J.; Barthelmy, S. D.; Gehrels, N.; Tueller, J.; Leventhal, M.

    1991-01-01

    Both of the Gamma-Ray Imaging Spectrometer (GRIS) observations of the Galactic center region, at l = 0 and 335 deg, respectively, detected Al-26 gamma-ray line emission. While these observations are consistent with the assumed high-energy gamma-ray distribution, they are consistent with other distributions as well. The data suggest that the Al-26 emission is distributed over Galactic longitude rather than being confined to a point source. The GRIS data also indicate that the 1809 keV line is broadened.

  6. The Spitzer-IRAC Point-source Catalog of the Vela-D Cloud

    NASA Astrophysics Data System (ADS)

    Strafella, F.; Elia, D.; Campeggio, L.; Giannini, T.; Lorenzetti, D.; Marengo, M.; Smith, H. A.; Fazio, G.; De Luca, M.; Massi, F.

    2010-08-01

    This paper presents observations of Cloud D in the Vela Molecular Ridge, obtained with the Infrared Array Camera (IRAC) on board the Spitzer Space Telescope at the wavelengths λ = 3.6, 4.5, 5.8, and 8.0 μm. A photometric catalog of point sources, covering a field of approximately 1.2 deg², has been extracted and complemented with additional available observational data in the millimeter region. Previous observations of the same region, obtained with the Spitzer MIPS camera in the photometric bands at 24 μm and 70 μm, have also been reconsidered to allow an estimate of the spectral slope of the sources in a wider spectral range. A total of 170,299 point sources, detected at the 5σ sensitivity level in at least one of the IRAC bands, are reported in the catalog. There were 8796 sources for which good-quality photometry was obtained in all four IRAC bands. For this sample, a preliminary characterization of the young stellar population based on the determination of the spectral slope is discussed; combining this with diagnostics in the color-magnitude and color-color diagrams, the relative population of young stellar objects (YSOs) in different evolutionary classes has been estimated and a total of 637 candidate YSOs have been selected. The main differences in their relative abundances have been highlighted, and a brief account of their spatial distribution is given. The star formation rate has also been estimated and compared with the values derived for other star-forming regions. Finally, an analysis of the spatial distribution of the sources by means of the two-point correlation function shows that the younger population, constituted by the Class I and flat-spectrum sources, is significantly more clustered than the Class II and III sources.
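
    A two-point correlation comparison of the kind mentioned in the last sentence can be sketched with the simple natural estimator w(θ) = DD/RR − 1 on a flat field. The synthetic "clustered" catalogue, field size, and angular bins below are hypothetical; the paper's actual estimator and coordinate handling are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def pair_hist(points, bins):
        """Histogram of pairwise separations, each pair counted once."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        iu = np.triu_indices(len(points), k=1)
        return np.histogram(d[iu], bins=bins)[0].astype(float)

    def two_point(points, bins, n_random=1000, field=1.0):
        """Natural estimator DD/RR - 1 against a uniform random catalogue."""
        dd = pair_hist(points, bins) / (len(points) * (len(points) - 1) / 2)
        random = rng.uniform(0.0, field, size=(n_random, 2))
        rr = pair_hist(random, bins) / (n_random * (n_random - 1) / 2)
        return dd / rr - 1.0

    # Hypothetical clustered catalogue: 600 sources scattered around 15 centres.
    centres = rng.uniform(0.1, 0.9, size=(15, 2))
    sources = centres[rng.integers(0, 15, 600)] + 0.02 * rng.normal(size=(600, 2))

    bins = np.linspace(0.01, 0.5, 15)
    print(two_point(sources, bins))   # large positive values at small separations
    ```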

  7. Controlling Continuous-Variable Quantum Key Distribution with Entanglement in the Middle Using Tunable Linear Optics Cloning Machines

    NASA Astrophysics Data System (ADS)

    Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying

    2017-02-01

    Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to the traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle and resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol that applies a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice. Simulation results show that we can achieve the optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.

  8. An adaptive grid scheme using the boundary element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munipalli, R.; Anderson, D.A.

    1996-09-01

    A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.

  9. Community shift of biofilms developed in a full-scale drinking water distribution system switching from different water sources.

    PubMed

    Li, Weiying; Wang, Feng; Zhang, Junpeng; Qiao, Yu; Xu, Chen; Liu, Yao; Qian, Lin; Li, Wenming; Dong, Bingzhi

    2016-02-15

    The bacterial community of biofilms in drinking water distribution systems (DWDS) with various water sources has rarely been reported. In this research, biofilms were sampled at three points (A, B, and C) during the river water source phase (phase I), the interim period (phase II), and the reservoir water source phase (phase III), and the biofilm community was determined using the 454-pyrosequencing method. Results showed that microbial diversity declined in phase II but increased in phase III. The primary phylum was Proteobacteria during all three phases, while the dominant class at points A and B was Betaproteobacteria (>49%) during all phases; at point C, however, it changed to Holophagae in phase II (62.7%) and Actinobacteria in phase III (35.6%), which was closely related to its water quality. A more remarkable community shift was found at the genus level. In addition, the analysis showed that water quality parameters together could significantly affect microbial diversity, while the nutrient composition (e.g., C/N ratio) of the water environment might determine the microbial community. Furthermore, Mycobacterium spp. and Pseudomonas spp. were detected in the biofilm, which warrants attention. This study revealed that water source switching produced a substantial impact on the biofilm community. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. A 3D tomographic reconstruction method to analyze Jupiter's electron-belt emission observations

    NASA Astrophysics Data System (ADS)

    Santos-Costa, Daniel; Girard, Julien; Tasse, Cyril; Zarka, Philippe; Kita, Hajime; Tsuchiya, Fuminori; Misawa, Hiroaki; Clark, George; Bagenal, Fran; Imai, Masafumi; Becker, Heidi N.; Janssen, Michael A.; Bolton, Scott J.; Levin, Steve M.; Connerney, John E. P.

    2017-04-01

    Multi-dimensional reconstruction techniques for Jupiter's synchrotron radiation from radio-interferometric observations were first developed by Sault et al. [Astron. Astrophys., 324, 1190-1196, 1997]. The tomographic-like technique introduced 20 years ago permitted the first 3-dimensional mapping of the brightness distribution around the planet. This technique has the advantage of being only weakly dependent on planetary field models. It also does not require any knowledge of the energy and spatial distributions of the radiating electrons. On the downside, it assumes that the volume emissivity of any point source around the planet is isotropic. This assumption becomes incorrect when mapping the brightness distribution for non-equatorial point sources or for any point source from Juno's perspective. In this paper, we present our modeling effort to bypass the isotropy issue. Our approach is to use radio-interferometric observations and determine the 3-D brightness distribution in a cylindrical coordinate system. For each set (z, r), we constrain the longitudinal distribution with a Fourier series, and the anisotropy is addressed with a simple periodic function when possible. We develop this new method over a wide range of frequencies using past VLA and LOFAR observations of Jupiter. We plan to test this reconstruction method with observations of Jupiter that are currently being carried out with LOFAR and GMRT in support of the Juno mission. We describe how this new 3D tomographic reconstruction method provides new model constraints on the energy and spatial distributions of Jupiter's ultra-relativistic electrons close to the planet and can be used to interpret Juno MWR observations of Jupiter's electron-belt emission and to assist in evaluating the background noise from the radiation environment in the atmospheric measurements.

  11. The social ecology of water in a Mumbai slum: failures in water quality, quantity, and reliability.

    PubMed

    Subbaraman, Ramnath; Shitole, Shrutika; Shitole, Tejal; Sawant, Kiran; O'Brien, Jennifer; Bloom, David E; Patil-Deshmukh, Anita

    2013-02-26

    Urban slums in developing countries that are not recognized by the government often lack legal access to municipal water supplies. This results in the creation of insecure "informal" water distribution systems (i.e., community-run or private systems outside of the government's purview) that may increase water-borne disease risk. We evaluate an informal water distribution system in a slum in Mumbai, India using commonly accepted health and social equity indicators. We also identify predictors of bacterial contamination of drinking water using logistic regression analysis. Data were collected through two studies: the 2008 Baseline Needs Assessment survey of 959 households and the 2011 Seasonal Water Assessment, in which 229 samples were collected for water quality testing over three seasons. Water samples were collected in each season from the following points along the distribution system: motors that directly tap the municipal supply (i.e., "point-of-source" water), hoses going to slum lanes, and storage and drinking water containers from 21 households. Depending on season, households spend an average of 52 to 206 times more than the standard municipal charge of Indian rupees 2.25 (US dollars 0.04) per 1000 liters for water, and, in some seasons, 95% use less than the WHO-recommended minimum of 50 liters per capita per day. During the monsoon season, 50% of point-of-source water samples were contaminated. Despite a lack of point-of-source water contamination in other seasons, stored drinking water was contaminated in all seasons, with rates as high as 43% for E. coli and 76% for coliform bacteria. In the multivariate logistic regression analysis, monsoon and summer seasons were associated with significantly increased odds of drinking water contamination. Our findings reveal severe deficiencies in water-related health and social equity indicators. All bacterial contamination of drinking water occurred due to post-source contamination during storage in the household, except during the monsoon season, when there was some point-of-source water contamination. This suggests that safe storage and household water treatment interventions may improve water quality in slums. Problems of exorbitant expense, inadequate quantity, and poor point-of-source quality can only be remedied by providing unrecognized slums with equitable access to municipal water supplies.

  12. The Distribution of Interplanetary Dust between 0.96 and 1.04 au as Inferred from Impacts on the STEREO Spacecraft Observed by the Heliospheric Imagers

    NASA Technical Reports Server (NTRS)

    Davis, C. J.; Davis, J. A.; Meyer-Vernet, Nicole; Crothers, S.; Lintott, C.; Smith, A.; Bamford, S.; Baeten, E. M. L.; SaintCyr, O. C.; Campbell-Brown, M.; hide

    2012-01-01

    The distribution of dust in the ecliptic plane between 0.96 and 1.04 au has been inferred from impacts on the two Solar Terrestrial Relations Observatory (STEREO) spacecraft through observation of secondary particle trails and unexpected off-points in the heliospheric imager (HI) cameras. This study made use of analysis carried out by members of a distributed web-based citizen science project, Solar Stormwatch. A comparison between observations of the brightest particle trails and a survey of fainter trails shows consistent distributions. While there is no obvious correlation between this distribution and the occurrence of individual meteor streams at Earth, there are some broad longitudinal features in these distributions that are also observed in sources of the sporadic meteor population. The different position of the HI instrument on the two STEREO spacecraft leads to each sampling different populations of dust particles. The asymmetry in the number of trails seen by each spacecraft and the fact that there are many more unexpected off-points in HI-B than in HI-A indicates that the majority of impacts are coming from the apex direction. For impacts causing off-points in the HI-B camera, these dust particles are estimated to have masses in excess of 10^-17 kg with radii exceeding 0.1 µm. For off-points observed in the HI-A images, which can only have been caused by particles travelling from the anti-apex direction, the distribution is consistent with that of secondary 'storm' trails observed by HI-B, providing evidence that these trails also result from impacts with primary particles from an anti-apex source. Investigating the mass distribution for the off-points of both HI-A and HI-B, it is apparent that the differential mass index of particles from the apex direction (causing off-points in HI-B) is consistently above 2. This indicates that the majority of the mass is within the smaller particles of this population. In contrast, the differential mass index of particles from the anti-apex direction (causing off-points in HI-A) is consistently below 2, indicating that the majority of the mass is to be found in the larger particles of this distribution.

  13. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first-collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances is also presented. These importance functions may be used to obtain skyshine dose-equivalent estimates for any known source energy-angle distribution.
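
    In discrete form, the folding step described above reduces to summing the source strength per energy group and source-cosine interval against the tabulated importance function for the chosen source-to-field-point distance. The sketch below uses the five cosine intervals named in the abstract but entirely hypothetical source strengths and importance values; it is not the DOT/GRTUNCL output.

    ```python
    import numpy as np

    # Energy-group upper bounds (MeV) and the source cosine intervals from the text.
    energy_groups = np.array([1.0, 10.0, 100.0, 400.0])
    cosine_bins = [(1.0, 0.8), (0.8, 0.6), (0.6, 0.4), (0.4, 0.2), (0.2, 0.0)]

    # Hypothetical source strength S[g, c]: neutrons/s in group g, cosine bin c.
    S = np.full((len(energy_groups), len(cosine_bins)), 1.0e6)

    # Hypothetical importance I[g, c] for one fixed distance: dose equivalent at
    # the field point per source neutron emitted in group g, cosine bin c.
    I = np.linspace(1.0e-18, 5.0e-18, S.size).reshape(S.shape)

    dose_equivalent = np.sum(S * I)   # discrete form of the double integral
    print(f"dose equivalent at the field point: {dose_equivalent:.2e} Sv/s")
    ```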

  14. A New Source Biasing Approach in ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevill, Aaron M; Mosher, Scott W

    2012-01-01

    The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of the w̄ correction factor.
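
    The quantity behind that correction is essentially the fraction of positions sampled in the simple bounding region that actually fall inside the complex source cell. The sketch below estimates such an acceptance fraction and its statistical uncertainty by direct sampling for a toy geometry (a unit sphere inside a cube); it only illustrates the idea and is not ADVANTG's stratified-sampling implementation or its normalization convention.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def in_cell(points):
        """Hypothetical cell test: keep points inside the unit sphere."""
        return np.linalg.norm(points, axis=1) <= 1.0

    n = 100_000
    points = rng.uniform(-1.0, 1.0, size=(n, 3))   # bounding region: cube [-1, 1]^3
    hits = in_cell(points)

    acceptance = hits.mean()
    stderr = hits.std(ddof=1) / np.sqrt(n)
    print(f"estimated acceptance fraction: {acceptance:.4f} +/- {stderr:.4f}")
    ```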

  15. Obtaining the phase in the star test using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro

    2004-10-01

    The star test is conceptually perhaps the most basic and simplest of all methods for testing image-forming optical systems: the irradiance distribution at the image of a point source (such as a star) is given by the point spread function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the point source. Conversely, if we know the aberrations introduced by the optical system, diffraction theory allows us to calculate the PSF. In this work we propose a method for finding the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved with a genetic algorithm. We also show that this method is robust to the noise introduced in recording the image. Results of the method are shown.
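    A minimal sketch of the idea (not the authors' implementation): the PSF is modeled as |FFT(pupil × exp(i·phase))|^2 with the phase parameterized by a small set of aberration coefficients, and a toy genetic algorithm searches for the coefficients that best reproduce a "measured" PSF. The two aberration terms, population settings and noise level are illustrative assumptions; a realistic star-test inversion would use a fuller aberration polynomial and more careful handling of degeneracies.

      import numpy as np

      N = 64
      yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
      rr = np.hypot(xx, yy)
      pupil = (rr <= 1.0).astype(float)

      def psf_from_coeffs(coeffs):
          # PSF = |FFT(pupil * exp(i*phase))|^2 with a two-term aberration polynomial
          a_defocus, a_coma = coeffs
          phase = a_defocus * (2 * rr**2 - 1) + a_coma * (3 * rr**3 - 2 * rr) * xx
          field = pupil * np.exp(1j * phase)
          psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
          return psf / psf.sum()

      rng = np.random.default_rng(1)
      true_coeffs = np.array([1.2, -0.7])
      measured = psf_from_coeffs(true_coeffs) + rng.normal(0.0, 1e-6, (N, N))  # noisy star-test image

      def fitness(c):
          return -np.sum((psf_from_coeffs(c) - measured) ** 2)

      # toy genetic algorithm: tournament selection, blend crossover, Gaussian mutation
      pop = rng.uniform(-3.0, 3.0, size=(60, 2))
      for generation in range(80):
          scores = np.array([fitness(c) for c in pop])
          parents = []
          for _ in range(2 * len(pop)):
              i, j = rng.integers(0, len(pop), 2)
              parents.append(pop[i] if scores[i] > scores[j] else pop[j])
          alpha = rng.uniform(0.0, 1.0, (len(pop), 1))
          children = alpha * np.array(parents[::2]) + (1 - alpha) * np.array(parents[1::2])
          pop = children + rng.normal(0.0, 0.05, children.shape)   # mutation

      best = pop[np.argmax([fitness(c) for c in pop])]
      print("recovered coefficients:", best, "   true:", true_coeffs)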

  16. Temporal and spatial distributions of nutrients under the influence of human activities in Sishili Bay, northern Yellow Sea of China.

    PubMed

    Wang, Yujue; Liu, Dongyan; Dong, Zhijun; Di, Baoping; Shen, Xuhong

    2012-12-01

    The temporal and spatial distributions of dissolved inorganic nitrogen (DIN), dissolved organic nitrogen (DON), soluble reactive phosphorus (SRP) and dissolved reactive silica (DRSi) together with chlorophyll-a, temperature and salinity were analyzed monthly from December 2008 to March 2010 at four zones in Sishili Bay located in the northern Yellow Sea. The nutrient distribution was impacted by seasonal factors (biotic factors, temperature and wet deposition), physical factors (water exchange) and anthropogenic loadings. The seasonal variations of nutrients were mainly determined by the seasonal factors, and the spatial distribution of nutrients was mainly related to water exchange. Anthropogenic loadings for DIN, SRP and DRSi were mainly from point sources, but for DON, non-point sources were also important. Nutrient limitation has changed from DIN in 1997 to SRP and DRSi in 2010, and this has resulted in a change in the dominant red tide species from diatoms to dinoflagellates. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Evaluation of Rock Surface Characterization by Means of Temperature Distribution

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.

    2017-12-01

    Rocks occur in many different types formed over long periods. Close-range photogrammetry is a technique widely used and often preferred over other conventional methods. In this method, overlapping photographs are the basic data source for the point cloud, which in turn is the main data source for the 3D model and gives analysts the possibility of automation. Because of the irregular and complex structure of rocks, representing their surfaces with a large number of points is more effective. Color differences on the rock surfaces, whether caused by weathering or naturally occurring, make it possible to produce a sufficient number of points from the photographs. Objects such as small trees, shrubs and weeds on and around the surface also contribute to this. These differences and properties are important for the efficient operation of the pixel matching algorithms that generate an adequate point cloud from photographs. In this study, the possibility of using the temperature distribution to interpret the roughness of a rock surface, one of the parameters representing the surface, was investigated. A small rock, about 3 m x 1 m in size, located at the ITU Ayazaga Campus was selected as the study object. Two different methods were used. The first is the production of a choropleth map by interpolation, using temperature values of control points marked on the object that were also used in the 3D model. The 3D object model was created with the help of terrestrial photographs and 12 control points marked on the object and given coordinates. Temperature values of the control points were measured with an infrared thermometer and used as the basic data source for creating the choropleth map by interpolation. Temperature values range from 32 to 37.2 degrees. In the second method, the 3D object model was produced from terrestrial thermal photographs. For this purpose, several terrestrial photographs were taken with a thermal camera and a 3D object model showing the temperature distribution was created. The temperature distributions in the two applications are almost identical in position. The areas on the rock surface where roughness values are higher than the surroundings can be clearly identified. When the temperature distributions produced by both methods are evaluated, it is observed that as the roughness of the surface increases, the temperature increases.

  18. Resolving the Extragalactic γ-Ray Background above 50 GeV with the Fermi Large Area Telescope.

    PubMed

    Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Bissaldi, E; Blandford, R D; Bloom, E D; Bonino, R; Bregeon, J; Britto, R J; Bruel, P; Buehler, R; Caliandro, G A; Cameron, R A; Caragiulo, M; Caraveo, P A; Cavazzuti, E; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Chiaro, G; Ciprini, S; Cohen-Tanugi, J; Cominsky, L R; Costanza, F; Cutini, S; D'Ammando, F; de Angelis, A; de Palma, F; Desiante, R; Digel, S W; Di Mauro, M; Di Venere, L; Domínguez, A; Drell, P S; Favuzzi, C; Fegan, S J; Ferrara, E C; Franckowiak, A; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Giglietto, N; Giommi, P; Giordano, F; Giroletti, M; Godfrey, G; Green, D; Grenier, I A; Guiriec, S; Hays, E; Horan, D; Iafrate, G; Jogler, T; Jóhannesson, G; Kuss, M; La Mura, G; Larsson, S; Latronico, L; Li, J; Li, L; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Magill, J; Maldera, S; Manfreda, A; Mayer, M; Mazziotta, M N; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Negro, M; Nuss, E; Ohsugi, T; Okada, C; Omodei, N; Orlando, E; Ormes, J F; Paneque, D; Perkins, J S; Pesce-Rollins, M; Petrosian, V; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Romani, R W; Sánchez-Conde, M; Schmid, J; Schulz, A; Sgrò, C; Simone, D; Siskind, E J; Spada, F; Spandre, G; Spinelli, P; Suson, D J; Takahashi, H; Thayer, J B; Tibaldo, L; Torres, D F; Troja, E; Vianello, G; Yassine, M; Zimmer, S

    2016-04-15

    The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. Using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E>50  GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (∼8×10^{-12}  ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_{b}, in the range [8×10^{-12},1.5×10^{-11}]  ph cm^{-2} s^{-1} and power-law indices below and above the break of α_{2}∈[1.60,1.75] and α_{1}=2.49±0.12, respectively. Integration of dN/dS shows that point sources account for at least 86_{-14}^{+16}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. We estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
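    A minimal numerical sketch of what the quoted integration involves, assuming the broken power law dN/dS with a break flux and indices taken from the ranges in the abstract (the normalization K is an arbitrary assumption here, so only flux ratios are meaningful):

      import numpy as np
      from scipy.integrate import quad

      S_b = 1.0e-11                 # break flux, ph cm^-2 s^-1 (within the quoted range)
      alpha1, alpha2 = 2.49, 1.68   # indices above and below the break (alpha2 within [1.60, 1.75])
      K = 1.0                       # normalization of dN/dS at the break (arbitrary for this sketch)

      def dNdS(S):
          return K * (S / S_b) ** (-alpha1 if S >= S_b else -alpha2)

      def flux_integrand(S):
          return S * dNdS(S)        # integrating S * dN/dS over S gives the total point-source flux

      total, _ = quad(flux_integrand, 1e-13, 1e-8, points=[S_b])
      resolved, _ = quad(flux_integrand, 8e-12, 1e-8, points=[S_b])
      print("fraction of point-source flux above the 2FHL flux limit:", resolved / total)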

  19. Resolving the Extragalactic γ-Ray Background above 50 GeV with the Fermi Large Area Telescope

    DOE PAGES

    Ackermann, M.; Ajello, M.; Albert, A.; ...

    2016-04-14

    The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. In this paper, using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E > 50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (~8 × 10^-12 ph cm^-2 s^-1). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_b, in the range [8 × 10^-12, 1.5 × 10^-11] ph cm^-2 s^-1 and power-law indices below and above the break of α_2 ∈ [1.60, 1.75] and α_1 = 2.49 ± 0.12, respectively. Integration of dN/dS shows that point sources account for at least 86^{+16}_{-14}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. Finally, we estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.

  20. CCD photometry of 1218+304, 1219+28 and 1727+50: Point sources, associated nebulosity and broadband spectra

    NASA Technical Reports Server (NTRS)

    Weistrop, D.; Shaffer, D. B.; Mushotzky, R. F.; Reitsma, H. J.; Smith, B. A.

    1981-01-01

    Visual and far red surface photometry were obtained of two X-ray emitting BL Lacertae objects, 1218+304 (2A1219+305) and 1727+50 (Izw 187), as well as the highly variable object 1219+28 (ON 231, W Com). The intensity distribution for 1727+50 can be modeled using a central point source plus a de Vaucouleurs intensity law for an underlying galaxy. The broad band spectral energy distribution so derived is consistent with what is expected for an elliptical galaxy. The spectral index of the point source is alpha = 0.97. Additional VLBI and X-ray data are also reported for 1727+50. There is nebulosity associated with the recently discovered object 1218+304. No nebulosity is found associated with 1219+28. A comparison of the results with observations at X-ray and radio frequencies suggests that all the emission from 1727+50 and 1218+304 can be interpreted as due solely to direct synchrotron emission. If this is the case, the data further imply the existence of relativistic motion effects and continuous particle injection.

  1. Analysis of non-point and point source pollution in China: case study in Shima Watershed in Guangdong Province

    NASA Astrophysics Data System (ADS)

    Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei

    2013-09-01

    China's economy has grown rapidly since 1978. Rapid economic growth led to fast growth in fertilizer and pesticide consumption. A significant portion of the fertilizers and pesticides entered the water and caused water quality degradation. At the same time, rapid economic growth also caused more and more point source pollution to be discharged into the water. Eutrophication has become a major threat to water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images and calculated point source pollution with the export coefficient method; the SWAT model was then run to simulate non-point source pollution. We found that the annual TP load from industrial pollution into rivers is 115.0 t in the entire watershed. Average annual TP loads from each sub-basin ranged from 0 to 189.4 t. Higher TP loads from livestock and human activity mainly occur in areas that are far from large towns or cities and where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 t; the highest TP loads occurred in the northern part of this area, and the lowest TP loads are mainly distributed in the middle part. Point source pollution therefore accounts for a high proportion in this area, and governments should take measures to control point source pollution.
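    A minimal sketch of an export-coefficient load estimate of the kind described (the coefficients and inventory numbers below are illustrative assumptions, not values from the study): the annual load is the sum over source types of an export coefficient multiplied by the size of that source.

      # TP load (t/yr) = sum over source types of export coefficient (t per unit per yr) x number of units
      export_coefficients = {          # hypothetical coefficients, t TP per unit per year
          "cropland_ha": 0.0006,
          "livestock_head": 0.0002,
          "rural_resident": 0.00005,
      }
      source_sizes = {                 # hypothetical sub-basin inventory
          "cropland_ha": 25_000,
          "livestock_head": 120_000,
          "rural_resident": 80_000,
      }
      tp_load = sum(export_coefficients[k] * source_sizes[k] for k in export_coefficients)
      print(f"estimated annual TP load: {tp_load:.1f} t")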

  2. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-29

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2^{+0.7}_{-0.3} in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83^{+7}_{-13}% (81^{+52}_{-19}%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  3. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2^{+0.7}_{-0.3} in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83^{+7}_{-13}% (81^{+52}_{-19}%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  4. Oil spill contamination probability in the southeastern Levantine basin.

    PubMed

    Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam

    2015-02-15

    Recent gas discoveries in the eastern Mediterranean Sea led to multiple operations with substantial economic interest, and with them there is a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
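    For reference, the traditional SCS-CN runoff equation on which the distributed CN-VSA method builds can be sketched as follows (standard textbook form; the CN value and storm depth in the example are illustrative):

      def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
          """Runoff depth Q (mm) from rainfall P (mm) using the SCS curve number equation."""
          S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
          Ia = ia_ratio * S                 # initial abstraction
          if P_mm <= Ia:
              return 0.0
          return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

      print(scs_cn_runoff(P_mm=60.0, CN=75))   # runoff (mm) for a 60 mm storm on CN = 75 land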

  6. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  7. An Ultradeep Chandra Catalog of X-Ray Point Sources in the Galactic Center Star Cluster

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenlin; Li, Zhiyuan; Morris, Mark R.

    2018-04-01

    We present an updated catalog of X-ray point sources in the inner 500″ (∼20 pc) of the Galactic center (GC), where the nuclear star cluster (NSC) stands, based on a total of ∼4.5 Ms of Chandra observations taken from 1999 September to 2013 April. This ultradeep data set offers unprecedented sensitivity for detecting X-ray sources in the GC, down to an intrinsic 2–10 keV luminosity of 1.0 × 1031 erg s‑1. A total of 3619 sources are detected in the 2–8 keV band, among which ∼3500 are probable GC sources and ∼1300 are new identifications. The GC sources collectively account for ∼20% of the total 2–8 keV flux from the inner 250″ region where detection sensitivity is the greatest. Taking advantage of this unprecedented sample of faint X-ray sources that primarily traces the old stellar populations in the NSC, we revisit global source properties, including long-term variability, cumulative spectra, luminosity function, and spatial distribution. Based on the equivalent width and relative strength of the iron lines, we suggest that in addition to the arguably predominant population of magnetic cataclysmic variables (CVs), nonmagnetic CVs contribute substantially to the detected sources, especially in the lower-luminosity group. On the other hand, the X-ray sources have a radial distribution closely following the stellar mass distribution in the NSC, but much flatter than that of the known X-ray transients, which are presumably low-mass X-ray binaries (LMXBs) caught in outburst. This, together with the very modest long-term variability of the detected sources, strongly suggests that quiescent LMXBs are a minor (less than a few percent) population.

  8. Development of Load Duration Curve System in Data Scarce Watersheds Based on a Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    WANG, J.

    2017-12-01

    In stream water quality control, the total maximum daily load (TMDL) program is very effective. However, the load duration curves (LDC) used in TMDL are difficult to establish in data-scarce watersheds, where no hydrological stations or long-term consecutive hydrological records are available to supply the required flow and pollutant observations. Although point sources and non-point sources of pollutants can be distinguished easily with the aid of an LDC, the LDC cannot trace where a pollutant comes from or to where it will be transported within the watershed. To find the best management practices (BMPs) for pollutants in a watershed, and to overcome this limitation of the LDC, we propose developing the LDC on the basis of the distributed hydrological model SWAT for water quality management in data-scarce river basins. In this study, the distributed hydrological model SWAT was first established with the scarce hydrological data. Long-term daily flows were then generated with the established SWAT model and rainfall data from the adjacent weather station. A flow duration curve (FDC) was then developed from the daily flows generated by the SWAT model. Considering the goal of water quality management, LDC curves for different pollutants can be obtained from the FDC. With the monitored water quality data and the LDC curves, the water quality problems caused by point or non-point source pollutants in different seasons can be ascertained. Finally, the distributed hydrological model SWAT was employed again to trace the spatial distribution and origin of the pollutants, i.e., the agricultural practices and/or other human activities from which they come. A case study was conducted in the Jian-jiang River, a tributary of the Yangtze River, in Duyun City, Guizhou Province. Results indicate that this method can realize water quality management based on TMDL and identify suitable BMPs for reducing pollutants in a watershed.
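    A minimal sketch of the FDC/LDC construction step, assuming a daily flow series has already been generated (for example by a calibrated SWAT run) and a water-quality target concentration has been chosen; the file name and target value are illustrative assumptions:

      import numpy as np

      flows_cms = np.loadtxt("simulated_daily_flow.txt")    # hypothetical SWAT output, m^3/s
      target_mg_per_L = 10.0                                # hypothetical TN criterion

      exceedance = np.arange(1, flows_cms.size + 1) / (flows_cms.size + 1) * 100.0
      fdc_flows = np.sort(flows_cms)[::-1]                  # flow duration curve: flow vs % exceedance

      # allowable daily load at each exceedance percentile:
      # (m^3/s) * (mg/L = g/m^3) * (86400 s/day) / (1000 g/kg) = kg/day
      ldc_kg_per_day = fdc_flows * target_mg_per_L * 86400.0 / 1000.0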

  9. The feasibility of effluent trading in the oil and gas industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veil, J.A.

    1997-09-01

    In January 1996, the U.S. Environmental Protection Agency (EPA) released a policy statement endorsing wastewater effluent trading in watersheds, hoping to promote additional interest in the subject. The policy describes five types of effluent trades - point source/point source, point source/nonpoint source, pretreatment, intraplant, and nonpoint source/nonpoint source. This paper evaluates the feasibility of effluent trading for facilities in the oil and gas industry. The evaluation leads to the conclusion that potential for effluent trading is very low in the exploration and production and distribution and marketing sectors; trading potential is moderate for the refining sector except for intraplant trades, for which the potential is high. Good potential also exists for other types of water-related trades that do not directly involve effluents (e.g., wetlands mitigation banking). The potential for effluent trading in the energy industries and in other sectors would be enhanced if Congress amended the Clean Water Act (CWA) to formally authorize such trading.

  10. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.

  11. The strong UV source in the active E Galaxy NGC 4552

    NASA Technical Reports Server (NTRS)

    Oconnell, R. W.; Thuan, T. X.; Puschell, J. J.

    1986-01-01

    1200-3200 A IUE spectra of the nucleus of NGC 4552 (M89) were obtained in order to investigate the nature of the strong 10 micron source in this galaxy. There is a strong, extended UV source in NGC 4552 which has a spatial distribution nearly identical with that at optical wavelengths and is undoubtedly stellar in origin. Its properties are consistent with the correlation between UV source strength and metallicity pointed out by Faber (1983). There is no evidence for a nonthermal point source in the UV. It appears unlikely that the 10 micron emission is from heated dust grains. Instead, it is believed the 10 micron radiation is nonthermal in origin, implying a remarkably small size of only 0.1 AU for this source.

  12. The solid angle (geometry factor) for a spherical surface source and an arbitrary detector aperture

    DOE PAGES

    Favorite, Jeffrey A.

    2016-01-13

    It is proven that the solid angle (or geometry factor, also called the geometrical efficiency) for a spherically symmetric outward-directed surface source with an arbitrary radius and polar angle distribution and an arbitrary detector aperture is equal to the solid angle for an isotropic point source located at the center of the spherical surface source and the same detector aperture.
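    The statement can be checked numerically with a small Monte Carlo sketch: the fraction of emitted particles that pass through a given aperture (the geometry factor) is estimated once for an isotropic point source at the centre and once for an outward-directed spherical surface source with a cosine polar-angle distribution, and the two estimates should agree within statistics. The aperture, sphere radius and emission law below are illustrative choices.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 1_000_000

      a, D = 0.5, 5.0          # disk aperture of radius a on the z-axis at z = D
      r_sphere = 1.0           # radius of the spherical surface source

      def hits_disk(p, d):
          # True where the ray p + t*d (t > 0) crosses the plane z = D inside radius a
          going = d[:, 2] > 0
          t = np.where(going, (D - p[:, 2]) / np.where(going, d[:, 2], 1.0), -1.0)
          x = p[:, 0] + t * d[:, 0]
          y = p[:, 1] + t * d[:, 1]
          return going & (t > 0) & (x**2 + y**2 <= a**2)

      # (1) isotropic point source at the centre
      d_iso = rng.normal(size=(n, 3))
      d_iso /= np.linalg.norm(d_iso, axis=1, keepdims=True)
      g_point = hits_disk(np.zeros((n, 3)), d_iso).mean()

      # (2) spherical surface source, outward-directed, cosine (Lambert) polar distribution
      u = rng.normal(size=(n, 3))
      u /= np.linalg.norm(u, axis=1, keepdims=True)          # emission points / outward normals
      p_surf = r_sphere * u
      mu = np.sqrt(rng.uniform(size=n))                      # cos(theta) w.r.t. the local normal
      phi = rng.uniform(0.0, 2.0 * np.pi, n)
      helper = np.where(np.abs(u[:, 2:3]) < 0.9, [[0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0]])
      t1 = np.cross(u, helper)
      t1 /= np.linalg.norm(t1, axis=1, keepdims=True)
      t2 = np.cross(u, t1)
      sin_t = np.sqrt(1.0 - mu**2)
      d_surf = (sin_t * np.cos(phi))[:, None] * t1 + (sin_t * np.sin(phi))[:, None] * t2 + mu[:, None] * u
      g_surf = hits_disk(p_surf, d_surf).mean()

      print(f"geometry factor, point source: {g_point:.5f}; surface source: {g_surf:.5f}")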

  13. Studies of acoustic emission from point and extended sources

    NASA Technical Reports Server (NTRS)

    Sachse, W.; Kim, K. Y.; Chen, C. P.

    1986-01-01

    The use of simulated and controlled acoustic emission signals forms the basis of a powerful tool for the detailed study of various deformation and wave interaction processes in materials. The results of experiments and signal analyses of acoustic emission resulting from point sources such as various types of indentation-produced cracks in brittle materials and the growth of fatigue cracks in 7075-T6 aluminum panels are discussed. Recent work dealing with the modeling and subsequent signal processing of an extended source of emission in a material is reviewed. Results of the forward problem and the inverse problem are presented with the example of a source distributed through the interior of a specimen.

  14. Levels of CDDs, CDFs, PCBs and Hg in Rural Soils of US (Project Overview)

    EPA Science Inventory

    No systematic survey of dioxins in soil has been conducted in the US. Soils represent the largest reservoir source of dioxins. As point source emissions are reduced, emissions from soils become increasingly important. Understanding the distribution of dioxin levels in soils is ...

  15. Capturing microbial sources distributed in a mixed-use watershed within an integrated environmental modeling workflow

    EPA Science Inventory

    Many watershed models simulate overland and instream microbial fate and transport, but few provide loading rates on land surfaces and point sources to the waterbody network. This paper describes the underlying equations for microbial loading rates associated with 1) land-applied ...

  16. Capturing microbial sources distributed in a mixed-use watershed within an integrated environmental modeling workflow

    USDA-ARS?s Scientific Manuscript database

    Many watershed models simulate overland and instream microbial fate and transport, but few provide loading rates on land surfaces and point sources to the waterbody network. This paper describes the underlying equations for microbial loading rates associated with 1) land-applied manure on undevelope...

  17. [Spatial heterogeneity and classified control of agricultural non-point source pollution in Huaihe River Basin].

    PubMed

    Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua

    2013-02-01

    Agricultural non-point source pollution is an important factor in river deterioration. Thus, identifying and applying concentrated control to the key source areas are the most effective approaches for non-point source pollution control. This study adopts an inventory method to analyze four kinds of pollution sources and their emission intensities of chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses the identification of non-point pollution sensitivity areas, key pollution sources, and their spatial distribution characteristics through clustering, sensitivity evaluation, and spatial analysis. A geographic information system (GIS) and SPSS were used to carry out this study. The results show that the COD, TN and TP emissions of agricultural non-point sources were 206.74 × 10^4 t, 66.49 × 10^4 t, and 8.74 × 10^4 t, respectively, in the Huaihe River Basin in 2009; the emission intensities were 7.69, 2.47, and 0.32 t·hm^-2; and the proportions of COD, TN, and TP emissions were 73%, 24%, and 3%. The paper finds that the major pollution sources of COD, TN and TP were livestock breeding and rural life; the sensitivity areas and priority pollution control areas for non-point source pollution in the basin are some sub-basins of the upper branches of the Huaihe River, such as the Shahe, Yinghe, Beiru, Jialu and Qingyi Rivers; and livestock breeding is the key pollution source in the priority pollution control areas. Finally, the paper concludes that the rural-life pollution type has the highest pollution contribution rate, while comprehensive pollution is a type that is hard to control.

  18. The social ecology of water in a Mumbai slum: failures in water quality, quantity, and reliability

    PubMed Central

    2013-01-01

    Background Urban slums in developing countries that are not recognized by the government often lack legal access to municipal water supplies. This results in the creation of insecure “informal” water distribution systems (i.e., community-run or private systems outside of the government’s purview) that may increase water-borne disease risk. We evaluate an informal water distribution system in a slum in Mumbai, India using commonly accepted health and social equity indicators. We also identify predictors of bacterial contamination of drinking water using logistic regression analysis. Methods Data were collected through two studies: the 2008 Baseline Needs Assessment survey of 959 households and the 2011 Seasonal Water Assessment, in which 229 samples were collected for water quality testing over three seasons. Water samples were collected in each season from the following points along the distribution system: motors that directly tap the municipal supply (i.e., “point-of-source” water), hoses going to slum lanes, and storage and drinking water containers from 21 households. Results Depending on season, households spend an average of 52 to 206 times more than the standard municipal charge of Indian rupees 2.25 (US dollars 0.04) per 1000 liters for water, and, in some seasons, 95% use less than the WHO-recommended minimum of 50 liters per capita per day. During the monsoon season, 50% of point-of-source water samples were contaminated. Despite a lack of point-of-source water contamination in other seasons, stored drinking water was contaminated in all seasons, with rates as high as 43% for E. coli and 76% for coliform bacteria. In the multivariate logistic regression analysis, monsoon and summer seasons were associated with significantly increased odds of drinking water contamination. Conclusions Our findings reveal severe deficiencies in water-related health and social equity indicators. All bacterial contamination of drinking water occurred due to post-source contamination during storage in the household, except during the monsoon season, when there was some point-of-source water contamination. This suggests that safe storage and household water treatment interventions may improve water quality in slums. Problems of exorbitant expense, inadequate quantity, and poor point-of-source quality can only be remedied by providing unrecognized slums with equitable access to municipal water supplies. PMID:23442300

  19. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    Current point cloud registration software has high hardware requirements and a heavy workload, requires multiple interactive definitions, and the source code of software with better processing results is not open. In view of this, a two-step registration method based on a normal vector distribution feature and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with a model of the adjacency region of the point cloud and the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish the rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
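    A minimal numpy sketch of the fine-registration stage (point-to-point ICP with an SVD-based rigid transform); the FPFH-based coarse alignment is not reproduced here, and the brute-force nearest-neighbour search and the toy cloud below are illustrative simplifications:

      import numpy as np

      def best_fit_transform(A, B):
          # least-squares rigid transform (R, t) mapping point set A onto B (rows are points)
          cA, cB = A.mean(axis=0), B.mean(axis=0)
          H = (A - cA).T @ (B - cB)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:              # guard against reflections
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          t = cB - R @ cA
          return R, t

      def icp(source, target, iters=50):
          # point-to-point ICP refining an initial (coarse) alignment of source onto target
          src = source.copy()
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iters):
              d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)   # brute-force NN search
              matched = target[d2.argmin(axis=1)]
              R, t = best_fit_transform(src, matched)
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total

      # toy usage: misalign a cloud by a small rotation/translation and register it back
      rng = np.random.default_rng(3)
      target = rng.uniform(-1.0, 1.0, (300, 3))
      theta = np.deg2rad(10.0)
      R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0, 0.0, 1.0]])
      source = target @ R_true.T + np.array([0.05, -0.02, 0.01])
      R_est, t_est = icp(source, target)
      rms = np.sqrt(((source @ R_est.T + t_est - target) ** 2).sum(axis=1)).mean()
      print("mean residual after ICP:", rms)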

  20. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
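    A minimal sketch of one common heuristic of this kind, assuming the generic source supplies only a median failure rate and that applicability uncertainty is expressed through a lognormal error factor (the numerical values are illustrative assumptions, not the presentation's guidelines):

      import numpy as np
      from scipy import stats

      median_rate = 1.0e-5       # generic point estimate (failures per hour), hypothetical
      error_factor = 10.0        # 95th-to-50th percentile ratio chosen to reflect weak applicability

      sigma = np.log(error_factor) / 1.645       # lognormal shape parameter from the error factor
      prior = stats.lognorm(s=sigma, scale=median_rate)

      print("5th / 50th / 95th percentiles:", prior.ppf([0.05, 0.50, 0.95]))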

  1. An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses

    NASA Technical Reports Server (NTRS)

    Lee, Man Hoi; Spergel, David N.

    1990-01-01

    The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.

  2. Studies on the Effects of High Renewable Penetrations on Driving Point Impedance and Voltage Regulator Performance: National Renewable Energy Laboratory/Sacramento Municipal Utility District Load Tap Changer Driving Point Impedance Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Coddington, Michael H.; Brown, David

    Voltage regulators perform as desired when regulating from the source to the load and when regulating from a strong source (utility) to a weak source (distributed generation). (See the glossary for definitions of a strong source and weak source.) Even when the control is provisioned for reverse operation, it has been observed that tap-changing voltage regulators do not perform as desired in reverse when attempting regulation from the weak source to the strong source. The region of performance that is not as well understood is the regulation between sources that are approaching equal strength. As part of this study, we explored all three scenarios: regulator control from a strong source to a weak source (classic case), control from a weak source to a strong source (during reverse power flow), and control between equivalent sources.

  3. THE SPITZER-IRAC POINT-SOURCE CATALOG OF THE VELA-D CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strafella, F.; Elia, D.; Campeggio, L., E-mail: francesco.strafella@le.infn.i, E-mail: loretta.campeggio@le.infn.i, E-mail: eliad@oal.ul.p

    2010-08-10

    This paper presents the observations of Cloud D in the Vela Molecular Ridge, obtained with the Infrared Array Camera (IRAC) on board the Spitzer Space Telescope at the wavelengths λ = 3.6, 4.5, 5.8, and 8.0 μm. A photometric catalog of point sources, covering a field of approximately 1.2 deg^2, has been extracted and complemented with additional available observational data in the millimeter region. Previous observations of the same region, obtained with the Spitzer MIPS camera in the photometric bands at 24 μm and 70 μm, have also been reconsidered to allow an estimate of the spectral slope of the sources in a wider spectral range. A total of 170,299 point sources, detected at the 5σ sensitivity level in at least one of the IRAC bands, have been reported in the catalog. There were 8796 sources for which good quality photometry was obtained in all four IRAC bands. For this sample, a preliminary characterization of the young stellar population based on the determination of spectral slope is discussed; combining this with diagnostics in the color-magnitude and color-color diagrams, the relative population of young stellar objects (YSOs) in different evolutionary classes has been estimated and a total of 637 candidate YSOs have been selected. The main differences in their relative abundances have been highlighted and a brief account of their spatial distribution is given. The star formation rate has also been estimated and compared with the values derived for other star-forming regions. Finally, an analysis of the spatial distribution of the sources by means of the two-point correlation function shows that the younger population, constituted by the Class I and flat-spectrum sources, is significantly more clustered than the Class II and III sources.

  4. Uncertainty in gridded CO2 emissions estimates

    DOE PAGES

    Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...

    2016-05-19

    We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent, with uncertainty increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.

  5. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Dall'Anese, Emiliano

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus voltages, line currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
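    The flavor of the fixed-point construction can be shown on a simplified single-phase (per-unit) feeder; the multiphase, delta/wye details of the paper are not reproduced, and the toy impedances and injections below are arbitrary assumptions. The exact voltages solve V = w + Y_LL^{-1} conj(s / V), and evaluating the map once at the zero-injection profile w gives a model that is linear in the real and reactive injections.

      import numpy as np

      # hypothetical 3-bus feeder 0-1-2; bus 0 is the slack / point of common coupling
      y = 1.0 / (0.01 + 0.03j)                   # series admittance of each line segment
      Y = np.array([[ y,     -y,    0.0],
                    [-y,  2 * y,     -y],
                    [0.0,    -y,      y]], dtype=complex)
      Y_LL = Y[1:, 1:]                           # load buses
      Y_L0 = Y[1:, :1]                           # coupling to the slack bus
      v0 = np.array([[1.0 + 0.0j]])              # slack voltage

      s = np.array([-0.02 - 0.01j, -0.03 - 0.015j])   # net complex injections (loads are negative)

      w = (-np.linalg.solve(Y_LL, Y_L0 @ v0)).ravel()  # zero-injection voltage profile

      # "exact" solution by iterating the fixed-point map
      V = w.copy()
      for _ in range(50):
          V = w + np.linalg.solve(Y_LL, np.conj(s / V))

      # linear model: evaluate the map once at V = w, so the voltage is linear in the injections
      V_lin = w + np.linalg.solve(Y_LL, np.conj(s / w))

      print("fixed-point |V|:", np.abs(V))
      print("linear model |V|:", np.abs(V_lin))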

  6. X-ray emission from galaxies - The distribution of low-luminosity X-ray sources in the Galactic Centre region

    NASA Astrophysics Data System (ADS)

    Heard, Victoria; Warwick, Robert

    2012-09-01

    We report a study of the extended X-ray emission observed in the Galactic Centre (GC) region based on archival XMM-Newton data. The GC diffuse emission can be decomposed into three distinct components: the emission from low-luminosity point sources; the fluorescence of (and reflection from) dense molecular material; and soft (kT ~1 keV), diffuse thermal plasma emission most likely energised by supernova explosions. Here, we examine the emission due to unresolved point sources. We show that this source component accounts for the bulk of the 6.7-keV and 6.9-keV line emission. We fit the surface brightness distribution evident in these lines with an empirical 2-d model, which we then compare with a prediction derived from a 3-d mass model for the old stellar population in the GC region. We find that the X-ray surface brightness declines more rapidly with angular offset from Sgr A* than the mass-model prediction. One interpretation is that the X-ray luminosity per solar mass characterising the GC source population is increasing towards the GC. Alternatively, some refinement of the mass-distribution within the nuclear stellar disc may be required. The unresolved X-ray source population is most likely dominated by magnetic CVs. We use the X-ray observations to set constraints on the number density of such sources in the GC region. Our analysis does not support the premise that the GC is pervaded by very hot (~ 7.5 keV) thermal plasma, which is truly diffuse in nature.

  7. [Estimation of nonpoint source pollutant loads and optimization of the best management practices (BMPs) in the Zhangweinan River basin].

    PubMed

    Xu, Hua-Shan; Xu, Zong-Xue; Liu, Pin

    2013-03-01

    One of the key techniques in establishing and implementing a TMDL (total maximum daily load) is to use a hydrological model to quantify non-point source pollutant loads, establish BMP scenarios, and reduce non-point source pollutant loads. Non-point source pollutant loads in different year types (wet, normal and dry years) were estimated using the SWAT model in the Zhangweinan River basin, and the spatial distribution characteristics of non-point source pollutant loads were analyzed on the basis of the simulation results. During wet years, total nitrogen (TN) and total phosphorus (TP) accounted for 0.07% and 27.24% of the total non-point source pollutant loads, respectively. Spatially, agricultural and residential land with steep slopes are the regions that contribute more non-point source pollutant loads in the basin. Compared with the non-point source pollutant loads during the baseline period, 47 BMP scenarios were set up to simulate the reduction efficiency of different BMP scenarios for 5 kinds of pollutants (organic nitrogen, organic phosphorus, nitrate nitrogen, dissolved phosphorus and mineral phosphorus) in 8 priority-control subbasins. Constructing vegetation-type ditches was identified as the best measure to reduce TN and TP by comparing the cost-effectiveness of the different BMP scenarios; the costs of unit pollutant reduction are 16.11-151.28 yuan·kg^-1 for TN and 100-862.77 yuan·kg^-1 for TP, which is the most cost-effective among the 47 BMP scenarios. The results could provide a scientific basis and technical support for environmental protection and sustainable utilization of water resources in the Zhangweinan River basin.

  8. Probing the Spatial Distribution of the Interstellar Dust Medium by High Angular Resolution X-ray Halos of Point Sources

    NASA Astrophysics Data System (ADS)

    Xiang, Jingen

    X-rays are absorbed and scattered by dust grains when they travel through the interstellar medium. The scattering within small angles results in an X-ray ``halo''. The halo properties are significantly affected by the energy of the radiation, the optical depth of the scattering, the grain size distributions and compositions, and the spatial distribution of dust along the line of sight (LOS). Therefore analyzing X-ray halo properties is an important tool for studying the size distribution and spatial distribution of interstellar grains, which play a central role in the astrophysical study of the interstellar medium, such as the thermodynamics and chemistry of the gas and the dynamics of star formation. With excellent angular resolution, good energy resolution and a broad energy band, the Chandra ACIS is so far the best instrument for studying X-ray halos. But direct images of bright sources obtained with ACIS usually suffer from severe pileup, which prevents us from obtaining the halos at small angles. We first improve the method proposed by Yao et al. to resolve the X-ray dust scattering halos of point sources from the zeroth-order data in CC mode or the first-order data in TE mode with Chandra HETG/ACIS. Using this method we re-analyze the Cygnus X-1 data observed with Chandra. We then study the X-ray dust scattering halos around 17 bright X-ray point sources using Chandra data. All sources were observed with the HETG/ACIS in CC mode or TE mode. Using the interstellar grain models WD01 and MRN to fit the halo profiles, we obtain the hydrogen column densities and the spatial distributions of the scattering dust grains along the lines of sight (LOS) to these sources. We find a good linear correlation not only between the scattering hydrogen column density from the WD01 model and the one from the MRN model, but also between N_{H} derived from spectral fits and the one derived from the grain models WD01 and MRN (except for GX 301-2 and Vela X-1): N_{H,WD01} = (0.720±0.009) × N_{H,abs} + (0.051±0.013) and N_{H,MRN} = (1.156±0.016) × N_{H,abs} + (0.062±0.024), in units of 10^{22} cm^{-2}. The correlation between FHI and N_{H} is then obtained. Both the WD01 and MRN model fits show that the scattering dust density very close to these sources is much higher than in the normal interstellar medium, and we consider this to be evidence of molecular clouds around these X-ray binaries. We also find a linear correlation between the effective distance through the Galactic dust layer and the hydrogen scattering column density N_{H} excluding the portion at x=0.99-1.0, but no such correlation exists between the effective distance and the N_{H} in x=0.99-1.0. This shows that the dust near the X-ray sources is not dust from the Galactic disk. We then estimate the structure and density of the stellar wind around the X-ray pulsars Vela X-1 and GX 301-2. Finally, we discuss the possibility of probing the three-dimensional structure of the interstellar medium using the X-ray halos of transient sources, of probing the spatial distribution of the interstellar dust medium near point sources, and even the structure of stellar winds, using higher angular resolution X-ray dust scattering halos, and of testing the model that a black hole can be formed from the direct collapse of a massive star without a supernova using the statistical distribution of the dust density near X-ray binaries.

  9. Rocket ultraviolet imagery of the Andromeda galaxy

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.; Heckathorn, H. M.

    1978-01-01

    Far-UV electrographic imagery of M31 is presented which was obtained during a sounding-rocket flight with an electrographic Schmidt camera sensitive in the wavelength range from 1230 to 2000 A. The resolution in the imagery is such that 50% of the energy from a point source is confined within a circle 40 arcsec in radius. Two conspicuous features are observed in the UV image of M31: one corresponding to a bright association (NGC 206) in the SW region of the disk and one centered on the galactic nucleus. Indications of the general spiral-arm structure are also evident. Absolute photometry and brightness distributions are obtained for the observed features, and both the central region and NGC 206 are shown to be diffuse sources. It is found that the brightness distribution of the central region is a flat ellipse with its major axis closely aligned with the major axis of the galaxy, which favors a source model consisting of young early-type stars close to the galactic plane and constitutes strong evidence against a nonthermal point source at the galactic center.

  10. Monte Carlo study of the impact of a magnetic field on the dose distribution in MRI-guided HDR brachytherapy using Ir-192

    NASA Astrophysics Data System (ADS)

    Beld, E.; Seevinck, P. R.; Lagendijk, J. J. W.; Viergever, M. A.; Moerland, M. A.

    2016-09-01

    In the process of developing a robotic MRI-guided high-dose-rate (HDR) prostate brachytherapy treatment, the influence of the MRI scanner’s magnetic field on the dose distribution needs to be investigated. A magnetic field causes a deflection of electrons in the plane perpendicular to the magnetic field, and it leads to less lateral scattering along the direction parallel with the magnetic field. Monte Carlo simulations were carried out to determine the influence of the magnetic field on the electron behavior and on the total dose distribution around an Ir-192 source. Furthermore, the influence of air pockets being present near the source was studied. The Monte Carlo package Geant4 was utilized for the simulations. The simulated geometries consisted of a simplified point source inside a water phantom. Magnetic field strengths of 0 T, 1.5 T, 3 T, and 7 T were considered. The simulation results demonstrated that the dose distribution was nearly unaffected by the magnetic field for all investigated magnetic field strengths. Evidence was found that, from a dose perspective, the HDR prostate brachytherapy treatment using Ir-192 can be performed safely inside the MRI scanner. No need was found to account for the magnetic field during treatment planning. Nevertheless, the presence of air pockets in close vicinity to the source, particularly along the direction parallel with the magnetic field, appeared to be an important point for consideration.

  11. Monte Carlo study of the impact of a magnetic field on the dose distribution in MRI-guided HDR brachytherapy using Ir-192.

    PubMed

    Beld, E; Seevinck, P R; Lagendijk, J J W; Viergever, M A; Moerland, M A

    2016-09-21

    In the process of developing a robotic MRI-guided high-dose-rate (HDR) prostate brachytherapy treatment, the influence of the MRI scanner's magnetic field on the dose distribution needs to be investigated. A magnetic field causes a deflection of electrons in the plane perpendicular to the magnetic field, and it leads to less lateral scattering along the direction parallel with the magnetic field. Monte Carlo simulations were carried out to determine the influence of the magnetic field on the electron behavior and on the total dose distribution around an Ir-192 source. Furthermore, the influence of air pockets being present near the source was studied. The Monte Carlo package Geant4 was utilized for the simulations. The simulated geometries consisted of a simplified point source inside a water phantom. Magnetic field strengths of 0 T, 1.5 T, 3 T, and 7 T were considered. The simulation results demonstrated that the dose distribution was nearly unaffected by the magnetic field for all investigated magnetic field strengths. Evidence was found that, from a dose perspective, the HDR prostate brachytherapy treatment using Ir-192 can be performed safely inside the MRI scanner. No need was found to account for the magnetic field during treatment planning. Nevertheless, the presence of air pockets in close vicinity to the source, particularly along the direction parallel with the magnetic field, appeared to be an important point for consideration.

  12. Analysis and attenuation of artifacts caused by spatially and temporally correlated noise sources in Green's function estimates

    NASA Astrophysics Data System (ADS)

    Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.

    2016-12-01

    Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive expected artifacts in cross-correlations from several distributions of correlated noise sources including point sources that are exact time-lagged repeats of each other and Gaussian-distributed in space and time with covariance that exponentially decays. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test in the same location with noise detected by a 2D trenched DAS array.
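    The two estimators mentioned above differ only in a spectral normalization, which is why cross-coherence is less sensitive to energetic narrowband sources. A minimal synthetic sketch (not the Fairbanks DAS data; the sampling rate, delay and source amplitudes are illustrative assumptions):

      import numpy as np

      fs = 100.0                                      # sampling rate, Hz
      t = np.arange(0.0, 600.0, 1.0 / fs)
      rng = np.random.default_rng(4)

      common = rng.normal(size=t.size)                # broadband wavefield seen by both sensors
      delay = int(0.3 * fs)                           # 0.3 s propagation delay to the second sensor
      mono = 10.0 * np.sin(2.0 * np.pi * 3.0 * t)     # strong monochromatic local source, no delay
      x1 = common + mono + 0.2 * rng.normal(size=t.size)
      x2 = np.roll(common, delay) + mono + 0.2 * rng.normal(size=t.size)

      X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
      corr = np.fft.irfft(X2 * np.conj(X1))                                       # cross-correlation
      coh = np.fft.irfft(X2 * np.conj(X1) / (np.abs(X1) * np.abs(X2) + 1e-12))    # cross-coherence

      # the raw correlation tends to be dominated by the monochromatic source (peak near zero lag),
      # while the spectrally whitened estimate recovers the 0.3 s inter-sensor delay
      for name, c in [("cross-correlation", corr), ("cross-coherence", coh)]:
          print(name, "peak lag (s):", np.argmax(np.abs(c[: t.size // 2])) / fs)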

  13. A deeper look at the X-ray point source population of NGC 4472

    NASA Astrophysics Data System (ADS)

    Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.

    2017-10-01

    In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure time. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux F_X (0.5-2 keV) = 6.3 × 10^-16 erg s^-1 cm^-2. Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy that is both complete and has a large number of objects (~100) below 10^38 erg s^-1, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.

  14. Recent skyshine calculations at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degtyarenko, P.

    1997-12-01

    New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron have been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshine dose maps.

  15. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to get information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible as topography only weakly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz) the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  16. CARIDEAN GRASS SHRIMP (PALAEMONETES PUGIO HOLTHIUS) AS AN INDICATOR OF SEDIMENT QUALITY IN FLORIDA COASTAL AREAS AFFECTED BY POINT AND NON-POINT SOURCE CONTAMINATION

    EPA Science Inventory

    Grass shrimp are one of the more widely distributed estuarine benthic organisms along the Gulf of Mexico and Atlantic coasts, but they have been used infrequently in contaminated sediment assessments. Early life stages of the caridean grass shrimp, Palaemonetes pugio (Holthuis), ...

  17. Performance improvement of continuous-variable quantum key distribution with an entangled source in the middle via photon subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua

    2017-03-01

    A suitable photon-subtraction operation can be exploited to improve the maximal transmission of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. Unfortunately, the photon-subtraction operation faces the problem of improving transmission in practical quantum networks, where the entangled source is located at a third party, which may be controlled by a malicious eavesdropper, instead of at one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only can lengthen the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that CVQKD with an entangled source in the middle (ESIM) from applying photon subtraction can effectively increase the secure transmission distance in both direct and reverse reconciliations of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, which is a specific attack by an untrusted entangled source in the framework of ESIM.

  18. A preliminary assessment of small steam Rankine and Brayton point-focusing solar modules

    NASA Technical Reports Server (NTRS)

    Roschke, E. J.; Wen, L.; Steele, H.; Elgabalawi, N.; Wang, J.

    1979-01-01

    A preliminary assessment of three conceptual point-focusing distributed solar modules is presented. The basic power conversion units consist of small Brayton or Rankine engines individually coupled to two-axis, tracking, point-focusing solar collectors. An array of such modules can be linked together, via electric transport, to form a small power station. Each module also can be utilized on a stand-alone basis, as an individual power source.

  19. Performance analysis of dual-hop optical wireless communication systems over k-distribution turbulence channel with pointing error

    NASA Astrophysics Data System (ADS)

    Mishra, Neha; Sriram Kumar, D.; Jha, Pranav Kumar

    2017-06-01

    In this paper, we investigate the performance of dual-hop free space optical (FSO) communication systems under the effect of strong atmospheric turbulence together with misalignment effects (pointing error). We consider a relay-assisted link using the decode-and-forward (DF) relaying protocol between source and destination, with the assumption that channel state information is available at both transmitting and receiving terminals. The atmospheric turbulence channels are modeled by the k-distribution with pointing error impairment. Exact closed-form expressions are derived for the outage probability and bit error rate and illustrated through numerical plots. BER results are further compared for different modulation schemes.

  20. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
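
    For readers unfamiliar with the Cimmino feasibility algorithm named above, the following is a minimal, self-contained sketch of a Cimmino-type simultaneous-projection iteration that chooses nonnegative source strengths so that the delivered fluence K·w falls between lower and upper dose bounds. The kernel matrix, point counts, and bounds are synthetic placeholders, not the patient-specific light-fluence kernels of the study.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical setup: n_src interstitial sources, n_pt calculation points.
      # K[i, j] = fluence at point i per unit strength of source j; here a synthetic
      # stand-in, not a measured patient-specific light-fluence kernel.
      n_pt, n_src = 200, 12
      K = rng.uniform(0.1, 1.0, size=(n_pt, n_src))
      d_low = np.full(n_pt, 1.0)     # prescribed minimum fluence (arbitrary units)
      d_high = np.full(n_pt, 1.5)    # upper limit to avoid over-treatment (assumed)

      def cimmino(K, d_low, d_high, n_iter=2000, relax=1.0):
          """Simultaneous-projection (Cimmino-type) iteration towards
          d_low <= K @ w <= d_high with nonnegative source strengths w."""
          m, n = K.shape
          row_norm2 = np.sum(K**2, axis=1)
          w = np.zeros(n)
          for _ in range(n_iter):
              f = K @ w
              under = np.maximum(0.0, d_low - f)     # under-dosed points
              over = np.maximum(0.0, f - d_high)     # over-dosed points
              step = (K.T @ (under / row_norm2) - K.T @ (over / row_norm2)) / (2 * m)
              w = np.maximum(0.0, w + relax * step)  # keep strengths nonnegative
          return w

      w = cimmino(K, d_low, d_high)
      dose = K @ w
      print("fraction of points inside the prescription window:",
            np.mean((dose >= d_low - 1e-3) & (dose <= d_high + 1e-3)))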

  1. Structured background grids for generation of unstructured grids by advancing front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1991-01-01

    A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.
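
    A toy version of the heat-conduction analogy described above is sketched below: spacing values are pinned at a few "source" nodes of a uniform Cartesian background grid and diffused to the remaining nodes by Jacobi iteration (a discrete Laplace/Poisson solve). The grid size, source locations, and spacing values are assumptions for illustration, not the paper's implementation.

      import numpy as np

      N = 64
      sources = {(10, 10): 0.05, (50, 40): 0.50, (30, 55): 0.20}  # (i, j) -> prescribed spacing

      s = np.full((N, N), np.mean(list(sources.values())))
      fixed = np.zeros((N, N), dtype=bool)
      for (i, j), val in sources.items():
          s[i, j], fixed[i, j] = val, True

      for _ in range(5000):
          new = s.copy()
          # average of the four neighbours at interior nodes
          new[1:-1, 1:-1] = 0.25 * (s[:-2, 1:-1] + s[2:, 1:-1] + s[1:-1, :-2] + s[1:-1, 2:])
          # zero-gradient (insulated) boundaries
          new[0, :], new[-1, :], new[:, 0], new[:, -1] = new[1, :], new[-2, :], new[:, 1], new[:, -2]
          new[fixed] = s[fixed]          # keep prescribed source values pinned
          s = new

      # s can now be interpolated to give the local element size requested by the
      # advancing-front generator at any (x, y) location.
      print("spacing range over the background grid:", s.min(), s.max())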

  2. Number density distribution of near-infrared sources on a sub-degree scale in the Galactic center: Comparison with the Fe XXV Kα line at 6.7 keV

    NASA Astrophysics Data System (ADS)

    Yasui, Kazuki; Nishiyama, Shogo; Yoshikawa, Tatsuhito; Nagatomo, Schun; Uchiyama, Hideki; Tsuru, Takeshi Go; Koyama, Katsuji; Tamura, Motohide; Kwon, Jungmi; Sugitani, Koji; Schödel, Rainer; Nagata, Tetsuya

    2015-12-01

    The stellar distribution derived from an H- and KS-band survey of the central region of our Galaxy is compared with the Fe XXV Kα (6.7 keV) line intensity observed with the Suzaku satellite. The survey is for the galactic coordinates |l| ≲ 3.0° and |b| ≲ 1.0° (equivalent to 0.8 kpc × 0.3 kpc for R⊙ = 8 kpc), and the number-density distribution N(KS,0; l, b) of stars is derived by using the extinction-corrected magnitude KS,0 = 10.5. This is deep enough to probe the old red-giant population and in turn to estimate the (l, b) distribution of faint X-ray point sources such as coronally active binaries and cataclysmic variables. In the Galactic plane (b = 0°), N(10.5; l, b) increases in the direction of the Galactic center as |l|^(-0.30±0.03) in the range -0.1° ≥ l ≥ -0.7°, but this increase is significantly slower than the increase (|l|^(-0.44±0.02)) of the Fe XXV Kα line intensity. If normalized with the ratios in the outer region 1.5° ≤ |l| ≤ 2.8°, where faint X-ray point sources are argued to dominate the diffuse Galactic X-ray ridge emission, the excess of the Fe XXV Kα line intensity over the stellar number density is at least a factor of two at |l| = 0.1°. This indicates that a significant part of the Galactic-center diffuse emission arises from a truly diffuse optically thin thermal plasma, and not from an unresolved collection of faint X-ray point sources related to the old stellar population.

  3. Monte Carlo simulation for light propagation in 3D tooth model

    NASA Astrophysics Data System (ADS)

    Fu, Yongji; Jacques, Steven L.

    2011-03-01

    Monte Carlo (MC) simulation was implemented in a three-dimensional tooth model to simulate light propagation in the tooth for antibiotic photodynamic therapy and other laser therapies. The goal of this research is to estimate the light energy deposition in the target region of the tooth given the light source information, tooth optical properties and tooth structure. Two use cases were presented to demonstrate the practical application of this model. One case compared the dose distributions of an isotropic point source and a narrow beam, and the other compared different incident points for the same light source. This model will help the clinician with PDT design in the tooth.

  4. Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.

    1981-01-01

    To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
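
    The equivalent-source idea can be sketched in a simplified flat-earth form (the study itself works in spherical geometry with 1-degree-averaged data): point-source strengths are fit to observed anomalies by damped least squares, and the fitted sources are then reused to evaluate a linear transformation such as upward continuation. All geometry and data values below are synthetic.

      import numpy as np

      rng = np.random.default_rng(2)

      n_obs, n_src = 300, 100
      x_obs = rng.uniform(0, 100, n_obs)            # observation locations [km], synthetic
      x_src = np.linspace(0, 100, n_src)            # equivalent-source locations [km]
      h_obs, z_src, h_new = 5.0, -10.0, 20.0        # altitudes/depth [km], assumed

      def vertical_attraction(x_o, h_o, x_s, z_s):
          """Vertical attraction of unit point sources (constant factors dropped)."""
          dx = x_o[:, None] - x_s[None, :]
          dz = h_o - z_s
          return dz / (dx**2 + dz**2) ** 1.5

      A = vertical_attraction(x_obs, h_obs, x_src, z_src)
      g_obs = np.sin(x_obs / 12.0) + 0.02 * rng.standard_normal(n_obs)   # synthetic anomaly data

      # damped least squares for the equivalent source strengths
      damping = 1e-6
      m = np.linalg.solve(A.T @ A + damping * np.eye(n_src), A.T @ g_obs)

      # linear transformation of the fitted sources: upward continuation to h_new
      A_new = vertical_attraction(x_obs, h_new, x_src, z_src)
      g_up = A_new @ m
      print("rms misfit at h_obs:", np.sqrt(np.mean((A @ m - g_obs) ** 2)))
      print("continued-field rms at h_new:", np.sqrt(np.mean(g_up ** 2)))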

  5. Non-Point Source Pollutant Load Variation in Rapid Urbanization Areas by Remote Sensing, GIS and the L-THIA Model: A Case in Bao'an District, Shenzhen, China.

    PubMed

    Li, Tianhong; Bai, Fengjiao; Han, Peng; Zhang, Yuanyan

    2016-11-01

    Urban sprawl is a major driving force that alters local and regional hydrology and increases non-point source pollution. Using the Bao'an District in Shenzhen, China, a typical rapid urbanization area, as the study area and land-use change maps from 1988 to 2014 that were obtained by remote sensing, the contributions of different land-use types to NPS pollutant production were assessed with a localized long-term hydrologic impact assessment (L-THIA) model. The results show that the non-point source pollution load changed significantly both in terms of magnitude and spatial distribution. The loads of chemical oxygen demand, total suspended substances, total nitrogen and total phosphorus were affected by the interactions between event mean concentration and the magnitude of changes in land-use acreages and the spatial distribution. From 1988 to 2014, the loads of chemical oxygen demand, suspended substances and total phosphorus showed clearly increasing trends with rates of 132.48 %, 32.52 % and 38.76 %, respectively, while the load of total nitrogen decreased by 71.52 %. The immigrant population ratio was selected as an indicator to represent the level of rapid urbanization and industrialization in the study area, and a comparison analysis of the indicator with the four non-point source loads demonstrated that the chemical oxygen demand, total phosphorus and total nitrogen loads are linearly related to the immigrant population ratio. The results provide useful information for environmental improvement and city management in the study area.
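
    The kind of bookkeeping implied by an L-THIA-style assessment, runoff depth per land use multiplied by an event mean concentration (EMC) and the land-use acreage, can be sketched as follows; the areas, runoff depths, and EMC values are placeholders, not values from the Bao'an study.

      # Minimal load bookkeeping sketch: load = runoff volume x EMC, summed over land uses.
      land_use = {
          #            area_km2  runoff_mm  EMC_COD_mg_per_L   (all illustrative)
          "urban":     (120.0,    35.0,      60.0),
          "cropland":  (80.0,     18.0,      40.0),
          "forest":    (50.0,      6.0,      10.0),
      }

      total_kg = 0.0
      for name, (area_km2, runoff_mm, emc_mg_l) in land_use.items():
          volume_m3 = area_km2 * 1e6 * runoff_mm / 1000.0      # runoff volume [m^3]
          load_kg = volume_m3 * emc_mg_l / 1000.0              # m^3 x mg/L = g -> /1000 = kg
          total_kg += load_kg
          print(f"{name:>9}: COD load ~ {load_kg/1000:.1f} t")
      print(f"    total: ~ {total_kg/1000:.1f} t COD")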

  6. Non-Point Source Pollutant Load Variation in Rapid Urbanization Areas by Remote Sensing, GIS and the L-THIA Model: A Case in Bao'an District, Shenzhen, China

    NASA Astrophysics Data System (ADS)

    Li, Tianhong; Bai, Fengjiao; Han, Peng; Zhang, Yuanyan

    2016-11-01

    Urban sprawl is a major driving force that alters local and regional hydrology and increases non-point source pollution. Using the Bao'an District in Shenzhen, China, a typical rapid urbanization area, as the study area and land-use change maps from 1988 to 2014 that were obtained by remote sensing, the contributions of different land-use types to NPS pollutant production were assessed with a localized long-term hydrologic impact assessment (L-THIA) model. The results show that the non-point source pollution load changed significantly both in terms of magnitude and spatial distribution. The loads of chemical oxygen demand, total suspended substances, total nitrogen and total phosphorus were affected by the interactions between event mean concentration and the magnitude of changes in land-use acreages and the spatial distribution. From 1988 to 2014, the loads of chemical oxygen demand, suspended substances and total phosphorus showed clearly increasing trends with rates of 132.48 %, 32.52 % and 38.76 %, respectively, while the load of total nitrogen decreased by 71.52 %. The immigrant population ratio was selected as an indicator to represent the level of rapid urbanization and industrialization in the study area, and a comparison analysis of the indicator with the four non-point source loads demonstrated that the chemical oxygen demand, total phosphorus and total nitrogen loads are linearly related to the immigrant population ratio. The results provide useful information for environmental improvement and city management in the study area.

  7. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signature of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.
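
    A per-frequency sketch of the smoothed extended-source inversion described above is given below: the data vector is fit by a grid of point sources while a first-difference operator penalizes short-scale fluctuations between adjacent sources. The Green's functions, grid size, and noise are synthetic, and the smoothing weight is simply fixed here, whereas the study selects it by minimizing ABIC.

      import numpy as np

      rng = np.random.default_rng(3)

      n_rec, n_src = 24, 10                     # receivers, gridded point sources (assumed)
      G = rng.standard_normal((n_rec, n_src)) + 1j * rng.standard_normal((n_rec, n_src))
      s_true = np.exp(-0.5 * ((np.arange(n_src) - 4.5) / 1.5) ** 2)   # smooth "crack" excitation
      d = G @ s_true + 0.05 * (rng.standard_normal(n_rec) + 1j * rng.standard_normal(n_rec))

      # first-difference operator between adjacent grid sources (smoothing constraint)
      D = np.diff(np.eye(n_src), axis=0)

      mu = 1.0                                   # smoothing weight (fixed here; ABIC-selected in the paper)
      A = G.conj().T @ G + mu**2 * D.T @ D
      s_hat = np.linalg.solve(A, G.conj().T @ d)

      print("relative error of smoothed estimate:",
            np.linalg.norm(s_hat.real - s_true) / np.linalg.norm(s_true))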

  8. Dosimetry of 192Ir sources used for endovascular brachytherapy

    NASA Astrophysics Data System (ADS)

    Reynaert, N.; Van Eijkeren, M.; Taeymans, Y.; Thierens, H.

    2001-02-01

    An in-phantom calibration technique for 192Ir sources used for endovascular brachytherapy is presented. Three different source lengths were investigated. The calibration was performed in a solid phantom using a Farmer-type ionization chamber at source to detector distances ranging from 1 cm to 5 cm. The dosimetry protocol for medium-energy x-rays extended with a volume-averaging correction factor was used to convert the chamber reading to dose to water. The air kerma strength of the sources was determined as well. EGS4 Monte Carlo calculations were performed to determine the depth dose distribution at distances ranging from 0.6 mm to 10 cm from the source centre. In this way we were able to convert the absolute dose rate at 1 cm distance to the reference point chosen at 2 mm distance. The Monte Carlo results were confirmed by radiochromic film measurements, performed with a double-exposure technique. The dwell times to deliver a dose of 14 Gy at the reference point were determined and compared with results given by the source supplier (CORDIS). They determined the dwell times from a Sievert integration technique based on the source activity. The results from both methods agreed to within 2% for the 12 sources that were evaluated. A Visual Basic routine that superimposes dose distributions, based on the Monte Carlo calculations and the in-phantom calibration, onto intravascular ultrasound images is presented. This routine can be used as an online treatment planning program.

  9. Toxic metals in Venice lagoon sediments: Model, observation, and possible removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basu, A.; Molinaroli, E.

    1994-11-01

    We have modeled the distribution of nine toxic metals in the surface sediments from 163 stations in the Venice lagoon using published data. Three entrances from the Adriatic Sea control the circulation in the lagoon and divide it into three basins. We assume, for purposes of modeling, that Porto Marghera at the head of the Industrial Zone area is the single source of toxic metals in the Venice lagoon. In a standing body of lagoon water, the concentration C of pollutants at distance x from the source may be given by C = C_0 e^(-kx), where C_0 is the concentration at the source and k is the rate constant of dispersal. We calculated k empirically using concentrations at the source and those farthest from it, that is, at the end points of the lagoon. Average k values (ppm/km) in the lagoon are: Zn 0.165, Cd 0.116, Hg 0.110, Cu 0.105, Co 0.072, Pb 0.058, Ni 0.008, Cr (0.011) and Fe (0.018 percent/km), and they have complex distributions. Given the k values, the concentration at the source (C_0), and the distance x of any point in the lagoon from the source, we have calculated the model concentrations of the nine metals at each sampling station. Tides, currents, floor morphology, additional sources, and continued dumping perturb model distributions, causing anomalies (observed minus model concentrations). Positive anomalies are found near the source, where continued dumping perturbs initial boundary conditions, and in areas of sluggish circulation. Negative anomalies are found in areas with strong currents that may flush sediments out of the lagoon. We have thus identified areas in the lagoon where a higher rate of sediment removal and exchange may lessen pollution. 41 refs., 4 figs., 3 tabs.
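
    The anomaly calculation described above reduces to a few lines: in the sketch below the dispersal constant k is derived from the concentrations at the source and at the far end of the lagoon via the exponential form C = C_0 exp(-kx), and observed minus modelled values flag positive or negative anomalies. The concentrations and distances are placeholders, not the published Venice-lagoon data.

      import math

      # Placeholder values (ppm and km), not the published data.
      C0, C_far, x_far = 200.0, 40.0, 15.0
      k = math.log(C0 / C_far) / x_far              # from C = C0 * exp(-k x), k in 1/km here

      stations = [                                  # (distance from source [km], observed ppm)
          (2.0, 185.0),
          (6.0, 95.0),
          (12.0, 30.0),
      ]
      for x, observed in stations:
          modelled = C0 * math.exp(-k * x)
          print(f"x = {x:4.1f} km  model = {modelled:6.1f}  anomaly = {observed - modelled:+6.1f}")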

  10. Sediment delivery to the Gulf of Alaska: source mechanisms along a glaciated transform margin

    USGS Publications Warehouse

    Dobson, M.R.; O'Leary, D.; Veart, M.

    1998-01-01

    Sediment delivery to the Gulf of Alaska occurs via four areally extensive deep-water fans, sourced from grounded tidewater glaciers. During periods of climatic cooling, glaciers cross a narrow shelf and discharge sediment down the continental slope. Because the coastal terrain is dominated by fjords and a narrow, high-relief Pacific watershed, deposition is dominated by channelized point-source fan accumulations, the volumes of which are primarily a function of climate. The sediment distribution is modified by a long-term tectonic translation of the Pacific plate to the north along the transform margin. As a result, the deep-water fans are gradually moved away from the climatically controlled point sources. Sets of abandoned channels record the effect of translation during the Plio-Pleistocene.

  11. A very deep IRAS survey at l(II) = 97 deg, b(II) = +30 deg

    NASA Technical Reports Server (NTRS)

    Hacking, Perry; Houck, James R.

    1987-01-01

    A deep far-infrared survey is presented using over 1000 scans made of a 4 to 6 sq. deg. field at the north ecliptic pole by the IRAS. Point sources from this survey are up to 100 times fainter than the IRAS point source catalog at 12 and 25 micrometers, and up to 10 times fainter at 60 and 100 micrometers. The 12 and 25 micrometer maps are instrumental noise-limited, and the 60 and 100 micrometer maps are confusion noise-limited. The majority of the 12 micrometer point sources are stars within the Milky Way. The 25 micrometer sources are composed almost equally of stars and galaxies. About 80% of the 60 micrometer sources correspond to galaxies on Palomar Observatory Sky Survey (POSS) enlargements. The remaining 20% are probably galaxies below the POSS detection limit. The differential source counts are presented and compared with what is predicted by the Bahcall and Soneira Standard Galaxy Model using the B-V-12 micrometer colors of stars without circumstellar dust shells given by Waters, Cote and Aumann. The 60 micrometer source counts are inconsistent with those predicted for a uniformly distributed, nonevolving universe. The implications are briefly discussed.

  12. Determination of Jet Noise Radiation Source Locations using a Dual Sideline Cross-Correlation/Spectrum Technique

    NASA Technical Reports Server (NTRS)

    Allen, C. S.; Jaeger, S. M.

    1999-01-01

    The goal of our efforts is to extrapolate near-field jet noise measurements to the geometric far field where the jet noise sources appear to radiate from a single point. To accomplish this, information about the location of noise sources in the jet plume, the radiation patterns of the noise sources and the sound pressure level distribution of the radiated field must be obtained. Since source locations and radiation patterns cannot be found with simple single-microphone measurements, a more complicated method must be used.

  13. Apparatus and method using a holographic optical element for converting a spectral distribution to image points

    NASA Technical Reports Server (NTRS)

    McGill, Matthew J. (Inventor); Scott, Vibart S. (Inventor); Marzouk, Marzouk (Inventor)

    2001-01-01

    A holographic optical element transforms a spectral distribution of light to image points. The element comprises areas, each of which acts as a separate lens to image the light incident in its area to an image point. Each area contains the recorded hologram of a point source object. The image points can be made to lie in a line in the same focal plane so as to align with a linear array detector. A version of the element has been developed that has concentric equal areas to match the circular fringe pattern of a Fabry-Perot interferometer. The element has high transmission efficiency, and when coupled with high quantum efficiency solid state detectors, provides an efficient photon-collecting detection system. The element may be used as part of the detection system in a direct detection Doppler lidar system or multiple field of view lidar system.

  14. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J. C.; Baillet, S.; Jerbi, K.

    2001-01-01

    We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.

  15. "WWW.MDTF.ORG": a World Wide Web forum for developing open-architecture, freely distributed, digital teaching file software by participant consensus.

    PubMed

    Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R

    2001-06-01

    To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process will include standard web browsers, web servers, forum software, and custom additions to the forum software to potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end-goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open-source DTF software that will include function and interface design decisions from community participation on the website forums.

  16. Speciated atmospheric mercury and its potential source in Guiyang, China

    NASA Astrophysics Data System (ADS)

    Fu, Xuewu; Feng, Xinbin; Qiu, Guangle; Shang, Lihai; Zhang, Hui

    2011-08-01

    Speciated atmospheric mercury (Hg) including gaseous elemental mercury (GEM), particulate Hg (PHg), and reactive gaseous Hg (RGM) were continuously measured at an urban site in Guiyang city, southwest China, from August to December 2009. The averaged concentrations for GEM, PHg, and RGM were 9.72 ± 10.2 ng m^-3, 368 ± 676 pg m^-3, and 35.7 ± 43.9 pg m^-3, respectively, which were all highly elevated compared to observations at urban sites in Europe and North America. GEM and PHg were characterized by similar monthly and diurnal patterns, with elevated levels in cold months and at nighttime, respectively. In contrast, RGM did not exhibit clear monthly and diurnal variations. The variations of GEM, PHg, and RGM indicate the sampling site was significantly impacted by sources in the city municipal area. Source identification implied that both residential coal burning and large point sources were responsible for the elevated GEM and PHg concentrations, whereas point sources were the major contributors to elevated RGM concentrations. Point sources played a different role in regulating GEM, PHg, and RGM concentrations. Aside from residential emissions, PHg levels were mostly affected by small-scale coal combustion boilers situated to the east of the sampling site, which were scarcely equipped with particulate control devices or lacked them entirely, whereas point sources situated to the east, southeast, and southwest of the sampling site played an important role in the distribution of atmospheric GEM and RGM.

  17. CENTAURUS A AS A POINT SOURCE OF ULTRAHIGH ENERGY COSMIC RAYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hang Bae, E-mail: hbkim@hanyang.ac.kr

    We probe the possibility that Centaurus A (Cen A) is a point source of ultrahigh energy cosmic rays (UHECRs) observed by the Pierre Auger Observatory (PAO), through the statistical analysis of the arrival direction distribution. For this purpose, we set up the Cen A dominance model for the UHECR sources, in which Cen A contributes the fraction f_C of all UHECRs with energy above 5.5 × 10^19 eV and the isotropic background contributes the remaining 1 - f_C fraction. The effect of the intergalactic magnetic fields on the bending of the trajectory of Cen A-originated UHECRs is parameterized by the Gaussian smearing angle θ_s. For the statistical analysis, we adopted the correlational angular distance distribution (CADD) for the reduction of the arrival direction distribution and the Kuiper test to compare the observed and the expected CADDs. We identify the excess of UHECRs in the Cen A direction and fit the CADD of the observed PAO data by varying the two parameters f_C and θ_s of the Cen A dominance model. The best-fit parameter values are f_C ≈ 0.1 (the corresponding Cen A fraction observed at PAO is f_C,PAO ≈ 0.15, that is, about 10 out of 69 UHECRs) and θ_s = 5° with the maximum likelihood L_max = 0.29. This result supports the existence of a point source smeared by the intergalactic magnetic fields in the direction of Cen A. If Cen A is actually the source responsible for the observed excess of UHECRs, the rms deflection angle of the excess UHECRs implies an intergalactic magnetic field of order 10 nG in the vicinity of Cen A.
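
    The Kuiper test used above compares an empirical distribution with a model distribution while remaining sensitive at both tails; a minimal implementation of the Kuiper statistic is sketched below, applied to angular distances from a candidate source. The isotropic comparison CDF F(θ) = (1 - cos θ)/2 is standard, but the sample sizes and the deliberately exaggerated clustered sample are illustrative, not the PAO data or the paper's CADD reduction.

      import numpy as np

      def kuiper_statistic(sample, model_cdf):
          """Kuiper statistic V = D+ + D- between an empirical sample and a model CDF.
          Unlike Kolmogorov-Smirnov, V is sensitive at both tails of the distribution."""
          x = np.sort(np.asarray(sample))
          n = x.size
          cdf = model_cdf(x)
          i = np.arange(1, n + 1)
          d_plus = np.max(i / n - cdf)
          d_minus = np.max(cdf - (i - 1) / n)
          return d_plus + d_minus

      # Toy comparison: angular distances (degrees) from a candidate source; for an
      # isotropic sky F(theta) = (1 - cos theta) / 2. The clustered sample is exaggerated.
      rng = np.random.default_rng(4)
      iso_cdf = lambda t: (1.0 - np.cos(np.radians(t))) / 2.0

      theta_iso = np.degrees(np.arccos(1.0 - 2.0 * rng.uniform(size=69)))        # isotropic draw
      theta_clu = np.concatenate([rng.normal(5.0, 3.0, 20).clip(0, 180),          # 20 events near the source
                                  np.degrees(np.arccos(1.0 - 2.0 * rng.uniform(size=49)))])
      print("V (isotropic sample):", round(kuiper_statistic(theta_iso, iso_cdf), 3))
      print("V (clustered sample):", round(kuiper_statistic(theta_clu, iso_cdf), 3))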

  18. Distribution, Source, and Ecological Risk Assessment of Polycyclic Aromatic Hydrocarbons in Surface Sediment of Liaodong Bay, Northeast China

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Tao, Ping; Li, Yuxia; Guo, Qi; Zhang, Yan; Wang, Man; Jia, Hongliang; Shao, Mihua

    2018-01-01

    Sixteen polycyclic aromatic hydrocarbons (PAHs) were determined in surface sediments from Liaodong Bay, northeast China. The concentration levels of total PAHs (Σ16PAHs) in sediment were 11.0-249.6 ng·g^-1 dry weight (dw), with a mean value of 89.9 ng·g^-1 dw. In terms of spatial distribution, high PAH levels were found in the western areas of Liaodong Bay. Sources of PAHs were investigated by diagnostic ratios, which indicated that pyrogenic sources were the main sources of PAHs in the sediment of Liaodong Bay. In addition, selected PAH levels in sediments were compared with Sediment Quality Guidelines (ERM-ERL indexes) to evaluate probable toxic effects on marine organisms.

  19. [Spatio-temporal characteristics and source identification of water pollutants in Wenruitang River watershed].

    PubMed

    Ma, Xiao-xue; Wang, La-chun; Liao, Ling-ling

    2015-01-01

    Identifying the spatio-temporal distribution and sources of water pollutants is of great significance for efficient water quality management and pollution control in the Wenruitang River watershed, China. A total of twelve water quality parameters, including temperature, pH, dissolved oxygen (DO), total nitrogen (TN), ammonia nitrogen (NH4+-N), electrical conductivity (EC), turbidity (Turb), nitrite-N (NO2-), nitrate-N (NO3-), phosphate-P (PO4(3-)), total organic carbon (TOC) and silicate (SiO3(2-)), were analyzed from September 2008 to October 2009. Geographic information system (GIS) and principal component analysis (PCA) were used to determine the spatial distribution and to apportion the sources of pollutants. The results demonstrated that TN, NH4+-N and PO4(3-) were the main pollutants during the flow, wet and dry periods, respectively, which was mainly caused by urban point sources and agricultural and rural non-point sources. In spatial terms, the order of pollution was tertiary river > secondary river > primary river, while the water quality was worse in city zones than in the suburb and wetland zones regardless of the river classification. In temporal terms, the order of pollution was dry period > wet period > flow period. Population density, land use type and water transfer affected the water quality in the Wenruitang River.

  20. The detection of carbon dioxide leaks using quasi-tomographic laser absorption spectroscopy measurements in variable wind

    DOE PAGES

    Levine, Zachary H.; Pintar, Adam L.; Dobler, Jeremy T.; ...

    2016-04-13

    Laser absorption spectroscopy (LAS) has been used over the last several decades for the measurement of trace gasses in the atmosphere. For over a decade, LAS measurements from multiple sources and tens of retroreflectors have been combined with sparse-sample tomography methods to estimate the 2-D distribution of trace gas concentrations and underlying fluxes from point-like sources. In this work, we consider the ability of such a system to detect and estimate the position and rate of a single point leak which may arise as a failure mode for carbon dioxide storage. The leak is assumed to be at a constant rate giving rise to a plume with a concentration and distribution that depend on the wind velocity. Lastly, we demonstrate the ability of our approach to detect a leak using numerical simulation and also present a preliminary measurement.

  1. Intensity distribution of the x ray source for the AXAF VETA-I mirror test

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann

    1992-01-01

    The X-ray generator for the AXAF VETA-I mirror test is an electron impact X-ray source with various anode materials. The source sizes of different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30 micrometer diameter pinhole for imaging the source and a Microchannel Plate Imaging Detector with 25 micrometer FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measuring the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured with different operating parameters. During the VETA-I test, microscope pictures were taken of each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in the pictures. They are compared and agree with the results from the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size at 528 meters away, the angular size to VETA is less than 0.17 arcsec, which is small compared to the on-ground VETA angular resolution (0.5 arcsec required and 0.22 arcsec measured). Even so, the results show the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on-ground and predicted in-orbit VETA Point Response Function.

  2. Fabrication and In Situ Testing of Scalable Nitrate-Selective Electrodes for Distributed Observations

    NASA Astrophysics Data System (ADS)

    Harmon, T. C.; Rat'ko, A.; Dietrich, H.; Park, Y.; Wijsboom, Y. H.; Bendikov, M.

    2008-12-01

    Inorganic nitrogen (nitrate (NO3-) and ammonium (NH4+)) from chemical fertilizer and livestock waste is a major source of pollution in groundwater, surface water and the air. While some sources of these chemicals, such as waste lagoons, are well-defined, their application as fertilizer has the potential to create distributed or non-point source pollution problems. Scalable nitrate sensors (small and inexpensive) would enable us to better assess non-point source pollution processes in agronomic soils, groundwater and rivers subject to non-point source inputs. This work describes the fabrication and testing of inexpensive PVC-membrane-based ion selective electrodes (ISEs) for monitoring nitrate levels in soil water environments. ISE-based sensors have the advantages of being easy to fabricate and use, but suffer several shortcomings, including limited sensitivity, poor precision, and calibration drift. However, modern materials have begun to yield more robust ISE types in laboratory settings. This work emphasizes the in situ behavior of commercial and fabricated sensors in soils subject to irrigation with dairy manure water. Results are presented in the context of deployment techniques (in situ versus soil lysimeters), temperature compensation, and uncertainty analysis. Observed temporal responses of the nitrate sensors exhibited diurnal cycling with elevated nitrate levels at night and depressed levels during the day. Conventional samples collected via lysimeters validated this response. It is concluded that while modern ISEs are not yet ready for long-term, unattended deployment, short-term installations (on the order of 2 to 4 days) are viable and may provide valuable insights into nitrogen dynamics in complex soil systems.

  3. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
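
    The distributed CN-VSA idea can be sketched in a few lines: the traditional SCS-CN equation gives the storm runoff depth, a runoff-generating (saturated) area fraction is derived from it, and the wettest cells, ranked by a TOPMODEL-style topographic index, are flagged as the source areas. The curve number, rainfall, index map, and the particular Af expression below are illustrative assumptions following one common CN-VSA form, not the calibrated values of the study.

      import numpy as np

      CN = 75.0
      S = 25400.0 / CN - 254.0          # potential retention [mm]
      Ia = 0.2 * S                      # initial abstraction [mm]
      P = 60.0                          # storm rainfall [mm], assumed

      Pe = max(P - Ia, 0.0)             # effective rainfall
      Q = Pe**2 / (Pe + S) if Pe > 0 else 0.0           # traditional SCS-CN runoff depth [mm]
      Af = 1.0 - (S / (Pe + S))**2 if Pe > 0 else 0.0   # runoff-generating fraction (one common form)

      # distribute: the wettest Af fraction of cells (highest topographic index) produce runoff
      topo_index = np.random.default_rng(5).gamma(shape=3.0, scale=2.0, size=10000)  # stand-in index map
      threshold = np.quantile(topo_index, 1.0 - Af)
      source_area = topo_index >= threshold

      print(f"S = {S:.1f} mm, Q = {Q:.1f} mm, saturated fraction Af = {Af:.2f}")
      print("cells flagged as runoff source areas:", int(source_area.sum()), "of", topo_index.size)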

  4. Ultra-high resolution of radiocesium distribution detection based on Cherenkov light imaging

    NASA Astrophysics Data System (ADS)

    Yamamoto, Seiichi; Ogata, Yoshimune; Kawachi, Naoki; Suzui, Nobuo; Yin, Yong-Gen; Fujimaki, Shu

    2015-03-01

    After the nuclear disaster in Fukushima, radiocesium contamination became a serious scientific concern and research on its effects on plants increased. In such plant studies, high resolution images of radiocesium are required without contacting the subjects. Cherenkov light imaging of beta radionuclides has inherently high resolution and is promising for plant research. Since 137Cs and 134Cs emit beta particles, Cherenkov light imaging will be useful for imaging the radiocesium distribution. Consequently, we developed and tested a Cherenkov light imaging system. We used a high sensitivity cooled charge coupled device (CCD) camera (Hamamatsu Photonics, ORCA2-ER) for imaging Cherenkov light from 137Cs. A bright lens (Xenon, F-number: 0.95, lens diameter: 25 mm) was mounted on the camera and placed in a black box. With a 100-μm 137Cs point source, we obtained 220-μm spatial resolution in the Cherenkov light image. With a 1-mm diameter, 320-kBq 137Cs point source, the source was distinguished within 2 s. We successfully obtained Cherenkov light images of a plant whose root was dipped in a 137Cs solution and of radiocesium-containing samples, as well as line and character phantom images, with our imaging system. Cherenkov light imaging is promising for high resolution imaging of radiocesium distribution without contacting the subject.

  5. Analyzing Variability in Landscape Nutrient Loading Using Spatially-Explicit Maps in the Great Lakes Basin

    NASA Astrophysics Data System (ADS)

    Hamlin, Q. F.; Kendall, A. D.; Martin, S. L.; Whitenack, H. D.; Roush, J. A.; Hannah, B. A.; Hyndman, D. W.

    2017-12-01

    Excessive loading of nitrogen and phosphorous to the landscape has caused biologically and economically damaging eutrophication and harmful algal blooms in the Great Lakes Basin (GLB) and across the world. We mapped source-specific loads of nitrogen and phosphorous to the landscape using broadly available data across the GLB. SENSMap (Spatially Explicit Nutrient Source Map) is a 30m resolution snapshot of nutrient loads ca. 2010. We use these maps to study variable nutrient loading and provide this information to watershed managers through NOAA's GLB Tipping Points Planner. SENSMap individually maps nutrient point sources and six non-point sources: 1) atmospheric deposition, 2) septic tanks, 3) non-agricultural chemical fertilizer, 4) agricultural chemical fertilizer, 5) manure, and 6) nitrogen fixation from legumes. To model source-specific loads at high resolution, SENSMap synthesizes a wide range of remotely sensed, surveyed, and tabular data. Using these spatially explicit nutrient loading maps, we can better calibrate local land use-based water quality models and provide insight to watershed managers on how to focus nutrient reduction strategies. Here we examine differences in dominant nutrient sources across the GLB, and how those sources vary by land use. SENSMap's high resolution, source-specific approach offers a different lens to understand nutrient loading than traditional semi-distributed or land use based models.

  6. [Nitrogen non-point source pollution identification based on ArcSWAT in Changle River].

    PubMed

    Deng, Ou-Ping; Sun, Si-Yang; Lü, Jun

    2013-04-01

    The ArcSWAT (Soil and Water Assessment Tool) model was adopted for non-point source (NPS) nitrogen pollution modeling and nitrogen source apportionment in the Changle River watershed, a typical agricultural watershed in Southeast China. Water quality and hydrological parameters were monitored, and the watershed natural conditions (including soil, climate, land use, etc.) and pollution source information were also investigated and collected for the SWAT database. The ArcSWAT model was established for the Changle River after calibration and validation of the model parameters. Based on the validated SWAT model, the contributions of different nitrogen sources to river TN loading were quantified, and spatio-temporal distributions of NPS nitrogen export to rivers were addressed. The results showed that in the Changle River watershed, nitrogen fertilizer, atmospheric nitrogen deposition and the soil nitrogen pool were the prominent pollution sources, which contributed 35%, 32% and 25% of the river TN loading, respectively. There were spatio-temporal variations in the critical sources of NPS TN export to the river. Natural sources, such as the soil nitrogen pool and atmospheric nitrogen deposition, should be targeted as the critical sources of river TN pollution during the rainy seasons. Chemical nitrogen fertilizer application should be targeted as the critical source of river TN pollution during the crop growing season. Chemical nitrogen fertilizer application, the soil nitrogen pool and atmospheric nitrogen deposition were the main sources of TN exported from garden plots, forest and residential land, respectively. However, they were the main sources of TN exported from both upland and paddy fields. These results revealed that NPS pollution control measures should focus on the spatio-temporal distribution of NPS pollution sources.

  7. The Ionization Source in the Nucleus of M84

    NASA Technical Reports Server (NTRS)

    Bower, G. A.; Green, R. F.; Quillen, A. C.; Danks, A.; Malumuth, E. M.; Gull, T.; Woodgate, B.; Hutchings, J.; Joseph, C.; Kaiser, M. E.

    2000-01-01

    We have obtained new Hubble Space Telescope (HST) observations of M84, a nearby massive elliptical galaxy whose nucleus contains an approximately 1.5 × 10^9 solar mass dark compact object, which presumably is a supermassive black hole. Our Space Telescope Imaging Spectrograph (STIS) spectrum provides the first clear detection of emission lines in the blue (e.g., [O II] λ3727, Hβ and [O III] λλ4959, 5007), which arise from a compact region approximately 0.28 arcsec across centered on the nucleus. Our Near Infrared Camera and Multi-Object Spectrometer (NICMOS) images exhibit the best view through the prominent dust lanes evident at optical wavelengths and provide a more accurate correction for the internal extinction. The relative fluxes of the emission lines we have detected in the blue, together with those detected in the wavelength range 6295-6867 Å by Bower et al., indicate that the gas at the nucleus is photoionized by a nonstellar process, instead of hot stars. Stellar absorption features from cool stars at the nucleus are very weak. We update the spectral energy distribution of the nuclear point source and find that although it is roughly flat in most bands, the optical to UV continuum is very red, similar to the spectral energy distribution of BL Lac. Thus, the nuclear point source seen in high-resolution optical images is not a star cluster but is instead a nonstellar source. Assuming isotropic emission from this source, we estimate that the ratio of bolometric luminosity to Eddington luminosity is about 5 × 10^-7. However, this could be underestimated if this source is a misaligned BL Lac object, which is a possibility suggested by the spectral energy distribution and the evidence of optical variability we describe.

  8. Local spectrum analysis of field propagation in an anisotropic medium. Part II. Time-dependent fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.

  9. Breakthrough in 4π ion emission mechanism understanding in plasma focus devices.

    PubMed

    Sohrabi, Mehdi; Zarinshad, Arefe; Habibi, Morteza

    2016-12-12

    Ion emission angular distribution mechanisms in plasma focus devices (PFDs) have not yet been well understood, due to the lack of an efficient wide-angle ion distribution image detection system that can characterize the PFD space in detail. The present belief is that the acceleration of ions points from the "anode top" upwards, in the forward direction, within a small solid angle. A breakthrough is reported in this study: using mega-size position-sensitive polycarbonate ion image detection systems invented here, 4π ion emission was discovered from the "anode top" in the PFD space after plasma pinch instability, along with radial run-away of ions from the "anode-cathodes array" during axial acceleration of the plasma sheaths before the radial phase. These two ion emission source mechanisms behave respectively as a "Point Ion Source" and a "Line Ion Source", forming "Ion Cathode Shadows" on the mega-size detectors. We believe that the inventions and discoveries made here will open new horizons for advanced ion emission studies towards a better understanding of the mechanisms and, in particular, will promote efficient applications of PFDs in medicine, science and technology.

  10. Spatial distribution of pollutants in the area of the former CHP plant

    NASA Astrophysics Data System (ADS)

    Cichowicz, Robert

    2018-01-01

    The quality of atmospheric air and its level of pollution are now among the most important issues connected with life on Earth. The frequent nuisances and exceedances of pollution standards described in the media are generated by both low-emission sources and mobile sources. Local organized energy emission sources, such as local boiler houses or CHP plants, also have an impact on air pollution. At the same time, it is important to remember that the role of local power stations in shaping air pollution immission fields depends on the height of the emitters and the functioning of waste gas treatment installations. The analysis of air pollution distribution was carried out in two series, i.e. 2 and 10 weeks after closure of the CHP plant. The largest street intersection in the immediate vicinity of the plant was selected as the reference point of the analysis, from which virtual circles were drawn every 50 meters; 31 measuring points were located on these circles. As a result, carbon dioxide, hydrogen sulfide and ammonia levels could be observed and analyzed as a function of distance from the street intersection.

  11. Restricted genetic variation in populations of Achatina (Lissachatina) fulica outside of East Africa and the Indian Ocean Islands points to the Indian Ocean Islands as the earliest known common source.

    PubMed

    Fontanilla, Ian Kendrich C; Sta Maria, Inna Mikaella P; Garcia, James Rainier M; Ghate, Hemant; Naggs, Fred; Wade, Christopher M

    2014-01-01

    The Giant African Land Snail, Achatina ( =  Lissachatina) fulica Bowdich, 1822, is a tropical crop pest species with a widespread distribution across East Africa, the Indian subcontinent, Southeast Asia, the Pacific, the Caribbean, and North and South America. Its current distribution is attributed primarily to the introduction of the snail to new areas by Man within the last 200 years. This study determined the extent of genetic diversity in global A. fulica populations using the mitochondrial 16S ribosomal RNA gene. A total of 560 individuals were evaluated from 39 global populations obtained from 26 territories. Results reveal 18 distinct A. fulica haplotypes; 14 are found in East Africa and the Indian Ocean islands, but only two haplotypes from the Indian Ocean islands emerged from this region, the C haplotype, now distributed across the tropics, and the D haplotype in Ecuador and Bolivia. Haplotype E from the Philippines, F from New Caledonia and Barbados, O from India and Q from Ecuador are variants of the emergent C haplotype. For the non-native populations, the lack of genetic variation points to founder effects due to the lack of multiple introductions from the native range. Our current data could only point with certainty to the Indian Ocean islands as the earliest known common source of A. fulica across the globe, which necessitates further sampling in East Africa to determine the source populations of the emergent haplotypes.

  12. Restricted Genetic Variation in Populations of Achatina (Lissachatina) fulica outside of East Africa and the Indian Ocean Islands Points to the Indian Ocean Islands as the Earliest Known Common Source

    PubMed Central

    Fontanilla, Ian Kendrich C.; Sta. Maria, Inna Mikaella P.; Garcia, James Rainier M.; Ghate, Hemant; Naggs, Fred; Wade, Christopher M.

    2014-01-01

    The Giant African Land Snail, Achatina ( = Lissachatina) fulica Bowdich, 1822, is a tropical crop pest species with a widespread distribution across East Africa, the Indian subcontinent, Southeast Asia, the Pacific, the Caribbean, and North and South America. Its current distribution is attributed primarily to the introduction of the snail to new areas by Man within the last 200 years. This study determined the extent of genetic diversity in global A. fulica populations using the mitochondrial 16S ribosomal RNA gene. A total of 560 individuals were evaluated from 39 global populations obtained from 26 territories. Results reveal 18 distinct A. fulica haplotypes; 14 are found in East Africa and the Indian Ocean islands, but only two haplotypes from the Indian Ocean islands emerged from this region, the C haplotype, now distributed across the tropics, and the D haplotype in Ecuador and Bolivia. Haplotype E from the Philippines, F from New Caledonia and Barbados, O from India and Q from Ecuador are variants of the emergent C haplotype. For the non-native populations, the lack of genetic variation points to founder effects due to the lack of multiple introductions from the native range. Our current data could only point with certainty to the Indian Ocean islands as the earliest known common source of A. fulica across the globe, which necessitates further sampling in East Africa to determine the source populations of the emergent haplotypes. PMID:25203830

  13. Feature Geo Analytics and Big Data Processing: Hybrid Approaches for Earth Science and Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.

    2016-12-01

    Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster at 14 Gb RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.
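
    As a rough sketch of the kind of distributed point aggregation described above, the snippet below bins point records into square cells with PySpark; the input file, column names, and cell size are hypothetical placeholders, and this is not the ArcGIS Server / GeoAnalytics implementation itself.

        from pyspark.sql import SparkSession
        import pyspark.sql.functions as F

        # Minimal sketch of distributed point aggregation into square bins.
        # "points.csv" with projected coordinate columns x, y is hypothetical.
        spark = SparkSession.builder.appName("point-aggregation-sketch").getOrCreate()
        points = spark.read.csv("points.csv", header=True, inferSchema=True)

        cell = 100.0  # assumed cell size in the units of x/y
        binned = (points
                  .withColumn("ix", F.floor(F.col("x") / cell))
                  .withColumn("iy", F.floor(F.col("y") / cell))
                  .groupBy("ix", "iy")
                  .count())          # point density per cell, computed across the cluster

        binned.show(10)
        spark.stop()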

  14. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
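
    The consensus step at the heart of such a scheme can be sketched in a few lines: every node holds its own estimate of the source location and repeatedly forms a convex combination of that estimate with its neighbours' values. The network topology, weights, and noise below are illustrative assumptions; the paper's local least-squares step and plume model are not reproduced here.

        import numpy as np

        # Hedged sketch of the consensus step only: each node holds a local estimate
        # of the source location and repeatedly forms a convex combination of its own
        # estimate with those of its neighbours.  Topology and weights are made up.
        rng = np.random.default_rng(0)
        n_nodes = 6
        true_source = np.array([12.0, 7.0])
        local_estimates = true_source + rng.normal(scale=3.0, size=(n_nodes, 2))

        # ring topology: each node exchanges values with its two neighbours
        W = np.zeros((n_nodes, n_nodes))
        for i in range(n_nodes):
            W[i, i] = 0.5
            W[i, (i - 1) % n_nodes] = 0.25
            W[i, (i + 1) % n_nodes] = 0.25   # rows sum to 1 -> convex combination

        x = local_estimates.copy()
        for _ in range(50):
            x = W @ x                        # distributed averaging step

        print("spread of estimates after consensus:", np.ptp(x, axis=0))
        print("mean estimate:", x.mean(axis=0), "true source:", true_source)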

  15. Sampling Singular and Aggregate Point Sources of Carbon Dioxide from Space Using OCO-2

    NASA Astrophysics Data System (ADS)

    Schwandner, F. M.; Gunson, M. R.; Eldering, A.; Miller, C. E.; Nguyen, H.; Osterman, G. B.; Taylor, T.; O'Dell, C.; Carn, S. A.; Kahn, B. H.; Verhulst, K. R.; Crisp, D.; Pieri, D. C.; Linick, J.; Yuen, K.; Sanchez, R. M.; Ashok, M.

    2016-12-01

    Anthropogenic carbon dioxide (CO2) sources increasingly tip the natural balance between natural carbon sources and sinks. Space-borne measurements offer opportunities to detect and analyze point source emission signals anywhere on Earth. Singular continuous point source plumes from power plants or volcanoes turbulently mix into their proximal background fields. In contrast, plumes of aggregate point sources such as cities, and transportation or fossil fuel distribution networks, mix into each other and may therefore result in broader and more persistent excess signals of total column averaged CO2 (XCO2). NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory-2 (OCO-2), launched in July 2014 and now leads the afternoon constellation of satellites (A-Train). While continuously collecting measurements in eight footprints across a narrow (<10 km) swath, it occasionally cross-cuts coincident emission plumes. For singular point sources like volcanoes and coal-fired power plants, we have developed OCO-2 data discovery tools and a proxy detection method for plumes using SO2-sensitive TIR imaging data (ASTER). This approach offers a path toward automating plume detections with subsequent matching and mining of OCO-2 data. We found several distinct singular source CO2 signals. For aggregate point sources, we investigated whether OCO-2's multi-sounding swath observing geometry can reveal intra-urban spatial emission structures in the observed variability of XCO2 data. OCO-2 data demonstrate that we can detect localized excess XCO2 signals of 2 to 6 ppm against suburban and rural backgrounds. Compared to single-shot GOSAT soundings which detected urban/rural XCO2 differences in megacities (Kort et al., 2012), the OCO-2 swath geometry opens up the path to future capabilities enabling urban characterization of greenhouse gases using hundreds of soundings over a city at each satellite overpass.

  16. The excitation of long period seismic waves by a source spanning a structural discontinuity

    NASA Astrophysics Data System (ADS)

    Woodhouse, J. H.

    Simple theoretical results are obtained for the excitation of seismic waves by an indigenous seismic source in the case that the source volume is intersected by a structural discontinuity. In the long wavelength approximation the seismic radiation is identical to that of a point source placed on one side of the discontinuity or of a different point source placed on the other side. The moment tensors of these two equivalent sources are related by a specific linear transformation and may differ appreciably both in magnitude and geometry. Either of these sources could be obtained by linear inversion of seismic data but the physical interpretation is more complicated than in the usual case. A source which involved no volume change would, for example, yield an isotropic component if, during inversion, it were assumed to lie on the wrong side of the discontinuity. The problem of determining the true moment tensor of the source is indeterminate unless further assumptions are made about the stress glut distribution; one way to resolve this indeterminacy is to assume proportionality between the integrated stress glut on each side of the discontinuity.

  17. The Chandra Xbootes Survey - IV: Mid-Infrared and Submillimeter Counterparts

    NASA Astrophysics Data System (ADS)

    Brown, Arianna; Mitchell-Wynne, Ketron; Cooray, Asantha R.; Nayyeri, Hooshang

    2016-06-01

    In this work, we use a Bayesian technique to identify mid-IR and submillimeter counterparts for 3,213 X-ray point sources detected in the Chandra XBoötes Survey so as to characterize the relationship between black hole activity and star formation in the XBoötes region. The Chandra XBoötes Survey is a 5-ks X-ray survey of the 9.3 square degree Boötes Field of the NOAO Deep Wide-Field Survey (NDWFS), a survey imaged from the optical to the near-IR. We use a likelihood ratio analysis on Spitzer-IRAC data taken from The Spitzer Deep, Wide-Field Survey (SDWFS) to determine mid-IR counterparts, and a similar method on Herschel-SPIRE sources detected at 250 µm from The Herschel Multi-tiered Extragalactic Survey to determine the submillimeter counterparts. The likelihood ratio analysis (LRA) provides the probability that an IRAC or SPIRE point source is the true counterpart to a Chandra source. The analysis comprises three parts: the normalized magnitude distributions of counterparts and background sources, and the radial probability distribution of the separation distance between the IRAC or SPIRE source and the Chandra source. Many Chandra sources have multiple prospective counterparts in each band, so additional analysis is performed to determine the identification reliability of the candidates. Identification reliability values lie between 0 and 1, and sources with identification reliability values ≥0.8 are chosen to be the true counterparts. With these results, we will consider the statistical implications of the sample's redshifts, mid-IR and submillimeter luminosities, and star formation rates.
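
    The bookkeeping behind such a likelihood-ratio analysis can be illustrated schematically: each candidate receives LR = q(m) f(r) / n(m), and the reliability divides each LR by the sum over candidates plus a term for the chance that no counterpart was detected. The magnitude distributions, positional error, and completeness factor below are placeholders, not the values derived for SDWFS or SPIRE.

        import numpy as np

        # Schematic likelihood-ratio matching for one X-ray source with several
        # candidate counterparts.  q(m), n(m), sigma and Q are illustrative
        # placeholders, not the survey's measured distributions.
        def f_r(r, sigma):
            """Rayleigh radial probability density of the true-counterpart offset."""
            return r / sigma**2 * np.exp(-r**2 / (2.0 * sigma**2))

        def likelihood_ratio(m, r, q_of_m, n_of_m, sigma):
            return q_of_m(m) * f_r(r, sigma) / n_of_m(m)

        q_of_m = lambda m: np.exp(-0.5 * ((m - 18.0) / 1.5) ** 2)   # counterpart magnitudes
        n_of_m = lambda m: 10 ** (0.3 * (m - 15.0))                 # background density per arcsec^2

        cand_mag = np.array([17.8, 19.5, 21.0])   # candidate magnitudes
        cand_sep = np.array([0.8, 2.5, 4.0])      # separations in arcsec
        LR = likelihood_ratio(cand_mag, cand_sep, q_of_m, n_of_m, sigma=1.0)

        Q = 0.9                                   # assumed identification completeness
        reliability = LR / (LR.sum() + (1.0 - Q))
        print(dict(zip(["A", "B", "C"], np.round(reliability, 3))))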

  18. A Novel Field-Deployable Point-of-Care Diagnostic Test for Cutaneous Leishmaniasis

    DTIC Science & Technology

    2017-10-01

    PRINCIPAL INVESTIGATOR: LT. Danett K. Bishop. CONTRACTING ORGANIZATION: The Henry M. Jackson for the Advancement of Military Medicine, Bethesda...21702-5012. DISTRIBUTION STATEMENT: Approved for Public Release; Distribution Unlimited.

  19. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate the uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
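
    To make the "arbitrarily broken power law" ingredient concrete, the sketch below evaluates a two-segment broken power law for the differential source counts dN/dS and draws fluxes from it by rejection sampling; the break flux and slopes are arbitrary illustrative numbers, not the fitted foreground-model parameters.

        import numpy as np

        # Illustrative two-segment broken power law for dN/dS, plus rejection
        # sampling of fluxes from it.  Break flux and slopes are placeholders.
        S_break, alpha1, alpha2 = 1.0, -1.6, -2.5   # Jy, faint/bright slopes

        def dnds(S):
            S = np.asarray(S, dtype=float)
            faint = (S / S_break) ** alpha1
            bright = (S / S_break) ** alpha2
            return np.where(S < S_break, faint, bright)   # continuous at S_break

        rng = np.random.default_rng(1)
        S_min, S_max = 1e-3, 1e2
        n_draw = 20000

        # draw uniformly in log10(S); accept with probability proportional to
        # dN/dS * S, which is the density per unit log10(S) up to a constant
        logS = rng.uniform(np.log10(S_min), np.log10(S_max), size=n_draw)
        S = 10.0 ** logS
        weight = dnds(S) * S
        keep = rng.uniform(0, weight.max(), size=n_draw) < weight
        fluxes = S[keep]
        print(f"kept {fluxes.size} fluxes; median = {np.median(fluxes):.3g} Jy")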

  20. Local spectrum analysis of field propagation in an anisotropic medium. Part I. Time-harmonic fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    The phase-space beam summation is a general analytical framework for local analysis and modeling of radiation from extended source distributions. In this formulation, the field is expressed as a superposition of beam propagators that emanate from all points in the source domain and in all directions. In this Part I of a two-part investigation, the theory is extended to include propagation in anisotropic medium characterized by a generic wave-number profile for time-harmonic fields; in a companion paper [J. Opt. Soc. Am. A 22, 1208 (2005)], the theory is extended to time-dependent fields. The propagation characteristics of the beam propagators in a homogeneous anisotropic medium are considered. With use of Gaussian windows for the local processing of either ordinary or extraordinary electromagnetic field distributions, the field is represented by a phase-space spectral distribution in which the propagating elements are Gaussian beams that are formulated by using Gaussian plane-wave spectral distributions over the extended source plane. By applying saddle-point asymptotics, we extract the Gaussian beam phenomenology in the anisotropic environment. The resulting field is parameterized in terms of the spatial evolution of the beam curvature, beam width, etc., which are mapped to local geometrical properties of the generic wave-number profile. The general results are applied to the special case of uniaxial crystal, and it is found that the asymptotics for the Gaussian beam propagators, as well as the physical phenomenology attached, perform remarkably well.

  1. Information entropy to measure the spatial and temporal complexity of solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Li, Weiyao; Huang, Guanhua; Xiong, Yunwu

    2016-04-01

    The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and the porous medium make solute transport in the medium even more complicated. An appropriate method to describe this complexity is essential when studying solute transport and transformation in porous media. Information entropy can measure uncertainty and disorder; we therefore used information entropy theory to investigate the connection between entropy and the complexity of solute transport in heterogeneous porous media. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated from transition probabilities. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased with the complexity of the solute transport process. For the point source, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increased, which resulted in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line source was higher than that of the point source, and the entropy for continuous input was higher than that for instantaneous input. As the average length of the lithofacies increased, the continuity of the medium increased, the complexity of flow and solute transport weakened, and the corresponding information entropy decreased. The longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had a significant impact on the information entropy, and the entropy reflected changes in the solute distribution. Information entropy thus appears to be a useful tool for characterizing the spatial and temporal complexity of solute migration and provides a reference for future research.
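
    The entropy measure itself is simple to compute: normalize the concentration field so it sums to one and evaluate the Shannon entropy. The synthetic plumes below are placeholders standing in for model output; a more dispersed plume yields a larger entropy, which is the behaviour the study exploits.

        import numpy as np

        # Minimal sketch: Shannon entropy of a normalised solute concentration
        # field as a scalar measure of how spread out / disordered the plume is.
        def field_entropy(c, eps=1e-30):
            p = c / c.sum()                      # treat concentration as a probability field
            p = p[p > eps]
            return -(p * np.log(p)).sum()

        x, y = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 50, 100))

        compact = np.exp(-((x - 20)**2 / 20 + (y - 25)**2 / 20))     # early-time plume
        spread = np.exp(-((x - 60)**2 / 400 + (y - 25)**2 / 200))    # later, more dispersed plume

        print("entropy (compact plume):", round(field_entropy(compact), 3))
        print("entropy (spread plume): ", round(field_entropy(spread), 3))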

  2. Selective structural source identification

    NASA Astrophysics Data System (ADS)

    Totaro, Nicolas

    2018-04-01

    In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of a virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a planar structure of complex shape with spot-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution only exhibits the positions of the external applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution makes it possible to localize both the forces due to the coupling between the structure and the stiffeners through the welded points and those due to the external forces. This is why this approach is considered here as a selective structural source identification method. It is demonstrated that this approach falls clearly within the same framework as the Force Analysis Technique, the Virtual Fields Method and the 2D spatial Fourier transform. Even though it has much in common with these methods, it has some interesting particularities, such as its low sensitivity to measurement noise.

  3. Airborne methane remote measurements reveal heavy-tail flux distribution in Four Corners region

    PubMed Central

    Thorpe, Andrew K.; Thompson, David R.; Hulley, Glynn; Kort, Eric Adam; Vance, Nick; Borchardt, Jakob; Krings, Thomas; Gerilowski, Konstantin; Sweeney, Colm; Conley, Stephen; Bue, Brian D.; Aubrey, Andrew D.; Hook, Simon; Green, Robert O.

    2016-01-01

    Methane (CH4) impacts climate as the second strongest anthropogenic greenhouse gas and air quality by influencing tropospheric ozone levels. Space-based observations have identified the Four Corners region in the Southwest United States as an area of large CH4 enhancements. We conducted an airborne campaign in Four Corners during April 2015 with the next-generation Airborne Visible/Infrared Imaging Spectrometer (near-infrared) and Hyperspectral Thermal Emission Spectrometer (thermal infrared) imaging spectrometers to better understand the source of methane by measuring methane plumes at 1- to 3-m spatial resolution. Our analysis detected more than 250 individual methane plumes from fossil fuel harvesting, processing, and distributing infrastructures, spanning an emission range from the detection limit ∼ 2 kg/h to 5 kg/h through ∼ 5,000 kg/h. Observed sources include gas processing facilities, storage tanks, pipeline leaks, and well pads, as well as a coal mine venting shaft. Overall, plume enhancements and inferred fluxes follow a lognormal distribution, with the top 10% emitters contributing 49 to 66% to the inferred total point source flux of 0.23 Tg/y to 0.39 Tg/y. With the observed confirmation of a lognormal emission distribution, this airborne observing strategy and its ability to locate previously unknown point sources in real time provides an efficient and effective method to identify and mitigate major emissions contributors over a wide geographic area. With improved instrumentation, this capability scales to spaceborne applications [Thompson DR, et al. (2016) Geophys Res Lett 43(12):6571–6578]. Further illustration of this potential is demonstrated with two detected, confirmed, and repaired pipeline leaks during the campaign. PMID:27528660
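
    The heavy-tail arithmetic quoted above is easy to reproduce for a generic lognormal sample: sort the inferred fluxes and ask what fraction of the total the top 10% of sources carries. The lognormal parameters below are arbitrary and chosen only for illustration, not fitted to the Four Corners data.

        import numpy as np

        # Quick check of the heavy-tail arithmetic for a lognormal emission
        # distribution: a modest sigma already concentrates roughly half of the
        # total flux in the top 10% of sources.
        rng = np.random.default_rng(42)
        fluxes = rng.lognormal(mean=3.0, sigma=1.5, size=250)   # kg/h, illustrative

        fluxes.sort()
        top10 = fluxes[int(0.9 * fluxes.size):]
        share = top10.sum() / fluxes.sum()
        print(f"top 10% of sources carry {share:.0%} of the total flux")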

  4. The feasibility of effluent trading in the energy industries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veil, J.A.

    1997-05-01

    In January 1996, the U.S. Environmental Protection Agency (EPA) released a policy statement endorsing effluent trading in watersheds, hoping to spur additional interest in the subject. The policy describes five types of effluent trades - point source/point source, point source/nonpoint source, pretreatment, intraplant, and nonpoint source/nonpoint source. This report evaluates the feasibility of effluent trading for facilities in the oil and gas industry (exploration and production, refining, and distribution and marketing segments), electric power industry, and the coal industry (mines and preparation plants). Nonpoint source/nonpoint source trades are not considered since the energy industry facilities evaluated here are all point sources. EPA has administered emission trading programs in its air quality program for many years. Programs for offsets, bubbles, banking, and netting are supported by federal regulations, and the 1990 Clean Air Act (CAA) amendments provide a statutory basis for trading programs to control ozone and acid rain. Different programs have had varying degrees of success, but few have come close to meeting their expectations. Few trading programs have been established under the Clean Water Act (CWA). One intraplant trading program was established by EPA in its effluent limitation guidelines (ELGs) for the iron and steel industry. The other existing effluent trading programs were established by state or local governments and have had minimal success.

  5. Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.

    2009-04-01

    Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experiences with LDR techniques can be used in guiding optimal use of HDR techniques. The aim of this study was to optimize dose distribution for Gammamed plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length. Thus 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameters and were chosen for each cylinder so that four points were distributed evenly in the curvature portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. The iterative optimization routine was utilized for all optimizations. The effects of various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. For both non-apex and apex models of vaginal cylinders, doses for the apex point and the three dome points were higher for the apex model compared with the non-apex model. Mean doses to the optimization points for both cylinder models and all cylinder diameters were 6 Gy, matching the prescription dose of 6 Gy. The iterative optimization routine resulted in the highest dose to the apex point and dome points. The mean dose for the optimization points was 6.01 Gy for iterative optimization, much higher than the 5.74 Gy for the geometric and equal times routines. A step size of 1 cm gave the highest dose to the apex point, and this step size was superior in terms of mean dose to the optimization points. Selection of dose optimization points for the derivation of optimized dose distributions for vaginal cylinders affects the dose distributions.

  6. Investigating the generation of Love waves in secondary microseisms using 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Wenk, Stefan; Hadziioannou, Celine; Pelties, Christian; Igel, Heiner

    2014-05-01

    Longuet-Higgins (1950) proposed that secondary microseismic noise can be attributed to oceanic disturbances by surface gravity wave interference causing non-linear, second-order pressure perturbations at the ocean bottom. As a first approximation, this source mechanism can be considered as a force acting normal to the ocean bottom. In an isotropic, layered, elastic Earth model with plane interfaces, vertical forces generate P-SV motions in the vertical plane of source and receiver. In turn, only Rayleigh waves are excited at the free surface. However, several authors report on significant Love wave contributions in the secondary microseismic frequency band of real data measurements. The reason is still insufficiently analysed and several hypotheses are under debate: - The source mechanism has the strongest influence on the excitation of shear motions, whereas the source direction dominates the effect of Love wave generation in the case of point force sources. Darbyshire and Okeke (1969) proposed the topographic coupling effect of pressure loads acting on a sloping sea-floor to generate the shear tractions required for Love wave excitation. - Rayleigh waves can be converted into Love waves by scattering. Therefore, geometric scattering at topographic features or internal scattering by heterogeneous material distributions can cause Love wave generation. - Oceanic disturbances act on large regions of the ocean bottom, and extended sources have to be considered. In combination with topographic coupling and internal scattering, the extent of the source region and the timing of an extended source should affect Love wave excitation. We investigate the contribution of different source mechanisms and scattering effects to Love-to-Rayleigh wave energy ratios by means of 3D numerical simulations. In particular, we estimate the amount of Love wave energy generated by point and extended sources acting on the free surface. Simulated point forces are modified in their incident angle, whereas extended sources are adapted in their spatial extent, magnitude and timing. Further, the effect of variations in the correlation length and perturbation magnitude of a random free surface topography, as well as of an internal random material distribution, is studied.

  7. Searching for minimum in dependence of squared speed-of-sound on collision energy

    DOE PAGES

    Liu, Fu -Hu; Gao, Li -Na; Lacey, Roy A.

    2016-01-01

    Experimental results for the rapidity distributions of negatively charged pions produced in proton-proton (p-p) and beryllium-beryllium (Be-Be) collisions at different beam momenta, measured by the NA61/SHINE Collaboration at the Super Proton Synchrotron (SPS), are described by a revised (three-source) Landau hydrodynamic model. The squared speed-of-sound parameter c_s^2 is then extracted from the width of the rapidity distribution. There is a local minimum (knee point), indicating a softest point in the equation of state (EoS), at about 40A GeV/c (or 8.8 GeV) in the c_s^2 excitation function (the dependence of c_s^2 on incident beam momentum or center-of-mass energy). This knee point should be relevant to the search for the onset of quark deconfinement and the critical point of the quark-gluon plasma (QGP) phase transition.
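
    For orientation, in the original Landau hydrodynamic picture the width of the rapidity distribution is commonly related to the speed of sound through the relation below; it is quoted here only as background, since the revised three-source model used in the study modifies the details of this extraction.

        \sigma_y^2 = \frac{8}{3}\,\frac{c_s^2}{1 - c_s^4}\,\ln\!\left(\frac{\sqrt{s_{NN}}}{2\,m_p}\right)

    Inverting this relation for the measured width at each beam energy gives the c_s^2 values whose excitation function is then scanned for the knee point.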

  8. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling are brought into an integrated modeling environment. This allows the spatial variation in source distribution and meteorological conditions to be analyzed quantitatively and in greater detail. The developed modeling approach was applied to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
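
    The point-source building block that such a multi-source model sums over all emitters is the standard Gaussian plume. The sketch below uses a crude linear parameterisation of the dispersion coefficients and made-up source parameters; it is not the GMSMB code, only the elementary formula it builds on.

        import numpy as np

        # Standard Gaussian plume contribution of a single point source; a
        # multi-source model sums this over all emitters.  The dispersion
        # coefficients are crudely parameterised for illustration only.
        def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
            """q: emission rate [g/s]; u: wind speed [m/s]; h: stack height [m];
            x downwind, y crosswind, z height [m]; sigma_y = a*x, sigma_z = b*x."""
            sig_y, sig_z = a * x, b * x
            lateral = np.exp(-y**2 / (2 * sig_y**2))
            vertical = (np.exp(-(z - h)**2 / (2 * sig_z**2)) +
                        np.exp(-(z + h)**2 / (2 * sig_z**2)))     # ground reflection
            return q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

        # ground-level concentration 2 km downwind, on the plume centreline
        print(gaussian_plume(q=50.0, u=4.0, x=2000.0, y=0.0, z=0.0, h=40.0), "g/m^3")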

  9. Effect of transverse vibrations of fissile nuclei on the angular and spin distributions of low-energy fission fragments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunakov, V. E.; Kadmensky, S. G., E-mail: kadmensky@phys.vsu.ru; Lyubashevsky, D. E.

    2016-05-15

    It is shown that A. Bohr's classic theory of angular distributions of fragments originating from low-energy fission should be supplemented with quantum corrections based on the involvement of a superposition of a very large number of angular momenta L_m in the description of the relative motion of fragments flying apart along the straight line coincident with the symmetry axis. It is revealed that quantum zero-point wriggling-type vibrations of the fissile system in the vicinity of its scission point are a source of these angular momenta and of high fragment spins observed experimentally.

  10. A COMPUTATIONAL FRAMEWORK FOR EVALUATION OF NPS MANAGEMENT SCENARIOS: ROLE OF PARAMETER UNCERTAINTY

    EPA Science Inventory

    Utility of complex distributed-parameter watershed models for evaluation of the effectiveness of non-point source sediment and nutrient abatement scenarios such as Best Management Practices (BMPs) often follows the traditional {calibrate ---> validate ---> predict} procedure. Des...

  11. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when the a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
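
    The two ingredients named above, a cheap partial top-k selection and a knee locator on the sorted curve, can be illustrated as follows. This is only a toy: the knee is found here by the maximum distance-to-chord rule on a single top-k slice, without the cascading expected-cost optimisation the paper actually proposes.

        import numpy as np

        # Toy illustration: O(n) partial selection of the top-k values, then the
        # knee of the descending curve as the point farthest from the chord
        # joining its endpoints.
        def top_k_desc(values, k):
            part = np.partition(values, len(values) - k)[-k:]   # partial selection
            return np.sort(part)[::-1]

        def knee_index(curve):
            n = len(curve)
            x = np.arange(n, dtype=float)
            x0, y0, x1, y1 = 0.0, curve[0], n - 1.0, curve[-1]
            # perpendicular distance of each point from the endpoint chord
            num = np.abs((y1 - y0) * x - (x1 - x0) * curve + x1 * y0 - y1 * x0)
            return int(np.argmax(num / np.hypot(y1 - y0, x1 - x0)))

        rng = np.random.default_rng(3)
        scores = np.concatenate([rng.uniform(50, 100, 20), rng.uniform(0, 5, 980)])
        rng.shuffle(scores)

        top = top_k_desc(scores, 50)
        k = knee_index(top)
        print("knee at rank", k, "value", round(top[k], 2))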

  12. Potency backprojection

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large and mega-earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical-travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth-resolution of projected images and mitigate the dummy imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed wavefield was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the waveforms at high and low frequencies with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors in the conventional formulations. For the BP method, the observed waveform is normalized with the maximum P-phase amplitude of the corresponding GF. For the HBP method, we normalize the cross-correlation function with the squared sum of the GF. The normalized waveforms or the cross-correlation functions are then stacked for all the stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations using synthetic waveforms and the real data of the Mw 8.3 2015 Illapel, Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used for exploring the rupture properties of earthquakes.
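
    The core shift-and-stack operation that both BP variants share can be written in a few lines. The traces, travel times, and the optional normalisation factor below are synthetic placeholders; in the potency formulation described above that factor would be the maximum P-phase amplitude of the corresponding Green's function.

        import numpy as np

        # Bare-bones shift-and-stack backprojection for one candidate source point:
        # each station's trace is advanced by its theoretical travel time from that
        # point, optionally divided by a normalisation factor, and the traces are
        # summed.  All inputs are synthetic.
        def backproject(waveforms, travel_times, dt, norm=None):
            n_sta, n_t = waveforms.shape
            if norm is None:
                norm = np.ones(n_sta)
            stack = np.zeros(n_t)
            for w, t, g in zip(waveforms, travel_times, norm):
                shift = int(round(t / dt))
                stack[:n_t - shift] += w[shift:] / g      # align to common origin time
            return stack / n_sta

        dt = 0.1
        t_axis = np.arange(0, 60, dt)
        travel_times = np.array([12.0, 18.5, 25.0])
        waveforms = np.array([np.exp(-0.5 * ((t_axis - (tt + 5.0)) / 1.5) ** 2)
                              for tt in travel_times])    # pulses radiated 5 s after origin

        stack = backproject(waveforms, travel_times, dt)
        print("stack peaks at t =", t_axis[np.argmax(stack)], "s after origin")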

  13. Isotopes, Inventories and Seasonality: Unraveling Methane Source Distribution in the Complex Landscapes of the United Kingdom.

    NASA Astrophysics Data System (ADS)

    Lowry, D.; Fisher, R. E.; Zazzeri, G.; Lanoisellé, M.; France, J.; Allen, G.; Nisbet, E. G.

    2017-12-01

    Unlike the big open landscapes of many continents with large area sources dominated by one particular methane emission type that can be isotopically characterized by flight measurements and sampling, the complex patchwork of urban, fossil and agricultural methane sources across NW Europe requires detailed ground surveys for characterization (Zazzeri et al., 2017). Here we outline the findings from multiple seasonal urban and rural measurement campaigns in the United Kingdom. These surveys aim to: 1) Assess source distribution and baseline in regions of planned fracking, and relate to on-site continuous baseline climatology. 2) Characterize spatial and seasonal differences in the isotopic signatures of the UNFCCC source categories, and 3) Assess the spatial validity of the 1 x 1 km UK inventory for large continuous emitters, proposed point sources, and seasonal / ephemeral emissions. The UK inventory suggests that 90% of methane emissions are from three source categories: ruminants, landfill and gas distribution. Bag sampling and GC-IRMS δ13C analysis shows that landfill gives a constant signature of -57 ±3 ‰ throughout the year. Fugitive gas emissions are consistent regionally depending on the North Sea supply regions feeding the network (-41 ± 2 ‰ in N England, -37 ± 2 ‰ in SE England). Ruminant, mostly cattle, emissions are far more complex as these spend winters in barns and summers in fields, but are essentially a mix of 2 end members, breath at -68 ±3 ‰ and manure at -51 ±3 ‰, resulting in broad summer field emission plumes of -64 ‰ and point winter barn emission plumes of -58 ‰. The inventory correctly locates emission hotspots from landfill, larger sewage treatment plants and gas compressor stations, giving a broad overview of emission distribution for regional model validation. Mobile surveys are adding an extra layer of detail to this which, combined with isotopic characterization, has identified the spatial distribution of gas pipe leaks, some persisting since 2013 (Zazzeri et al., 2015), and the seasonality and spatial variability of livestock emissions. Importantly, existing significant gas leaks close to proposed fracking sites have been characterized so that any emissions to the atmosphere with a different isotopic signature will be detected. Zazzeri, G., Atm. Env. 110, 151-162 (2015); Zazzeri, G., Sci. Rep. 7, 4854 (2017).

  14. Transmission system for distribution of video over long-haul optical point-to-point links using a microwave photonic filter in the frequency range of 0.01-10 GHz

    NASA Astrophysics Data System (ADS)

    Zaldívar Huerta, Ignacio E.; Pérez Montaña, Diego F.; Nava, Pablo Hernández; Juárez, Alejandro García; Asomoza, Jorge Rodríguez; Leal Cruz, Ana L.

    2013-12-01

    We experimentally demonstrate the use of an electro-optical transmission system for distribution of video over long-haul optical point-to-point links using a microwave photonic filter in the frequency range of 0.01-10 GHz. The frequency response of the microwave photonic filter consists of four band-pass windows centered at frequencies that can be tailored as a function of the free spectral range of the optical source, the chromatic dispersion parameter of the optical fiber used, and the length of the optical link. In particular, the filtering effect is obtained by the interaction of an externally modulated multimode laser diode emitting at 1.5 μm with a length of dispersive optical fiber. Filtered microwave signals are used as electrical carriers to transmit the TV signal over long-haul point-to-point optical links. Transmission of the TV signal coded on the microwave band-pass windows located at 4.62, 6.86, 4.0 and 6.0 GHz is achieved over optical links of 25.25 km and 28.25 km, respectively. Practical applications for this approach lie in the field of the FTTH access network for distribution of services such as video, voice, and data.

  15. Radiative flux from a planar multiple point source within a cylindrical enclosure reaching a coaxial circular plane

    NASA Astrophysics Data System (ADS)

    Tryka, Stanislaw

    2007-04-01

    A general formula and some special integral formulas are presented for calculating radiative fluxes incident on a circular plane from a planar multiple point source within a coaxial cylindrical enclosure perpendicular to the source. These formulas were obtained for radiation propagating in a homogeneous isotropic medium, assuming that the lateral surface of the enclosure completely absorbs the incident radiation. Exemplary results were computed numerically and illustrated with three-dimensional surface plots. The formulas presented are suitable for determining fluxes of radiation reaching planar circular detectors, collectors or other planar circular elements from systems of laser diodes, light emitting diodes and fiber lamps within cylindrical enclosures, as well as from small biological emitters (bacteria, fungi, yeast, etc.) distributed on planar bases of open nontransparent cylindrical containers.
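
    The elementary ingredient behind such formulas is the inverse-square, cosine-weighted irradiance of a point emitter, summed over all emitters in the source plane. The sketch below does only that direct summation with made-up geometry; it ignores the enclosure wall (which, being perfectly absorbing, would only remove rays) and is not the paper's closed-form integral.

        import numpy as np

        # Direct irradiance on a coaxial circular detector from point emitters in a
        # parallel plane, summing I*cos(theta)/r^2 over sources.  Geometry is made up.
        def direct_flux(points_xy, intensity, detector_xy, separation):
            """points_xy: (n,2) source positions; detector_xy: (m,2) sample points on
            the detector plane; separation: axial distance between the two planes."""
            d = detector_xy[:, None, :] - points_xy[None, :, :]        # (m, n, 2)
            r2 = (d ** 2).sum(axis=-1) + separation ** 2
            cos_theta = separation / np.sqrt(r2)
            return (intensity * cos_theta / r2).sum(axis=1)

        # 7 emitters on a unit-radius ring, detector sampled along a radius 5 cm away
        ang = np.linspace(0, 2 * np.pi, 7, endpoint=False)
        sources = np.column_stack([np.cos(ang), np.sin(ang)])
        radii = np.linspace(0, 3, 4)
        detector = np.column_stack([radii, np.zeros_like(radii)])
        print(direct_flux(sources, intensity=1.0, detector_xy=detector, separation=5.0))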

  16. Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns

    NASA Astrophysics Data System (ADS)

    Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar

    2014-05-01

    We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), rupture velocity (1). A node based second order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best incorporates the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
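
    A toy version of such a smooth slip patch is easy to set up: define an elliptical patch, place the slip maximum at the nucleation point, and taper it toward the border. The sketch below uses a plain (unskewed) Gaussian taper and arbitrary dimensions for brevity, so it illustrates the idea rather than the 13-parameter skewed-Gaussian DSM model itself.

        import numpy as np

        # Toy slip distribution on an elliptical rupture patch: Gaussian taper
        # centred on the nucleation point, set to zero outside the ellipse.
        # Dimensions and peak slip are arbitrary placeholders.
        def elliptical_slip(nx, ny, a, b, peak_slip, x0=0.0, y0=0.0):
            x = np.linspace(-a, a, nx)
            y = np.linspace(-b, b, ny)
            X, Y = np.meshgrid(x, y)
            rho = np.sqrt((X / a) ** 2 + (Y / b) ** 2)          # elliptical coordinate
            slip = peak_slip * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (0.15 * a * b))
            slip[rho > 1.0] = 0.0                               # zero outside the patch
            return slip

        slip = elliptical_slip(nx=201, ny=101, a=20e3, b=10e3, peak_slip=2.5, x0=-5e3)
        print("peak slip %.2f m; cells with nonzero slip: %d" %
              (slip.max(), int((slip > 0).sum())))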

  17. Investigating line- versus point-laser excitation for three-dimensional fluorescence imaging and tomography employing a trimodal imaging system

    NASA Astrophysics Data System (ADS)

    Cao, Liji; Peter, Jörg

    2013-06-01

    The adoption of axially oriented line illumination patterns for fluorescence excitation in small animals for fluorescence surface imaging (FSI) and fluorescence optical tomography (FOT) is being investigated. A trimodal single-photon-emission-computed-tomography/computed-tomography/optical-tomography (SPECT-CT-OT) small animal imaging system is being modified for employment of point- and line-laser excitation sources. These sources can be arbitrarily positioned around the imaged object. The line source is set to illuminate the object along its entire axial direction. Comparative evaluation of point and line illumination patterns for FSI and FOT is provided involving phantom as well as mouse data. Given the trimodal setup, CT data are used to guide the optical approaches by providing boundary information. Furthermore, FOT results are also being compared to SPECT. Results show that line-laser illumination yields a larger axial field of view (FOV) in FSI mode, hence faster data acquisition, and practically acceptable FOT reconstruction throughout the whole animal. Also, superimposed SPECT and FOT data provide additional information on similarities as well as differences in the distribution and uptake of both probe types. Fused CT data enhance further the anatomical localization of the tracer distribution in vivo. The feasibility of line-laser excitation for three-dimensional fluorescence imaging and tomography is demonstrated for initiating further research, however, not with the intention to replace one by the other.

  18. Astrophysical signatures of leptonium

    NASA Astrophysics Data System (ADS)

    Ellis, Simon C.; Bland-Hawthorn, Joss

    2018-01-01

    More than 10^43 positrons annihilate every second in the centre of our Galaxy yet, despite four decades of observations, their origin is still unknown. Many candidates have been proposed, such as supernovae and low mass X-ray binaries. However, these models are difficult to reconcile with the distribution of positrons, which are highly concentrated in the Galactic bulge, and therefore require specific propagation of the positrons through the interstellar medium. Alternative sources include dark matter decay, or the supermassive black hole, both of which would have a naturally high bulge-to-disc ratio. The chief difficulty in reconciling models with the observations is the intrinsically poor angular resolution of gamma-ray observations, which cannot resolve point sources. Essentially all of the positrons annihilate via the formation of positronium. This gives rise to the possibility of observing recombination lines of positronium emitted before the atom annihilates. These emission lines would be in the UV and the NIR, giving an increase in angular resolution of a factor of 10^4 compared to gamma ray observations, and allowing the discrimination between point sources and truly diffuse emission. Analogously to the formation of positronium, it is possible to form atoms of true muonium and true tauonium. Since muons and tauons are intrinsically unstable, the formation of such leptonium atoms will be localised to their places of origin. Thus observations of true muonium or true tauonium can provide another way to distinguish between truly diffuse sources such as dark matter decay, and an unresolved distribution of point sources. Contribution to the Topical Issue "Low Energy Positron and Electron Interactions", edited by James Sullivan, Ron White, Michael Bromley, Ilya Fabrikant and David Cassidy.

  19. Distribution and sources of polyfluoroalkyl substances (PFAS) in the River Rhine watershed.

    PubMed

    Möller, Axel; Ahrens, Lutz; Surm, Renate; Westerveld, Joke; van der Wielen, Frans; Ebinghaus, Ralf; de Voogt, Pim

    2010-10-01

    The concentration profile of 40 polyfluoroalkyl substances (PFAS) in surface water along the River Rhine watershed from Lake Constance to the North Sea was investigated. The aim of the study was to investigate the influence of point as well as diffuse sources, to estimate fluxes of PFAS into the North Sea and to identify replacement compounds of perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA). In addition, an interlaboratory comparison of the method performance was conducted. The PFAS pattern was dominated by perfluorobutane sulfonate (PFBS) and perfluorobutanoic acid (PFBA) with concentrations up to 181 ng/L and 335 ng/L, respectively, which originated from industrial point sources. Fluxes of ΣPFAS were estimated to be approximately 6 tonnes/year, which is much higher than previous estimates. Both the River Rhine and the River Scheldt seem to act as important sources of PFAS into the North Sea.

  20. Modeling unsteady sound refraction by coherent structures in a high-speed jet

    NASA Astrophysics Data System (ADS)

    Kan, Pinqing; Lewalle, Jacques

    2011-11-01

    We construct a visual model for the unsteady refraction of sound waves from point sources in a Ma = 0.6 jet. The mass and inviscid momentum equations give an equation governing acoustic fluctuations, including anisotropic propagation, attenuation and sources; differences with Lighthill's equation will be discussed. On this basis, the theory of characteristics gives canonical equations for the acoustic paths from any source into the far field. We model a steady mean flow in the near-jet region including the potential core and the mixing region downstream of its collapse, and model the convection of coherent structures as traveling wave perturbations of this mean flow. For a regular distribution of point sources in this region, we present a visual rendition of fluctuating distortion, lensing and deaf spots from the viewpoint of a far-field observer. Supported in part by AFOSR Grant FA-9550-10-1-0536 and by a Syracuse University Graduate Fellowship.

  1. Ground deposition of liquid droplets released from a point source in the atmospheric surface layer

    NASA Astrophysics Data System (ADS)

    Panneton, Bernard

    1989-01-01

    A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source at 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of a few models. A new rotating droplet generator is described. Droplets are produced by the forced breakup of capillary liquid jets and droplet coalescence is inhibited by the rotational motion of the spray head. The two dimensional deposition patterns are presented in the form of plots of contours of constant density, normalized arcwise distributions and crosswind integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill's technique. Models of the crosswind integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al are evaluated. The results indicate that the Wilson et al random walk model is adequate for predicting the ground deposition of the 150 micron droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models were acceptable in light of the experimental data.

  2. Sources of endocrine-disrupting compounds in North Carolina waterways: a geographic information systems approach

    USGS Publications Warehouse

    Sackett, Dana K.; Pow, Crystal Lee; Rubino, Matthew J.; Aday, D.D.; Cope, W. Gregory; Kullman, Seth W.; Rice, J.A.; Kwak, Thomas J.; Law, L.M.

    2015-01-01

    The presence of endocrine-disrupting compounds (EDCs), particularly estrogenic compounds, in the environment has drawn public attention across the globe, yet a clear understanding of the extent and distribution of estrogenic EDCs in surface waters and their relationship to potential sources is lacking. The objective of the present study was to identify and examine the potential input of estrogenic EDC sources in North Carolina water bodies using a geographic information system (GIS) mapping and analysis approach. Existing data from state and federal agencies were used to create point and nonpoint source maps depicting the cumulative contribution of potential sources of estrogenic EDCs to North Carolina surface waters. Water was collected from 33 sites (12 associated with potential point sources, 12 associated with potential nonpoint sources, and 9 reference), to validate the predictive results of the GIS analysis. Estrogenicity (measured as 17β-estradiol equivalence) ranged from 0.06 ng/L to 56.9 ng/L. However, the majority of sites (88%) had water 17β-estradiol concentrations below 1 ng/L. Sites associated with point and nonpoint sources had significantly higher 17β-estradiol levels than reference sites. The results suggested that water 17β-estradiol was reflective of GIS predictions, confirming the relevance of landscape-level influences on water quality and validating the GIS approach to characterize such relationships.

  3. Sources of endocrine-disrupting compounds in North Carolina waterways: a geographic information systems approach.

    PubMed

    Sackett, Dana K; Pow, Crystal Lee; Rubino, Matthew J; Aday, D Derek; Cope, W Gregory; Kullman, Seth; Rice, James A; Kwak, Thomas J; Law, Mac

    2015-02-01

    The presence of endocrine-disrupting compounds (EDCs), particularly estrogenic compounds, in the environment has drawn public attention across the globe, yet a clear understanding of the extent and distribution of estrogenic EDCs in surface waters and their relationship to potential sources is lacking. The objective of the present study was to identify and examine the potential input of estrogenic EDC sources in North Carolina water bodies using a geographic information system (GIS) mapping and analysis approach. Existing data from state and federal agencies were used to create point and nonpoint source maps depicting the cumulative contribution of potential sources of estrogenic EDCs to North Carolina surface waters. Water was collected from 33 sites (12 associated with potential point sources, 12 associated with potential nonpoint sources, and 9 reference), to validate the predictive results of the GIS analysis. Estrogenicity (measured as 17β-estradiol equivalence) ranged from 0.06 ng/L to 56.9 ng/L. However, the majority of sites (88%) had water 17β-estradiol concentrations below 1 ng/L. Sites associated with point and nonpoint sources had significantly higher 17β-estradiol levels than reference sites. The results suggested that water 17β-estradiol was reflective of GIS predictions, confirming the relevance of landscape-level influences on water quality and validating the GIS approach to characterize such relationships. © 2014 SETAC.

  4. Pointing error analysis of Risley-prism-based beam steering system.

    PubMed

    Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng

    2014-09-01

    Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and to evaluate the allowances of the error sources for a given pointing accuracy. It is found that assembly errors of the second prism result in larger pointing errors than those of the first prism. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. These conclusions provide a theoretical foundation for practical work.
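
    The elementary ray-tracing step behind such an error budget is the vector form of Snell's law. The sketch below refracts a ray through a single flat element and perturbs one surface normal by a small assumed tilt to obtain a pointing error; the refractive index, tilt value, and geometry are placeholders, not a full two-prism Risley model.

        import numpy as np

        # Vector form of Snell's law and a one-surface tilt perturbation.
        def refract(i, n, mu):
            """i: unit incident ray; n: unit surface normal with i.dot(n) < 0;
            mu = n_incident / n_transmitted.  Returns the unit refracted ray."""
            c1 = -np.dot(n, i)
            s2t = mu**2 * (1.0 - c1**2)
            if s2t > 1.0:
                raise ValueError("total internal reflection")
            c2 = np.sqrt(1.0 - s2t)
            return mu * i + (mu * c1 - c2) * n

        i = np.array([0.0, 0.0, 1.0])                    # ray along the optical axis
        tilt = np.deg2rad(0.05)                          # assumed 0.05 deg surface tilt error
        n_exit = np.array([np.sin(tilt), 0.0, -np.cos(tilt)])

        # flat entry face + flat exit face (nominal), then the same with a tilted exit face
        nominal = refract(refract(i, np.array([0., 0., -1.]), 1.0 / 1.517),
                          np.array([0., 0., -1.]), 1.517)
        perturbed = refract(refract(i, np.array([0., 0., -1.]), 1.0 / 1.517),
                            n_exit, 1.517)
        print("pointing error [urad]:",
              1e6 * np.arccos(np.clip(np.dot(nominal, perturbed), -1, 1)))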

  5. Watershed Management Tool for Selection and Spacial Allocation of Non-Point Source Pollution Control Practices

    EPA Science Inventory

    Distributed-parameter watershed models are often utilized for evaluating the effectiveness of sediment and nutrient abatement strategies through the traditional {calibrate→ validate→ predict} approach. The applicability of the method is limited due to modeling approximations. In ...

  6. [Distribution Characteristics and Source of Fluoride in Groundwater in Lower Plain Area of North China Plain: A Case Study in Nanpi County].

    PubMed

    Kong, Xiao-le; Wang, Shi-qin; Zhao, Huan; Yuan, Rui-qiang

    2015-11-01

    There is an obvious regional contradiction between water resources and agricultural production in the lower plain area of North China; however, excessive fluoride in deep groundwater further limits the use of regional water resources. In order to understand the spatial distribution characteristics and source of F(-) in groundwater, a study was carried out in Nanpi County using field survey and sampling, hydrogeochemical analysis and stable isotope methods. The results showed that the center of low fluoride concentrations in shallow groundwater was located around the reservoir of Dalang Lake, and the centers of high fluoride concentrations were located in the southeast and southwest of the study area. The region with high fluoride concentration was consistent with the over-exploitation region of deep groundwater. Point source pollution from subsurface drainage and non-point source irrigation with deep groundwater in some regions were the main causes of the increasing F(-) concentrations of shallow groundwater at some of the sampling sites. Rock deposition and hydrogeological conditions were the main causes of the high F(-) concentrations (1.00 mg x L(-1), the threshold of the drinking water quality standard in China) in deep groundwater. F(-) released from clay minerals into the water increased the F(-) concentrations in deep groundwater because of over-exploitation. With the increasing exploitation and utilization of brackish shallow groundwater and the compression and restriction of deep groundwater exploitation, the water environment in the middle and east lower plain area of North China will undergo significant change, and it is important to identify the distribution and source of F(-) in surface water and groundwater for the reasonable development and use of water resources in the future.

  7. Comparative Studies for the Sodium and Potassium Atmospheres of the Moon and Mercury

    NASA Technical Reports Server (NTRS)

    Smyth, William H.

    1999-01-01

    A summary discussion of recent sodium and potassium observations for the atmospheres of the Moon and Mercury is presented with primary emphasis on new full-disk images that have become available for sodium. For the sodium atmosphere, image observations for both the Moon and Mercury are fitted with model calculations (1) that have the same source speed distribution, one recently measured for electron-stimulated desorption and thought to apply equally well to photon-stimulated desorption, (2) that have similar average surface sodium fluxes, about 2.8 x 10(exp 5) to 8.9 x 10(exp 5) atoms cm(exp -2)s(exp -1) for the Moon and approximately 3.5 x 10(exp 5) to 1.4 x 10(exp 6) atoms cm(exp -2)s(exp -1) for Mercury, but (3) that have very different distributions for the source surface area. For the Moon, a sunlit hemispherical surface source of between approximately 5.3 x 10(exp 22) to 1.2 x 10(exp 23) atoms/s is required with a spatial dependence at least as sharp as the square of the cosine of the solar zenith angle. For Mercury, a time-dependent source that varies from 1.5 x 10(exp 22) to 5.8 x 10(exp 22) atoms/s is required which is confined to a small surface area located at, but asymmetrically distributed about, the subsolar point. The nature of the Mercury source suggests that the planetary magnetopause near the subsolar point acts as a time-varying and partially protective shield through which charged particles may pass to interact with and liberate gas from the planetary surface. Suggested directions for future research activities are discussed.

  8. Point source sulphur dioxide peaks and hospital presentations for asthma.

    PubMed

    Donoghue, A M; Thomas, M

    1999-04-01

    The aim was to examine the effect of brief exposures to sulphur dioxide (SO2) (within the range 0-8700 micrograms/m3) emanating from two point sources in a remote rural city of 25,000 people on hospital presentations for asthma. A time series analysis of SO2 concentrations and hospital presentations for asthma was undertaken at Mount Isa, where SO2 is released into the atmosphere by a copper smelter and a lead smelter. The study examined 5-minute block mean SO2 concentrations and daily hospital presentations for asthma, wheeze, or shortness of breath. Generalised linear models and generalised additive models based on a Poisson distribution were applied. There was no evidence of any positive relation between peak SO2 concentrations and hospital presentations or admissions for asthma, wheeze, or shortness of breath. Brief exposures to high concentrations of SO2 emanating from point sources at Mount Isa do not cause sufficiently serious symptoms in asthmatic people to require presentation to hospital.

  9. Organic matter in sediment layers of an acidic mining lake as assessed by lipid analysis. Part II: Neutral lipids.

    PubMed

    Poerschmann, Juergen; Koschorreck, Matthias; Górecki, Tadeusz

    2017-02-01

    Natural neutralization of acidic mining lakes is often limited by organic matter. Knowledge of the sources and degradability of organic matter is crucial for understanding alkalinity generation in these lakes. Sediments collected at different depths (surface sediment layer from 0 to 1 cm and deep sediment layer from 4 to 5 cm) from an acidic mining lake were studied in order to characterize sedimentary organic matter based on neutral signature markers. Samples were exhaustively extracted, subjected to pre-chromatographic derivatizations and analyzed by GC/MS. Herein, molecular distributions of diagnostic alkanes/alkenes, terpenes/terpenoids, polycyclic aromatic hydrocarbons, aliphatic alcohols and ketones, sterols, and hopanes/hopanoids were addressed. Characterization of the contribution of natural vs. anthropogenic sources to the sedimentary organic matter in these extreme environments was then possible based on these distributions. With the exception of polycyclic aromatic hydrocarbons, combined concentrations across all marker classes proved higher in the surface sediment layer as compared to those in the deep sediment layer. Alkane and aliphatic alcohol distributions pointed to predominantly allochthonous over autochthonous contribution to sedimentary organic matter. Sterol patterns were dominated by phytosterols of terrestrial plants including stigmasterol and β-sitosterol. Hopanoid markers with the ββ-biohopanoid "biological" configuration were more abundant in the surface sediment layer, which pointed to higher bacterial activity. The pattern of polycyclic aromatic hydrocarbons pointed to prevailing anthropogenic input. Pyrolytic markers were likely due to atmospheric deposition from a nearby former coal combustion facility. The combined analysis of the array of biomarkers provided new insights into the sources and transformations of organic matter in lake sediments. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. The Wave Principle Of The Distribution Of Substance In Solar System

    NASA Astrophysics Data System (ADS)

    Smirnov, V.

    The idea of a wave nature for the distribution of substance in the Solar system goes back to J. Kepler's fundamental book "Welt Harmonik", in which musical proportions are united with the geometrical construction of Plato's inscribed and circumscribed figures. Defining the planets' orbits from the constructed "Plato figures" is geometrically possible only if a common measure exists for these geometrical constructions. The proportions obtained by Kepler are possible if standing waves form in the space of the Solar system, so that the places where planets form coincide with the principal surfaces of standing waves whose source is the central luminary of the Solar system. Similarly, in Chladni's experiments, particles scattered over a vibrating plate collect along the nodal lines as a standing wave forms, moving from the points that vibrate with maximal amplitude to the points where the amplitude of vibration is zero; in space these lines become the "principal surfaces". If the central luminary of the planetary system (or of a satellite system) is considered a source of "gravitational waves" that are reflected, during the initial evolution of the system, from the lower-density environment at its borders, then a standing wave forms with crests and nodes at definite points along the direction of its propagation. By the principle of the unity of the laws of nature, not only the Schrodinger equation but also a superstring pattern with corresponding modes can describe the history of formation and the existence of the macrobodies of the Solar System. The error in several cases in the mentioned calculations does not exceed 10

  11. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different-sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model on a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
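
    A minimal sketch of the subevent-size step described above (drawing radii from the N(>R) ~ R^-2 relation until the summed areas fill the target rupture area) is given below; the fault dimensions and size bounds are illustrative assumptions, not the Northridge or Athens parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        def draw_subevent_radii(target_area, r_min, r_max):
            # Draw subevent radii from a truncated power law with N(>R) ~ R^-2
            # until the summed subevent areas reach the target rupture area.
            radii = []
            while sum(np.pi * r**2 for r in radii) < target_area:
                u = rng.random()
                # inverse-CDF sampling of the truncated R^-2 survival function
                r = r_min / np.sqrt(1.0 - u * (1.0 - (r_min / r_max)**2))
                radii.append(r)
            return np.array(radii)

        # Illustrative numbers only: a 15 km x 10 km fault, subevents of 0.2-5 km radius
        fault_area = 15.0e3 * 10.0e3
        radii = draw_subevent_radii(fault_area, r_min=200.0, r_max=5.0e3)
        print(len(radii), "subevents; area ratio =",
              np.sum(np.pi * radii**2) / fault_area)

    In the full model the subevents would additionally be placed on the fault without overlap and assigned constant-stress-drop slip, which this sketch does not attempt.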

  12. Repeat synoptic sampling reveals drivers of change in carbon and nutrient chemistry of Arctic catchments

    NASA Astrophysics Data System (ADS)

    Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.

    2017-12-01

    Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: (1) distributed, top-down permafrost degradation, and (2) discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network could reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km2, and included three distinct types of Arctic landscapes - mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales between 1 and 20 km2, indicating a continuum of diffuse- and point-source dynamics, depending on solute and catchment characteristics (e.g. reactivity, topography, vegetation, surficial geology). Spatially distributed mass balance revealed conservative transport of DOC and nitrogen, and indicated that there may be strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.

  13. Integration of Heterogeneous Digital Surface Models

    NASA Astrophysics Data System (ADS)

    Boesch, R.; Ginzler, C.

    2011-08-01

    The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images with 25cm and 50cm resolution is used to generate a surface model (ADS-DSM) with 1m resolution covering the whole of Switzerland (approx. 41000 km2). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m2 and is mainly used in applications as an interpolated grid with 2m resolution (LiDAR-DSM). Although both surface models seem to offer a comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition of the ADS-DSM is also stretched over several years and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature extraction applications requiring high resolution data depend on the local accuracy of the surface model used; therefore, precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM-map contains matching codes like high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data were available. Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100 m2) has been implemented using the Open Source project CGAL. The area of each triangle is a measure for the spatial distribution of raw points in this local area. Combining the FOM-map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis that the ADS-DSM is superior due to its better global accuracy of 1m. If the local analysis of the FOM-map within the 100 m2 area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution are sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as a NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using the FOM-map and local triangulation to derive a quality weight for each of the interpolation points.
The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).

  14. Spatial distribution and source apportionment of water pollution in different administrative zones of Wen-Rui-Tang (WRT) river watershed, China.

    PubMed

    Yang, Liping; Mei, Kun; Liu, Xingmei; Wu, Laosheng; Zhang, Minghua; Xu, Jianming; Wang, Fan

    2013-08-01

    Water quality degradation in river systems has caused great concern all over the world. Identifying the spatial distribution and sources of water pollutants is the very first step for efficient water quality management. Water samples collected bimonthly at 12 monitoring sites in 2009 and 2010 were analyzed to determine the spatial distribution of critical parameters and to apportion the sources of pollutants in the Wen-Rui-Tang (WRT) river watershed, near the East China Sea. The 12 monitoring sites were divided into three administrative zones (urban, suburban, and rural), considering differences in land use and population density. Multivariate statistical methods [one-way analysis of variance, principal component analysis (PCA), and absolute principal component score-multiple linear regression (APCS-MLR) methods] were used to investigate the spatial distribution of water quality and to apportion the pollution sources. Results showed that most water quality parameters had no significant difference between the urban and suburban zones, whereas these two zones showed worse water quality than the rural zone. Based on PCA and APCS-MLR analysis, urban domestic sewage and commercial/service pollution, suburban domestic sewage along with fluorine point source pollution, and agricultural nonpoint source pollution with rural domestic sewage pollution were identified as the main pollution sources in the urban, suburban, and rural zones, respectively. Understanding the water pollution characteristics of different administrative zones could provide insights for effective water management policy-making, especially in areas spanning various administrative zones.
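
    For readers unfamiliar with the APCS-MLR step mentioned above, the sketch below illustrates the idea: run PCA on the standardized water-quality matrix, convert the scores to absolute principal component scores by subtracting the score of an artificial zero-concentration sample, and regress each measured parameter on those scores to estimate source contributions. The synthetic data and the choice of three components are illustrative assumptions, not the WRT data set.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.lognormal(mean=1.0, sigma=0.5, size=(48, 8))   # samples x parameters (synthetic)

        scaler = StandardScaler().fit(X)
        Z = scaler.transform(X)
        pca = PCA(n_components=3).fit(Z)          # retained components ~ candidate sources
        scores = pca.transform(Z)

        # Absolute principal component scores: subtract the score of an artificial
        # zero-concentration sample.
        z0 = scaler.transform(np.zeros((1, X.shape[1])))
        apcs = scores - pca.transform(z0)

        # Regress each parameter on the APCS to apportion mean source contributions.
        for j in range(X.shape[1]):
            reg = LinearRegression().fit(apcs, X[:, j])
            contrib = reg.coef_ * apcs.mean(axis=0)
            print(f"parameter {j}: intercept {reg.intercept_:.2f}, "
                  f"mean source contributions {np.round(contrib, 2)}")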

  15. Magnetoacoustic Tomography with Magnetic Induction: Bioimpedance reconstruction through vector source imaging

    PubMed Central

    Mariappan, Leo; He, Bin

    2013-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a technique proposed to reconstruct the conductivity distribution in biological tissue at ultrasound imaging resolution. A magnetic pulse is used to generate eddy currents in the object, which in the presence of a static magnetic field induce Lorentz-force-based acoustic waves in the medium. These time-resolved acoustic waves are collected with ultrasound transducers and, in the present work, are used to reconstruct the current source which gives rise to the MAT-MI acoustic signal using vector imaging point spread functions. The reconstructed source is then used to estimate the conductivity distribution of the object. Computer simulations and phantom experiments are performed to demonstrate conductivity reconstruction through vector source imaging in a circular scanning geometry with a limited-bandwidth, finite-size piston transducer. The results demonstrate that the MAT-MI approach is capable of conductivity reconstruction in a physical setting. PMID:23322761

  16. Experimental and Analytical Studies of Shielding Concepts for Point Sources and Jet Noises.

    NASA Astrophysics Data System (ADS)

    Wong, Raymond Lee Man

    This analytical and experimental study explores concepts for jet noise shielding. Model experiments centre on solid planar shields, simulating engine-over-wing installations, and 'sugar scoop' shields. The tradeoff on effective shielding length is set by interference 'edge noise' as the shield trailing edge approaches the spreading jet. Edge noise is minimized by (i) hyperbolic cutouts which trim off the portions of most intense interference between the jet flow and the barrier and (ii) hybrid shields--a thermal refractive extension (a flame); for (ii) the tradeoff is combustion noise. In general, shielding attenuation increases steadily with frequency, following low frequency enhancement by edge noise. Although broadband attenuation is typically only several dB, the reduction of the subjectively weighted perceived noise levels is higher. In addition, calculated ground contours of peak PN dB show a substantial contraction due to shielding: this reaches 66% for one of the 'sugar scoop' shields for the 90 PN dB contour. The experiments are complemented by analytical predictions. They are divided into an engineering scheme for jet noise shielding and a more rigorous analysis for point source shielding. The former approach combines point source shielding with a suitable jet source distribution. The results are synthesized into a predictive algorithm for jet noise shielding: the jet is modelled as a line distribution of incoherent sources with narrow-band frequency proportional to (axial distance)(exp -1). The predictive version agrees well with experiment (1 to 1.5 dB) up to moderate frequencies. The insertion loss deduced from the point source measurements for semi-infinite as well as finite rectangular shields agrees rather well with theoretical calculation based on the exact half plane solution and the superposition of asymptotic closed-form solutions. An approximate theory, the Maggi-Rubinowicz line integral, is found to yield reasonable predictions for thin barriers including cutouts if a certain correction is applied. The more exact integral equation approach (solved numerically) is applied to a more demanding geometry: a half round sugar scoop shield. It is found that the solutions of the integral equation derived from the Helmholtz formula in normal-derivative form show satisfactory agreement with measurements.

  17. A new continuous light source for high-speed imaging

    NASA Astrophysics Data System (ADS)

    Paton, R. T.; Hall, R. E.; Skews, B. W.

    2017-02-01

    Xenon arc lamps have been identified as a suitable continuous light source for high-speed imaging, specifically high-speed schlieren and shadowgraphy. One issue when setting up such systems is the time that it takes to reduce a finite source to the approximation of a point source for z-type schlieren. A preliminary design of a compact compound lens for use with a commercial xenon arc lamp was tested for suitability. While it was found that there is some dimming of the illumination at the spot periphery, the overall spectral and luminance distribution of the compact source is quite acceptable, especially considering the time benefit that it represents.

  18. A 6.7 GHz Methanol Maser Survey at High Galactic Latitudes

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xi; Shen, Zhi-Qiang; Li, Xiao-Qiong; Wang, Jun-Zhi; Jiang, Dong-Rong; Li, Juan; Dong, Jian; Wu, Ya-Jun; Qiao, Hai-Hua; Ren, Zhiyuan

    2017-09-01

    We performed a systematic 6.7 GHz Class II methanol maser survey using the Shanghai Tianma Radio Telescope toward targets selected from the all-sky Wide-Field Infrared Survey Explorer (WISE) point source catalog. In this paper, we report the results of the survey for targets at high Galactic latitudes, i.e., |b| > 2°. Of the 1473 selected WISE point sources at high latitude, maser emission was detected toward 17 pointed positions, which are actually associated with 12 sources, reflecting the rarity (1%-2%) of methanol masers away from the Galactic plane. Of the 12 sources, 3 are detected for the first time. The spectral energy distributions at infrared bands show that these newly detected masers occur in massive star-forming regions. Compared to previous detections, the methanol masers change significantly in both spectral profile and flux density. The infrared WISE images show that almost all of these masers are located at the positions of bright WISE point sources. Compared to methanol masers in the Galactic plane, these high-latitude methanol masers are good tracers for investigating the physics and kinematics around massive young stellar objects, because they are believed to be less affected by the surrounding cluster environment.

  19. Developing an Open Source, Reusable Platform for Distributed Collaborative Information Management in the Early Detection Research Network

    NASA Technical Reports Server (NTRS)

    Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen

    2012-01-01

    For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.

  20. Safety of packaged water distribution limited by household recontamination in rural Cambodia.

    PubMed

    Holman, Emily J; Brown, Joe

    2014-06-01

    Packaged water treatment schemes represent a growing model for providing safer water in low-income settings, yet post-distribution recontamination of treated water may limit this approach. This study evaluates drinking water quality and household water handling practices in a floating village in Tonlé Sap Lake, Cambodia, through a pilot cross-sectional study of 108 households, approximately half of which used packaged water as the main household drinking water source. We hypothesized that households purchasing drinking water from local packaged water treatment plants would have microbiologically improved drinking water at the point of consumption. However, we found no meaningful difference in microbiological drinking water quality between households using packaged, treated water and those collecting water from other sources, including untreated surface water. Households' water storage and handling practices and home hygiene may have contributed to recontamination of drinking water. Further measures to protect water quality at the point of use may be required even if water is treated and packaged in narrow-mouthed containers.

  1. Dynamic analysis of ultrasonically levitated droplet with moving particle semi-implicit and distributed point source method

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Nakamura, Ryohei; Tanaka, Hiroki; Nakamura, Kentaro

    2015-07-01

    Numerical analysis of an ultrasonically levitated droplet with a free surface boundary is discussed. The droplet is known to change its shape from a sphere to a spheroid when it is suspended in a standing wave owing to the acoustic radiation force. However, few numerical simulation studies of this phenomenon, including the fluid dynamics inside the droplet, have been reported. In this paper, a coupled analysis using the distributed point source method (DPSM) and the moving particle semi-implicit (MPS) method, neither of which requires grids or meshes, so the moving boundary is handled with ease, is proposed. A droplet levitated in a plane standing wave field between a piston-vibrating ultrasonic transducer and a reflector is simulated with the DPSM-MPS coupled method. The dynamic change in the spheroidal shape of the droplet is successfully reproduced numerically, and the gravitational center and the change in the spheroidal aspect ratio are discussed and compared with the previous literature.
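
    The essence of the DPSM half of that coupling is to replace radiating surfaces by layers of point sources and to superpose their spherical waves; the sketch below shows this for a piston-like transducer face. The 40 kHz drive, the 20 mm aperture, and the uniform source strengths are illustrative assumptions, not the parameters of the paper.

        import numpy as np

        c = 343.0                      # speed of sound in air, m/s
        f = 40.0e3                     # assumed levitator drive frequency, Hz
        k = 2.0 * np.pi * f / c

        # point sources distributed over a 20 mm circular transducer face at z = 0
        xs = np.linspace(-0.01, 0.01, 21)
        X, Y = np.meshgrid(xs, xs)
        mask = X**2 + Y**2 <= 0.01**2
        src = np.column_stack([X[mask], Y[mask], np.zeros(mask.sum())])
        strength = np.ones(len(src)) / len(src)    # uniform piston-like excitation

        def pressure(field_pts):
            # complex pressure as the superposition of spherical waves exp(ikr)/r
            r = np.linalg.norm(field_pts[:, None, :] - src[None, :, :], axis=-1)
            return (strength * np.exp(1j * k * r) / r).sum(axis=1)

        # sample the field along the levitator axis (reflector and droplet not modeled here)
        z = np.linspace(0.005, 0.05, 10)
        axis_pts = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
        print(np.abs(pressure(axis_pts)))

    In the full DPSM-MPS analysis, additional point-source layers represent the reflector and the moving droplet surface, and their strengths are solved from the boundary conditions at every time step.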

  2. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure.

    PubMed

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-20

    This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without degrading system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events.

  3. Occurrence, spatial distribution, and ecological risks of typical hydroxylated polybrominated diphenyl ethers in surface sediments from a large freshwater lake of China.

    PubMed

    Liu, Dan; Wu, Sheng-Min; Zhang, Qin; Guo, Min; Cheng, Jie; Zhang, Sheng-Hu; Yao, Cheng; Chen, Jian-Qiu

    2017-02-01

    Hydroxylated polybrominated diphenyl ethers (OH-PBDEs) have been frequently observed in marine aquatic environments; however, little information is available on the occurrence of these compounds in freshwater aquatic environments, including freshwater lakes. In this study, we investigated the occurrence and spatial distribution of typical OH-PBDEs, including 2'-OH-BDE-68, 3-OH-BDE-47, 5-OH-BDE-47, and 6-OH-BDE-47, in surface sediments of Taihu Lake. 3-OH-BDE-47 was the predominant congener, followed by 5-OH-BDE-47, 2'-OH-BDE-68, and 6-OH-BDE-47. Distributions of these compounds differ drastically between sampling sites, which may be a result of differences in nearby point sources, such as the discharge of industrial wastewater and e-waste leachate. The positive correlation between ∑OH-PBDEs and total organic carbon (TOC) was moderate (r = 0.485, p < 0.05, with sites S3 and S15 excluded due to point-source pollution), suggesting that OH-PBDE concentrations were controlled by sediment TOC content as well as other factors. The pairwise correlations between the concentrations of these compounds suggest that they may have similar input sources and environmental behavior. The target compounds in the sediments of Lake Taihu pose low risks to aquatic organisms. Results show that OH-PBDEs in Lake Taihu are largely dependent on pollution sources. Because of bioaccumulation and subsequent harmful effects on aquatic organisms, the concentrations of OH-PBDEs in freshwater ecosystems are of environmental concern.

  4. The Chandra Source Catalog 2.0: the Galactic center region

    NASA Astrophysics Data System (ADS)

    Civano, Francesca Maria; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    The second release of the Chandra Source Catalog (CSC 2.0) comprises all 10,382 ACIS and HRC-I imaging observations taken by Chandra and released publicly through the end of 2014. Among these, 534 single observations surrounding the Galactic center are included, covering a total area of ~19 deg2 and a total exposure time of ~9 Ms. The 534 single observations were merged into 379 stacks (overlapping observations with aim-points within 60") to improve the flux limit for source detection purposes. Thanks to the combination of the point source detection algorithm with the maximum likelihood technique used to assess the source significance, ~21,000 detections are listed in the CSC 2.0 for this field only, 80% of which are unique sources. The central region of this field around the Sgr A* location has the deepest exposure of 2.2 Ms and the highest source density, with ~5000 sources. In this poster, we present details about this region, including source distribution and density, coverage, and exposure. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  5. The first Extreme Ultraviolet Explorer source catalog

    NASA Technical Reports Server (NTRS)

    Bowyer, S.; Lieu, R.; Lampton, M.; Lewis, J.; Wu, X.; Drake, J. J.; Malina, R. F.

    1994-01-01

    The Extreme Ultraviolet Explorer (EUVE) has conducted an all-sky survey to locate and identify point sources of emission in four extreme ultraviolet wavelength bands centered at approximately 100, 200, 400, and 600 A. A companion deep survey of a strip along half the ecliptic plane was simultaneously conducted. In this catalog we report the sources found in these surveys using rigorously defined criteria uniformly applied to the data set. These are the first surveys to be made in the three longer wavelength bands, and a substantial number of sources were detected in these bands. We present a number of statistical diagnostics of the surveys, including their source counts, their sensitivities, and their positional error distributions. We provide a separate list of those sources reported in the EUVE Bright Source List which did not meet our criteria for inclusion in our primary list. We also provide improved count rate and position estimates for a majority of these sources based on the improved methodology used in this paper. In total, this catalog lists 410 point sources, of which 372 have plausible optical, ultraviolet, or X-ray identifications, which are also listed.

  6. Thermal power systems point-focusing distributed receiver technology project. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Lucas, J.

    1979-01-01

    The goal of this project is thermal or electrical power from the sun's radiated energy through Point-Focusing Distributed Receiver Technology. The energy thus produced must be technically, as well as economically, competitive with other energy sources. This project supports the industrial development of the technology required to achieve the above-stated goal. Solar energy is concentrated by either a reflecting surface or a lens onto a receiver, where it is transferred to a working liquid or gas. Receiver temperatures are in the 1000-2000 F range. Conceptual design studies are expected to identify power conversion units with a viable place in the solar energy future. Rankine and Brayton cycle engines are under investigation. This report details the Jet Propulsion Laboratory's accomplishments with point-focusing technology in FY 1978.

  7. VizieR Online Data Catalog: Galactic Center old stars distribution (Gallego-Cano+, 2018)

    NASA Astrophysics Data System (ADS)

    Gallego-Cano, E.; Schoedel, R.; Nogueras-Lara, F.; Gallego-Calvente, A. T.; Amaro-Seoane, P.; Baumgardt, H.

    2017-09-01

    Photometric and astrometric parameters for the point source detections in the central parsec of the Galactic Centre. As described in the manuscript, we work on four pointings, which we do not combine into a final mosaic in order to avoid distortion issues. We analyse those four pointings in four different ways, applying different sets of StarFinder parameters. Therefore, we present 16 tables, one for each combination of pointing and StarFinder parameter set. We present the extinction- and completeness-corrected stellar density in three different magnitude ranges. The tables are used to produce Figure 9 in the paper. (20 data files).

  8. Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kipp, C. R.; Bernhard, R. J.

    1985-01-01

    A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.

  9. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
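
    For context, a minimal sketch of the standard time-averaged Gaussian plume from an elevated point source (the baseline that the fluctuating-plume treatment extends) is given below; the emission rate, stack height, and power-law dispersion coefficients are illustrative stand-ins for a single stability class, not values from the paper.

        import numpy as np

        def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=50.0):
            # Time-averaged concentration downwind of an elevated point source.
            # Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m).
            sigma_y = 0.08 * x * (1.0 + 0.0001 * x) ** -0.5     # assumed dispersion curves
            sigma_z = 0.06 * x * (1.0 + 0.0015 * x) ** -0.5
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                        + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))   # ground reflection
            return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # ground-level centerline concentrations 100 m to 2 km downwind
        x = np.linspace(100.0, 2000.0, 5)
        print(gaussian_plume(x, y=0.0, z=0.0))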

  10. An analytically soluble problem in fully nonlinear statistical gravitational lensing

    NASA Technical Reports Server (NTRS)

    Schneider, P.

    1987-01-01

    The amplification probability distribution p(I)dI for a point source behind a random star field which acts as the deflector exhibits an I(exp -3) behavior for large amplification, as can be shown from the universality of the lens equation near critical lines. In this paper it is shown that the amplitude of the I(exp -3) tail can be derived exactly for arbitrary mass distribution of the stars, surface mass density of stars and smoothly distributed matter, and large-scale shear. This is then compared with the corresponding linear result.
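
    For orientation, the same tail already follows from the simplest configuration, a single point-mass lens (standard microlensing notation, not the paper's general derivation): with A the amplification and u the source offset in Einstein radii,

        A(u) = \frac{u^{2}+2}{u\sqrt{u^{2}+4}} \approx \frac{1}{u} \quad (u \ll 1),
        \qquad
        P(>A) \propto \pi u^{2} \propto A^{-2}
        \;\Rightarrow\;
        p(A) \propto A^{-3},

    which is the I(exp -3) behavior quoted above; the paper's contribution is the exact amplitude of this tail for general deflector populations.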

  11. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to computer-model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important for realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated ion-trajectories/Poisson-electric-fields successive-iteration process. Second, space charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space charge constraints on ion emission from surfaces can be incorporated using Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. Finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.

  12. A highly embedded protostar in SFO 18: IRAS 05417+0907

    NASA Astrophysics Data System (ADS)

    Saha, Piyali; Gopinathan, Maheswar; Puravankara, Manoj; Sharma, Neha; Soam, Archana

    2018-04-01

    Bright-rimmed clouds, located at the periphery of relatively evolved HII regions, are considered to be sites of star formation possibly triggered by the implosion caused by the ionizing radiation from nearby massive stars. SFO 18 is one such region, showing a bright rim on the side facing the O-type star λ Ori. A point source, IRAS 05417+0907, is detected towards the high-density region of the cloud. A molecular outflow has been found to be associated with the source. The outflow is directed towards a Herbig-Haro object, HH 175. From the Spitzer and WISE observations, we show evidence of a physical connection between the molecular outflow, IRAS 05417+0907, and the HH object. The spectral energy distribution constructed using multi-wavelength data shows that the point source is most likely a highly embedded protostar.

  13. Analysis of temporal decay of diffuse broadband sound fields in enclosures by decomposition in powers of an absorption parameter

    NASA Astrophysics Data System (ADS)

    Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben

    2005-09-01

    An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.

  14. Epidemiological Perspectives on Maltreatment Prevention

    ERIC Educational Resources Information Center

    Wulczyn, Fred

    2009-01-01

    Fred Wulczyn explores how data on the incidence and distribution of child maltreatment shed light on planning and implementing maltreatment prevention programs. He begins by describing and differentiating among the three primary sources of national data on maltreatment. Wulczyn then points out several important patterns in the data. The first…

  15. 40 CFR 423.10 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ELECTRIC POWER GENERATING POINT SOURCE CATEGORY § 423.10 Applicability. The provisions of this part are... engaged in the generation of electricity for distribution and sale which results primarily from a process utilizing fossil-type fuel (coal, oil, or gas) or nuclear fuel in conjunction with a thermal cycle employing...

  16. 40 CFR 423.10 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ELECTRIC POWER GENERATING POINT SOURCE CATEGORY § 423.10 Applicability. The provisions of this part are... engaged in the generation of electricity for distribution and sale which results primarily from a process utilizing fossil-type fuel (coal, oil, or gas) or nuclear fuel in conjunction with a thermal cycle employing...

  17. 40 CFR 423.10 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ELECTRIC POWER GENERATING POINT SOURCE CATEGORY § 423.10 Applicability. The provisions of this part are... engaged in the generation of electricity for distribution and sale which results primarily from a process utilizing fossil-type fuel (coal, oil, or gas) or nuclear fuel in conjunction with a thermal cycle employing...

  18. 40 CFR 423.10 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ELECTRIC POWER GENERATING POINT SOURCE CATEGORY § 423.10 Applicability. The provisions of this part are... engaged in the generation of electricity for distribution and sale which results primarily from a process utilizing fossil-type fuel (coal, oil, or gas) or nuclear fuel in conjunction with a thermal cycle employing...

  19. 40 CFR 423.10 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ELECTRIC POWER GENERATING POINT SOURCE CATEGORY § 423.10 Applicability. The provisions of this part are... engaged in the generation of electricity for distribution and sale which results primarily from a process utilizing fossil-type fuel (coal, oil, or gas) or nuclear fuel in conjunction with a thermal cycle employing...

  20. Data-based diffraction kernels for surface waves from convolution and correlation processes through active seismic interferometry

    NASA Astrophysics Data System (ADS)

    Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc

    2018-05-01

    We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.
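
    The elementary operation behind these kernels is a two-point correlation stacked over the source grid; the sketch below shows that step on synthetic traces (the geometry, noise level, and 25-sample delay are placeholder assumptions, not survey values).

        import numpy as np

        rng = np.random.default_rng(2)
        n_sources, n_samples, dt = 200, 1024, 0.004   # illustrative acquisition geometry

        # placeholder records at receivers A and B for every source position
        rec_a = rng.standard_normal((n_sources, n_samples))
        lag_ab = 25                                   # assumed A-to-B delay, in samples
        rec_b = np.roll(rec_a, lag_ab, axis=1) + 0.5 * rng.standard_normal((n_sources, n_samples))

        def correlate_and_stack(x, y):
            # cross-correlate each source's pair of traces (via FFT) and stack over sources
            nfft = 2 * x.shape[1]
            spec = np.fft.rfft(x, nfft) * np.conj(np.fft.rfft(y, nfft))
            cc = np.fft.irfft(spec, nfft)
            return np.fft.fftshift(cc.sum(axis=0))    # zero lag at the center

        cc = correlate_and_stack(rec_b, rec_a)
        lags = (np.arange(cc.size) - cc.size // 2) * dt
        print("estimated A-to-B delay: %.3f s (true %.3f s)"
              % (lags[np.argmax(cc)], lag_ab * dt))

    A convolution-based kernel follows the same pattern with the conjugation replaced by a plain product of the two spectra, which is the convolution-versus-correlation distinction discussed above.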

  1. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-08

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source. Therefore, there are limitations to the design accuracy for small, high-power applications with a short distance between the light source and optical system. A design method for compensating for the light distribution of an extended source after the initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and limited design accuracy with multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, the discrete planar optics were designed for LED flash illumination applications. In addition, discrete planar optics for LED illumination were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.
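
    The energy-mapping step itself is easy to state: pair up source angles and target positions that enclose equal fractions of the total energy. The one-dimensional sketch below illustrates it for an assumed Lambertian point-source model and a uniform target; the extended-source correction that is the subject of the paper is not included.

        import numpy as np

        theta = np.linspace(0.0, np.pi / 2, 500)                # emission angle from the LED normal
        intensity = np.cos(theta)                               # assumed Lambertian point-source model
        source_energy = np.cumsum(intensity * np.sin(theta))    # energy within each cone
        source_cdf = source_energy / source_energy[-1]

        x_target = np.linspace(-10.0, 10.0, 500)                # target-plane coordinate (mm)
        target_cdf = np.cumsum(np.ones_like(x_target)) / x_target.size   # uniform illuminance target

        # energy mapping: for each source angle, the target point with equal cumulative energy
        x_of_theta = np.interp(source_cdf, target_cdf, x_target)

        # each (theta, x) pair fixes the redirection a facet must perform; the groove
        # angle at that facet would then follow from refraction or reflection geometry
        for i in range(0, 500, 100):
            print(f"theta = {np.degrees(theta[i]):5.1f} deg  ->  x = {x_of_theta[i]:6.2f} mm")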

  2. The roles of the binodal curve and the spinodal curve in expansions from the supercritical state with flashing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knuth, Eldon L.; Miller, David R.; Even, Uzi

    2014-12-09

    Data extracted from time-of-flight (TOF) measurements made on steady-state He free jets at Göttingen already in 1986 and for pulsed Ne free jets investigated recently at Tel Aviv have been added to an earlier plot of terminal condensed-phase mass fraction x{sub 2∞} as a function of the dimensionless scaling parameter Γ. Γ characterizes the source (fluid species, temperature, pressure and throat diameter); values of x{sub 2∞} are extracted from TOF measurements using conservation of energy in the free-jet expansion. For nozzles consisting of an orifice in a thin plate, the extracted data yield 22 data points which are correlated satisfactorily by a single curve. The Ne free jets were expanded from a conical nozzle with a 20° half angle; the three extracted data points stand together but apart from the aforementioned curve, indicating that the presence of the conical wall influences significantly the expansion and hence the condensation. The 22 data points for the expansions via an orifice consist of 15 measurements with expansions from the gas-phase side of the binodal curve which crossed the binodal curve downstream from the sonic point and 7 measurements with expansions of the gas-phase product of the flashing which occurred after an expansion from the liquid-phase side of the binodal curve crossed the binodal curve upstream from the sonic point. The association of these 22 points with a single curve supports the alternating-phase model for flows with flashing upstream from the sonic point proposed earlier. In order to assess the role of the spinodal curve in such expansions, the spinodal curves for He and Ne were computed using general multi-parameter Helmholtz-free-energy equation-of-state formulations. Then, for the several sets of source-chamber conditions used in the free-jet measurements, thermodynamic states at key locations in the free-jet expansions (binodal curve, sonic point and spinodal curve) were evaluated, with the expansion presumed to be metastable from the binodal curve to the spinodal curve. TOF distributions with more than two peaks (interpreted earlier as superimposed alternating-state TOF distributions) indicated flashing of the metastable flow downstream from the binodal curve but upstream from the sonic point. This relatively early flashing is due apparently to destabilizing interactions with the walls of the source. If the expansion crosses the binodal curve downstream from the nozzle, the metastable fluid does not interact with surfaces and flashing might be delayed until the expansion reaches the spinodal curve. It is concluded that, if the expansion crosses the binodal curve before reaching the sonic point, the resulting metastable fluid downstream from the binodal curve interacts with the adjacent surfaces and flashes into liquid and vapor phases which expand alternately through the nozzle; the two associated alternating TOF distributions are superposed by the chopping process so that the result has the appearance of a single distribution with three peaks.

  3. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    NASA Astrophysics Data System (ADS)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSOs) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and the Two Micron All Sky Survey. We present a study of over 900 MYSO and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in the context of the observed structure of the Galaxy. The distribution of MYSO and UCHII regions is found to be spatially correlated with the spiral arms and the galactic bar. We examine the radial distribution of MYSOs and UCHII regions, find variations in the star formation rate between the inner and outer Galaxy, and discuss the implications for star formation throughout the galactic disc.

  4. 1SXPS: A Deep Swift X-Ray Telescope Point Source Catalog with Light Curves and Spectra

    NASA Technical Reports Server (NTRS)

    Evans, P. A.; Osborne, J. P.; Beardmore, A. P.; Page, K. L.; Willingale, R.; Mountford, C. J.; Pagani, C.; Burrows, D. N.; Kennea, J. A.; Perri, M.

    2013-01-01

    We present the 1SXPS (Swift-XRT point source) catalog of 151,524 X-ray point sources detected by the Swift-XRT in 8 yr of operation. The catalog covers 1905 sq deg distributed approximately uniformly on the sky. We analyze the data in two ways. First we consider all observations individually, for which we have a typical sensitivity of approximately 3 × 10(exp -13) erg cm(exp -2) s(exp -1) (0.3-10 keV). Then we co-add all data covering the same location on the sky: these images have a typical sensitivity of approximately 9 × 10(exp -14) erg cm(exp -2) s(exp -1) (0.3-10 keV). Our sky coverage is nearly 2.5 times that of 3XMM-DR4, although the catalog is a factor of approximately 1.5 less sensitive. The median position error is 5.5 (90% confidence), including systematics. Our source detection method improves on that used in previous X-ray Telescope (XRT) catalogs and we report greater than 68,000 new X-ray sources. The goals and observing strategy of the Swift satellite allow us to probe source variability on multiple timescales, and we find approximately 30,000 variable objects in our catalog. For every source we give positions, fluxes, time series (in four energy bands and two hardness ratios), estimates of the spectral properties, spectra and spectral fits for the brightest sources, and variability probabilities in multiple energy bands and timescales.

  5. Distribution System Upgrade Unit Cost Database

    DOE Data Explorer

    Horowitz, Kelsey

    2017-11-30

    This database contains unit cost information for different components that may be used to integrate distributed photovoltaic (D-PV) systems onto distribution systems. Some of these upgrades and costs may also apply to integration of other distributed energy resources (DER). Which components are required, and how many of each, is system-specific and should be determined by analyzing the effects of distributed PV at a given penetration level on the circuit of interest in combination with engineering assessments on the efficacy of different solutions to increase the ability of the circuit to host additional PV as desired. The current state of the distribution system should always be considered in these types of analysis. The data in this database was collected from a variety of utilities, PV developers, technology vendors, and published research reports. Where possible, we have included information on the source of each data point and relevant notes. In some cases where data provided is sensitive or proprietary, we were not able to specify the source, but provide other information that may be useful to the user (e.g. year, location where equipment was installed). NREL has carefully reviewed these sources prior to inclusion in this database. Additional information about the database, data sources, and assumptions is included in the "Unit_cost_database_guide.doc" file included in this submission. This guide provides important information on what costs are included in each entry. Please refer to this guide before using the unit cost database for any purpose.

  6. An experimental study of the impact of trees and urban form on the turbulent dispersion of heavy particles from near ground point sources

    NASA Astrophysics Data System (ADS)

    Stoll, R., II; Christen, A.; Mahaffee, W.; Salesky, S.; Therias, A.; Caitlin, S.

    2016-12-01

    Pollution in the form of small particles has a strong impact on a wide variety of urban processes that play an important role in the function of urban ecosystems and ultimately human health and well-being. As a result, a substantial body of research exists on the sources, sinks, and transport characteristics of urban particulate matter. Most of the existing experimental work examining point sources employed gases (e.g., SF6) as the working medium. Furthermore, the focus of most studies has been on the dispersion of pollutants far from the source location. Here, our focus is on the turbulent dispersion of heavy particles in the near source region of a suburban neighborhood. To this end, we conducted a series of heavy particle releases in the Sunset neighborhood of Vancouver, Canada during June, 2017. The particles were dispersed from a near ground point source at two different locations. The Sunset neighborhood is composed mostly of single dwelling detached houses and has been used in numerous previous urban studies. One of the release points was just upwind of a 4-way intersection and the other in the middle of a contiguous block of houses. Each location had a significant density of trees. A minimum of four different successful release events were conducted at each site. During each release, fluorescing micro particles (mean diameter approx. 30 micron) were released from ultrasonic atomizer nozzles for a duration of approximately 20 minutes. The particles were sampled at 50 locations (1.5 m height) in the area downwind of the release over distances from 1-15 times the mean canopy height (approximately 6 m) using rotating impaction traps. In addition to the 50 sampler locations, instantaneous wind velocities were measured with eight sonic anemometers distributed horizontally and vertically throughout the release area. The resulting particle plume distributions indicate a strong impact of local urban form in the near source region and a high degree of sensitivity to the local wind direction measured from the sonic anemometers. In addition to presenting the experimental data, initial comparisons to a Lagrangian particle dispersion model driven by a mass consistent diagnostic wind field will be presented.

  7. An experimental study of the impact of trees and urban form on the turbulent dispersion of heavy particles from near ground point sources

    NASA Astrophysics Data System (ADS)

    Stoll, R., II; Christen, A.; Mahaffee, W.; Salesky, S.; Therias, A.; Caitlin, S.

    2017-12-01

    Pollution in the form of small particles has a strong impact on a wide variety of urban processes that play an important role in the function of urban ecosystems and ultimately human health and well-being. As a result, a substantial body of research exists on the sources, sinks, and transport characteristics of urban particulate matter. Most of the existing experimental work examining point sources employed gases (e.g., SF6) as the working medium. Furthermore, the focus of most studies has been on the dispersion of pollutants far from the source location. Here, our focus is on the turbulent dispersion of heavy particles in the near source region of a suburban neighborhood. To this end, we conducted a series of heavy particle releases in the Sunset neighborhood of Vancouver, Canada during June, 2017. The particles were dispersed from a near ground point source at two different locations. The Sunset neighborhood is composed mostly of single dwelling detached houses and has been used in numerous previous urban studies. One of the release points was just upwind of a 4-way intersection and the other in the middle of a contiguous block of houses. Each location had a significant density of trees. A minimum of four different successful release events were conducted at each site. During each release, fluorescing micro particles (mean diameter approx. 30 micron) were released from ultrasonic atomizer nozzles for a duration of approximately 20 minutes. The particles were sampled at 50 locations (1.5 m height) in the area downwind of the release over distances from 1-15 times the mean canopy height (approximately 6 m) using rotating impaction traps. In addition to the 50 sampler locations, instantaneous wind velocities were measured with eight sonic anemometers distributed horizontally and vertically throughout the release area. The resulting particle plume distributions indicate a strong impact of local urban form in the near source region and a high degree of sensitivity to the local wind direction measured from the sonic anemometers. In addition to presenting the experimental data, initial comparisons to a Lagrangian particle dispersion model driven by a mass consistent diagnostic wind field will be presented.

  8. Dosimetry for a uterine cervix cancer treatment

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ponce, Miguel; Rodríguez-Villafuerte, Mercedes; Sánchez-Castro, Ricardo

    2003-09-01

    The dose distribution around the 3M 137Cs brachytherapy source as well as the same source inside the Amersham ASN 8231 applicator was measured using thermoluminescent dosimeters and radiochromic films. Some of the results were compared with those obtained from a Monte Carlo simulation and a good agreement was observed. The teletherapy dose distribution was measured using a pin-point ionization chamber. In addition, the experimental measurements and the Monte Carlo results were used to estimate the dose received in the rectum and bladder of a hypothetical patient treated with brachytherapy and compared with the dose distribution obtained from the Hospital's brachytherapy planning system. A 20% dose reduction to the rectum and bladder was observed in both the Monte Carlo and experimental measurements, compared with the results of the planning system, which results in better dose control for these structures.

  9. Acoustic radiation from the submerged circular cylindrical shell treated with active constrained layer damping

    NASA Astrophysics Data System (ADS)

    Yuan, Li-Yun; Xiang, Yu; Lu, Jing; Jiang, Hong-Hua

    2015-12-01

    Based on the transfer matrix method for a circular cylindrical shell treated with active constrained layer damping (ACLD), combined with the analytical solution of the Helmholtz equation for a point source, a multi-point multipole virtual source simulation method is proposed for the first time for solving the acoustic radiation problem of a submerged ACLD shell. In this approach, virtual point sources are assumed to be evenly distributed on the axial line of the cylindrical shell, and the sound pressure is written as a sum of a wave-function series with undetermined coefficients; the method is shown to reproduce accurately the radiated acoustic pressure of pulsating and oscillating spheres, as well as that of a stiffened cylindrical shell. The number of virtual distributed point sources and the truncation of the wave-function series needed to approximate the radiated acoustic pressure of an ACLD cylindrical shell are then discussed. Applying this method, the radiated acoustic pressure of a submerged ACLD cylindrical shell is examined in detail for different boundary conditions, thicknesses of the viscoelastic and piezoelectric layers, feedback gains for the piezoelectric layer, and ACLD coverage. Results show that a thicker piezoelectric layer, a larger velocity feedback gain, and larger ACLD coverage generally give a better damping effect for the whole structure, whereas a thicker viscoelastic layer does not always yield better acoustic characteristics. Project supported by the National Natural Science Foundation of China (Grant Nos. 11162001, 11502056, and 51105083), the Natural Science Foundation of Guangxi Zhuang Autonomous Region, China (Grant No. 2012GXNSFAA053207), the Doctor Foundation of Guangxi University of Science and Technology, China (Grant No. 12Z09), and the Development Project of the Key Laboratory of Guangxi Zhuang Autonomous Region, China (Grant No. 1404544).

  10. NuSTAR Hard X-Ray Survey of the Galactic Center Region. II. X-Ray Point Sources

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Mori, Kaya; Hailey, Charles J.; Nynka, Melania; Zhang, Shou; Gotthelf, Eric; Fornasini, Francesca M.; Krivonos, Roman; Bauer, Franz; Perez, Kerstin; et al.

    2016-01-01

    We present the first survey results of hard X-ray point sources in the Galactic Center (GC) region by NuSTAR. We have discovered 70 hard (3-79 keV) X-ray point sources in a 0.6 deg^2 region around Sgr A* with a total exposure of 1.7 Ms, and 7 sources in the Sgr B2 field with 300 ks. We identify clear Chandra counterparts for 58 NuSTAR sources and assign candidate counterparts for the remaining 19. The NuSTAR survey reaches X-ray luminosities of approx. 4 × 10^32 and approx. 8 × 10^32 erg/s at the GC (8 kpc) in the 3-10 and 10-40 keV bands, respectively. The source list includes three persistent luminous X-ray binaries (XBs) and the likely run-away pulsar called the Cannonball. New source-detection significance maps reveal a cluster of hard (>10 keV) X-ray sources near the Sgr A diffuse complex with no clear soft X-ray counterparts. The severe extinction observed in the Chandra spectra indicates that all the NuSTAR sources are in the central bulge or are of extragalactic origin. Spectral analysis of relatively bright NuSTAR sources suggests that magnetic cataclysmic variables constitute a large fraction (>40%-60%). Both spectral analysis and log N-log S distributions of the NuSTAR sources indicate that the X-ray spectra of the NuSTAR sources should have kT > 20 keV on average for a single-temperature thermal plasma model or an average photon index of Gamma = 1.5-2 for a power-law model. These findings suggest that the GC X-ray source population may contain a larger fraction of XBs with high plasma temperatures than the field population.

  11. Distribution patterns of mercury in Lakes and Rivers of northeastern North America

    USGS Publications Warehouse

    Dennis, Ian F.; Clair, Thomas A.; Driscoll, Charles T.; Kamman, Neil; Chalmers, Ann T.; Shanley, Jamie; Norton, Stephen A.; Kahl, Steve

    2005-01-01

    We assembled 831 data points for total mercury (Hgt) and 277 overlapping points for methyl mercury (CH3Hg+) in surface waters from Massachusetts, USA to the Island of Newfoundland, Canada from State, Provincial, and Federal government databases. These geographically indexed values were used to determine: (a) if large-scale spatial distribution patterns existed and (b) whether there were significant relationships between the two main forms of aquatic Hg as well as with total organic carbon (TOC), a well-known complexer of metals. We analyzed the catchments where samples were collected using a Geographical Information System (GIS) approach, calculating catchment sizes, mean slope, and mean wetness index. Our results show two main spatial distribution patterns. We detected loci of high Hgt values near urbanized regions of Boston MA and Portland ME. However, except for one unexplained exception, the highest Hgt and CH3Hg+ concentrations were located in regions far from obvious point sources. These correlated to topographically flat (and thus wet) areas that we relate to wetland abundances. We show that aquatic Hgt and CH3Hg+ concentrations are generally well correlated with TOC and with each other. Over the region, CH3Hg+ concentrations are typically approximately 15% of Hgt. There is an exception in the Boston region where CH3Hg+ is low compared to the high Hgt values. This is probably due to the proximity of point sources of inorganic Hg and a lack of wetlands. We also attempted to predict Hg concentrations in water with statistical models using catchment features as variables. We were only able to produce statistically significant predictive models in some regions due to the lack of suitable digital information, and because data ranges in some regions were too narrow for meaningful regression analyses.

  12. Estimation of sulphur dioxide emission rate from a power plant based on the remote sensing measurement with an imaging-DOAS instrument

    NASA Astrophysics Data System (ADS)

    Chong, Jihyo; Kim, Young J.; Baek, Jongho; Lee, Hanlim

    2016-10-01

    Major anthropogenic sources of sulphur dioxide in the troposphere include point sources such as power plants and combustion-derived industrial sources. Spatially resolved remote sensing of atmospheric trace gases is desirable for better estimation and validation of emission from those sources. It has been reported that Imaging Differential Optical Absorption Spectroscopy (I-DOAS) technique can provide the spatially resolved two-dimensional distribution measurement of atmospheric trace gases. This study presents the results of I-DOAS observations of SO2 from a large power plant. The stack plume from the Taean coal-fired power plant was remotely sensed with an I-DOAS instrument. The slant column density (SCD) of SO2 was derived by data analysis of the absorption spectra of the scattered sunlight measured by an I-DOAS over the power plant stacks. Two-dimensional distribution of SO2 SCD was obtained over the viewing window of the I-DOAS instrument. The measured SCDs were converted to mixing ratios in order to estimate the rate of SO2 emission from each stack. The maximum mixing ratio of SO2 was measured to be 28.1 ppm with a SCD value of 4.15×10^17 molecules/cm^2. Based on the exit velocity of the plume from the stack, the emission rate of SO2 was estimated to be 22.54 g/s. Remote sensing of SO2 with an I-DOAS instrument can be very useful for independent estimation and validation of the emission rates from major point sources as well as area sources.
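
    As an aside on the arithmetic involved, the sketch below shows how a row of SO2 slant column densities across a plume cross-section, combined with an assumed pixel width and plume exit velocity, converts into a mass emission rate. All numbers are illustrative placeholders, not values from this study.

```python
import numpy as np

# Hypothetical SO2 slant column densities [molecules/cm^2] along one horizontal
# cut through the plume (one value per I-DOAS image column); illustrative only.
scd = np.array([0.5, 1.2, 2.9, 4.15, 3.1, 1.4, 0.6]) * 1e17
pixel_width_cm = 500.0        # assumed ground-projected pixel width: 5 m
exit_velocity_cm_s = 800.0    # assumed plume exit velocity: 8 m/s

# Integrate the columns across the plume cross-section and multiply by the
# transport (exit) velocity to obtain a molecular flux [molecules/s].
molecular_flux = scd.sum() * pixel_width_cm * exit_velocity_cm_s

# Convert to a mass emission rate [g/s] with the molar mass of SO2 (64 g/mol).
AVOGADRO = 6.022e23
emission_rate_g_s = molecular_flux / AVOGADRO * 64.0
print(f"estimated SO2 emission rate: {emission_rate_g_s:.1f} g/s")
```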

  13. Heterogeneity of direct aftershock productivity of the main shock rupture

    NASA Astrophysics Data System (ADS)

    Guo, Yicun; Zhuang, Jiancang; Hirata, Naoshi; Zhou, Shiyong

    2017-07-01

    The epidemic type aftershock sequence (ETAS) model is widely used to describe and analyze the clustering behavior of seismicity. Instead of regarding large earthquakes as point sources, the finite-source ETAS model treats them as ruptures that extend in space. Each earthquake rupture consists of many patches, and each patch triggers its own aftershocks isotropically. We design an iterative algorithm to invert the unobserved fault geometry based on the stochastic reconstruction method. This model is applied to analyze the Japan Meteorological Agency (JMA) catalog during 1964-2014. We take six great earthquakes with magnitudes >7.5 after 1980 as finite sources and reconstruct the aftershock productivity patterns on each rupture surface. Compared with the point-source ETAS model, we find the following: (1) the finite-source model improves the data fitting; (2) direct aftershock productivity is heterogeneous on the rupture plane; (3) the triggering abilities of M5.4+ events are enhanced; (4) the background rate is higher in the off-fault region and lower in the on-fault region for the Tohoku earthquake, while high probabilities of direct aftershocks are distributed all over the source region in the modified model; (5) the triggering abilities of five main shocks become 2-6 times higher after taking the rupture geometries into consideration; and (6) the trends of the cumulative background rate are similar in both models, indicating the same levels of detection ability for seismicity anomalies. Moreover, correlations between aftershock productivity and slip distributions imply that aftershocks within rupture faults are adjustments to coseismic stress changes due to slip heterogeneity.
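
    For readers unfamiliar with the model, the following minimal sketch evaluates the conditional intensity of the standard point-source ETAS model (the baseline that the finite-source version extends); the catalogue and parameter values are illustrative, not the fitted JMA values.

```python
import numpy as np

def etas_intensity(t, history, mu, K, alpha, c, p, m_c):
    """Point-source ETAS conditional intensity at time t:
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m_c)) * (t - t_i + c)**(-p),
    summed over past events (t_i, m_i) with t_i < t."""
    lam = mu
    for t_i, m_i in history:
        if t_i < t:
            lam += K * np.exp(alpha * (m_i - m_c)) * (t - t_i + c) ** (-p)
    return lam

# Illustrative catalogue (days, magnitudes) and parameters, not fitted values.
history = [(0.0, 7.9), (0.3, 6.1), (1.2, 5.6)]
print(etas_intensity(2.0, history, mu=0.2, K=0.05, alpha=1.8, c=0.01, p=1.1, m_c=5.4))
```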

  14. Motivations for Parent Involvement within a Community School Setting

    ERIC Educational Resources Information Center

    Mercanti-Anthony, Michael-Joseph

    2012-01-01

    Increasingly, education reform advocates have pointed to the growing community school movement as a partial answer to the myriad challenges facing urban public education. Rooted in the ideas of John Dewey, community schools are generally defined as localized community hubs of partnerships--often serving as sources of service distribution and…

  15. Environmental and Water Quality Operational Studies. Riverine Influences on the Water Quality Characteristics of West Point Lake.

    DTIC Science & Technology

    1984-01-01

    photosynthetic productivity and the confinement of river water to intermediate depths. Increases in manganese concentration and dye distribution...reduction of suspended insoluble manganese would not account for the increase in dissolved manganese. The only available source of additional

  16. The Joint Milli-Arcsecond Pathfinder Survey (JMAPS): Mission Overview and Attitude Sensing Applications

    DTIC Science & Technology

    2009-01-01

    employs a set of reference targets such as asteroids that are relatively numerous, more or less uniformly distributed around the Sun, and relatively...point source-like. Just such a population exists—90 km-class asteroids. There are about 100 of these objects with relatively well-known orbits...These are main belt objects that are approximately evenly distributed around the Sun. They are large enough to be quasi-spherical in nature, and as a

  17. Distribution and Fate of Tributyltin in the United States Marine Environment

    DTIC Science & Technology

    Tributyltin (TBT) has been measured in water in 12 of 15 harbors studied during U.S. Navy baseline surveys. The highest concentrations of TBT (some...no detectable (5 ng dm^-3) TBT. TBT monitoring studies with increased detection limits (1 ng dm^-3) have documented a high degree of TBT variability...associated with tide, season and intermittent point source discharges. Although yacht harbors were shown to be the principal TBT source in most regions

  18. Distributed source pollutant transport module based on BTOPMC: a case study of the Laixi River basin in the Sichuan province of southwest China

    NASA Astrophysics Data System (ADS)

    Zhang, Hongbo; Ao, Tianqi; Gusyev, Maksym; Ishidaira, Hiroshi; Magome, Jun; Takeuchi, Kuniyoshi

    2018-06-01

    Nitrogen and phosphorus in Chinese river catchments originate from agricultural non-point and industrial point sources, causing deterioration of river water quality and degradation of ecosystem functioning for a long distance downstream. To evaluate these impacts, a distributed pollutant transport module was developed on the basis of BTOPMC (Block-Wise Use of TOPMODEL with Muskingum-Cunge Method), a grid-based distributed hydrological model, using the water flow routing process of BTOPMC as the carrier of pollutant transport due to direct runoff. The pollutant flux at each grid cell is simulated based on the mass balance of pollutants within the cell, and surface-water transport of these pollutants occurs between cells in the direction of the water flow on daily time steps. The model was tested in the Lu County study area, situated in the Laixi River basin in the Sichuan province of southwest China. The simulated concentrations of nitrogen and phosphorus are compared with the available monthly data at several water quality stations. These results demonstrate a greater pollutant concentration at the beginning of the high-flow period, indicating the main mechanism of pollutant transport. From these preliminary results, we suggest that the distributed pollutant transport model can reflect the characteristics of the pollutant transport and reach the expected target.

  19. Cyber-Physical Trade-Offs in Distributed Detection Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.

    2010-01-01

    We consider a network of sensors that measure the scalar intensity due to the background or a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background or due to sensor errors or both. The detection problem is to infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors, which are then combined at the fusion center, for example using the majority rule. With increased communication and computation costs, we show that a more complex fusion algorithm based on measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios and a minimum packing number for the state-space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable for two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
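
    The cyber-physical trade-off can be illustrated with a toy simulation in the spirit of this record (though not its actual algorithm): local threshold decisions fused by a majority rule are compared with centralized fusion of the raw Poisson counts. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_trials = 9, 10_000
background, source_boost = 5.0, 2.0   # mean counts; the source adds 2 counts on average

def trial(source_present):
    mean = background + (source_boost if source_present else 0.0)
    counts = rng.poisson(mean, size=n_sensors)
    # Cheap option: each sensor makes a local decision, fused by majority vote.
    majority = (counts > background + 2).sum() > n_sensors // 2
    # Costly option: ship all measurements and threshold their sum
    # (the sufficient statistic for a common Poisson mean shift).
    fused = counts.sum() > n_sensors * background + 3 * np.sqrt(n_sensors * background)
    return majority, fused

detect = np.array([trial(True) for _ in range(n_trials)]).mean(axis=0)
false_alarm = np.array([trial(False) for _ in range(n_trials)]).mean(axis=0)
print(f"majority rule:       P_det={detect[0]:.2f}, P_fa={false_alarm[0]:.2f}")
print(f"measurement fusion:  P_det={detect[1]:.2f}, P_fa={false_alarm[1]:.2f}")
```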

  20. Do gamma-ray burst sources repeat?

    NASA Technical Reports Server (NTRS)

    Meegan, C. A.; Hartmann, D. H.; Brainerd, J. J.; Briggs, M.; Paciesas, W. S.; Pendleton, G.; Kouveliotou, C.; Fishman, G.; Blumenthal, G.; Brock, M.

    1994-01-01

    The demonstration of repeated gamma-ray bursts from an individual source would severely constrain burst source models. Recent reports of evidence for repetition in the first BATSE burst catalog have generated renewed interest in this issue. Here, we analyze the angular distribution of 585 bursts of the second BATSE catalog (Meegan et al. 1994). We search for evidence of burst recurrence using the nearest and farthest neighbor statistic and the two-point angular correlation function. We find the data to be consistent with the hypothesis that burst sources do not repeat; however, a repeater fraction of up to about 20% of the bursts cannot be excluded.
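
    A minimal sketch of the nearest-neighbour part of such a test is given below: it draws an isotropic synthetic catalogue of the same size as the second BATSE catalog and computes the nearest-neighbour angular separations, against which an observed excess of very small separations would hint at repetition. Positions are random, not BATSE data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bursts = 585

# Isotropic sky positions: uniform in right ascension and in sin(declination).
ra = rng.uniform(0.0, 2.0 * np.pi, n_bursts)
dec = np.arcsin(rng.uniform(-1.0, 1.0, n_bursts))

# Unit vectors and pairwise angular separations on the sphere.
xyz = np.column_stack([np.cos(dec) * np.cos(ra),
                       np.cos(dec) * np.sin(ra),
                       np.sin(dec)])
cos_sep = np.clip(xyz @ xyz.T, -1.0, 1.0)
np.fill_diagonal(cos_sep, -1.0)                 # exclude self-pairs
nearest_deg = np.degrees(np.arccos(cos_sep.max(axis=1)))

# Repetition (or clustering) would show up as an excess of very small separations
# relative to this isotropic reference distribution.
print(f"median nearest-neighbour separation: {np.median(nearest_deg):.2f} deg")
```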

  1. Chandra Deep X-ray Observation of a Typical Galactic Plane Region and Near-Infrared Identification

    NASA Technical Reports Server (NTRS)

    Ebisawa, K.; Tsujimoto, M.; Paizis, A.; Hamaguichi, K.; Bamba, A.; Cutri, R.; Kaneda, H.; Maeda, Y.; Sato, G.; Senda, A.

    2004-01-01

    Using the Chandra Advanced CCD Imaging Spectrometer Imaging array (ACIS-I), we have carried out a deep hard X-ray observation of the Galactic plane region at (l,b) approx. (28.5 deg, 0.0 deg), where no discrete X-ray source has been reported previously. We have detected 274 new point X-ray sources (4 sigma confidence) as well as strong Galactic diffuse emission within two partially overlapping ACIS-I fields (approx. 250 sq arcmin in total). The point source sensitivity was approx. 3 x 10^-15 ergs/s/sq cm in the hard X-ray band (2-10 keV) and approx. 2 x 10^-16 ergs/s/sq cm in the soft band (0.5-2 keV). The sum of all the detected point-source fluxes accounts for only approx. 10% of the total X-ray flux in the field of view. In order to explain the total X-ray flux by a superposition of fainter point sources, an extremely rapid increase of the source population is required below our sensitivity limit, which is hard to reconcile with any source distribution in the Galactic plane. Therefore, we conclude that X-ray emission from the Galactic plane has a truly diffuse origin. Only 26 point sources were detected in both the soft and hard bands, indicating that there are two distinct classes of X-ray sources distinguished by the spectral hardness ratio. The surface number density of the hard sources is only slightly higher than that observed at high Galactic latitude regions, strongly suggesting that the majority of the hard X-ray sources are active galaxies seen through the Galactic plane. Following the Chandra observation, we have performed a near-infrared (NIR) survey with SOFI at ESO/NTT to identify these new X-ray sources. Since the Galactic plane is opaque in NIR, we did not see the background extragalactic sources in NIR. In fact, only 22% of the hard sources had NIR counterparts, which are most likely to be of Galactic origin. The composite X-ray energy spectrum of those hard X-ray sources having NIR counterparts exhibits a narrow approx. 6.7 keV iron emission line, which is a signature of Galactic quiescent cataclysmic variables (CVs).

  2. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. The identification here refers to the estimation of the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise-free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieved source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
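
    The flavour of such a retrieval can be sketched with a toy nonlinear least-squares fit: a Gaussian footprint stands in for the adjoint dispersion model, two synthetic releases are generated, and their locations and strengths are recovered from noisy receptor data. The kernel, geometry and noise level are illustrative assumptions, and (unlike the initialization-free scheme of the paper) the sketch simply starts from a crude initial guess.

```python
import numpy as np
from scipy.optimize import least_squares

def footprint(x0, y0, q, rx, ry, sigma=40.0):
    # Toy Gaussian source-receptor kernel standing in for the adjoint
    # dispersion model; q is the release strength, (x0, y0) its location.
    return q * np.exp(-((rx - x0) ** 2 + (ry - y0) ** 2) / (2.0 * sigma ** 2))

def model(p, rx, ry):
    x1, y1, q1, x2, y2, q2 = p
    return footprint(x1, y1, q1, rx, ry) + footprint(x2, y2, q2, rx, ry)

rng = np.random.default_rng(2)
rx, ry = rng.uniform(0, 500, 40), rng.uniform(0, 500, 40)   # receptor positions [m]
true = np.array([120.0, 200.0, 3.0, 380.0, 320.0, 1.5])     # two releases (x, y, q)
obs = model(true, rx, ry) + rng.normal(0.0, 0.02, rx.size)  # noisy measurements

fit = least_squares(lambda p: model(p, rx, ry) - obs,
                    x0=[100.0, 100.0, 1.0, 400.0, 400.0, 1.0])
print(fit.x.round(1))   # retrieved locations [m] and strengths
```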

  3. Nature of the Diffuse Source and Its Central Point-like Source in SNR 0509–67.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litke, Katrina C.; Chu, You-Hua; Holmes, Abigail

    We examine a diffuse emission region near the center of SNR 0509−67.5 to determine its nature. Within this diffuse region we observe a point-like source that is bright in the near-IR, but is not visible in the B and V bands. We consider an emission line observed at 6766 Å and the possibilities that it is Lyα, Hα, or [O II] λ3727. We examine the spectral energy distribution (SED) of the source, comprised of Hubble Space Telescope B, V, I, J, and H bands in addition to Spitzer/IRAC 3.6, 4.5, 5.8, and 8 μm bands. The peak of the SED is consistent with a background galaxy at z ≈ 0.8 ± 0.2 and a possible Balmer jump places the galaxy at z ≈ 0.9 ± 0.3. These SED considerations support the emission line's identification as [O II] λ3727. We conclude that the diffuse source in SNR 0509−67.5 is a background galaxy at z ≈ 0.82. Furthermore, we identify the point-like source superposed near the center of the galaxy as its central bulge. Finally, we find no evidence for a surviving companion star, indicating a double-degenerate origin for SNR 0509−67.5.

  4. A First Estimate of the X-Ray Binary Frequency as a Function of Star Cluster Mass in a Single Galactic System

    NASA Astrophysics Data System (ADS)

    Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.

    2008-05-01

    We use the previously identified 15 infrared star cluster counterparts to X-ray point sources in the interacting galaxies NGC 4038/4039 (the Antennae) to study the relationship between total cluster mass and X-ray binary number. This significant population of X-ray/IR associations allows us to perform, for the first time, a statistical study of X-ray point sources and their environments. We define a quantity, η, relating the fraction of X-ray sources per unit mass as a function of cluster mass in the Antennae. We compute cluster mass by fitting spectral evolutionary models to K_s luminosity. Considering that this method depends on cluster age, we use four different age distributions to explore the effects of cluster age on the value of η and find it varies by less than a factor of 4. We find a mean value of η for these different distributions of η = 1.7 × 10^-8 M_⊙^-1 with σ_η = 1.2 × 10^-8 M_⊙^-1. Performing a χ^2 test, we demonstrate η could exhibit a positive slope, but that it depends on the assumed distribution in cluster ages. While the estimated uncertainties in η are factors of a few, we believe this is the first estimate made of this quantity to "order of magnitude" accuracy. We also compare our findings to theoretical models of open and globular cluster evolution, incorporating the X-ray binary fraction per cluster.

  5. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and others. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.
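
    To make the MapReduce decomposition concrete, here is a minimal Hadoop Streaming style mapper/reducer pair that bins ASCII LiDAR points (x y z per line) into tiles and counts points per tile. It only illustrates the split-apply-combine pattern; the framework described in the record integrates PCL algorithms rather than this toy job, and the tile size and file layout are assumptions.

```python
# Toy Hadoop Streaming job: tile-wise point counts for an ASCII point cloud.
import sys

TILE = 100.0  # tile edge length in the point cloud's coordinate units (assumed)

def mapper(stream):
    """Emit 'tile_x,tile_y<TAB>1' for every LiDAR point (x y z per input line)."""
    for line in stream:
        try:
            x, y, _z = map(float, line.split()[:3])
        except ValueError:
            continue                                  # skip malformed records
        print(f"{int(x // TILE)},{int(y // TILE)}\t1")

def reducer(stream):
    """Sum per-tile counts; Hadoop delivers the mapper output sorted by key."""
    current, total = None, 0
    for line in stream:
        key, value = line.rstrip("\n").split("\t")
        if key != current and current is not None:
            print(f"{current}\t{total}")
            total = 0
        current = key
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if role == "map" else reducer)(sys.stdin)
```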

  6. Continuous description of fluctuating eccentricities

    NASA Astrophysics Data System (ADS)

    Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves

    2014-11-01

    We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ(x), whose probability distribution P[ρ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
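
    For concreteness, the sketch below estimates the mean and variance of the second eccentricity for an independent-source model of the kind mentioned as the illustration, using the standard definition of ε_n computed about the centre of mass of each event; the source number and profile width are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def eccentricity(x, y, w, n=2):
    """epsilon_n of a weighted point set, measured about its centre of mass."""
    xc, yc = np.average(x, weights=w), np.average(y, weights=w)
    z = (x - xc) + 1j * (y - yc)
    return abs(np.sum(w * np.abs(z) ** n * np.exp(1j * n * np.angle(z)))
               / np.sum(w * np.abs(z) ** n))

# Independent-source model: each event is a Poisson number of point-like sources
# drawn from a Gaussian transverse profile (parameters illustrative).
eps2 = []
for _ in range(2000):
    n_src = rng.poisson(100)
    x, y = rng.normal(0.0, 3.0, n_src), rng.normal(0.0, 3.0, n_src)
    eps2.append(eccentricity(x, y, np.ones(n_src)))
eps2 = np.array(eps2)
print(f"<eps_2> = {eps2.mean():.3f}, var(eps_2) = {eps2.var():.4f}")
```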

  7. Phase 3 experiments of the JAERI/USDOE collaborative program on fusion blanket neutronics. Volume 1: Experiment

    NASA Astrophysics Data System (ADS)

    Oyama, Yukio; Konno, Chikara; Ikeda, Yujiro; Maekawa, Fujio; Kosako, Kazuaki; Nakamura, Tomoo; Maekawa, Hiroshi; Youssef, Mahmoud Z.; Kumar, Anil; Abdou, Mohamed A.

    1994-02-01

    A pseudo-line source has been realized by using an accelerator-based D-T point neutron source. The pseudo-line source is obtained by time averaging of a continuously moving point source or by superposition of finely distributed point sources. The line source is utilized for fusion blanket neutronics experiments with an annular geometry so as to simulate a part of a tokamak reactor. The source neutron characteristics were measured for two operational modes of the line source, continuous and step-wise modes, with activation foil and NE213 detectors, respectively. In order to give a source condition for a subsequent calculational analysis of the annular blanket experiment, the neutron source characteristics were calculated by a Monte Carlo code. The reliability of the Monte Carlo calculation was confirmed by comparison with the measured source characteristics. The shape of the annular blanket system was rectangular with an inner cavity. The annular blanket consisted of a 15 mm-thick first wall (SS304) and a 406 mm-thick breeder zone with Li2O inside and Li2CO3 outside. The line source was produced at the center of the inner cavity by moving the annular blanket system over a span of 2 m. Three annular blanket configurations were examined: the reference blanket, the blanket covered with a 25 mm-thick graphite armor, and the armor-blanket with a large opening. The neutronics parameters of tritium production rate, neutron spectrum and activation reaction rate were measured with specially developed techniques such as a multi-detector data acquisition system, a spectrum weighting function method and a ramp-controlled high voltage system. The present experiment provides unique data for a further step of benchmarking to test the reliability of neutronics design calculations for a realistic tokamak reactor.

  8. Study of landscape patterns of variation and optimization based on non-point source pollution control in an estuary.

    PubMed

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui; Wu, Haiyan

    2014-10-15

    Appropriate increases in the "sink" of a landscape can reduce the risk of non-point source pollution (NPSP) to the sea at relatively lower costs and at a higher efficiency. Based on high-resolution remote sensing image data taken between 2003 and 2008, we analyzed the "source" and "sink" landscape pattern variations of nitrogen and phosphorus pollutants in the Jiulongjiang estuary region. The contribution to the sea and the distribution of each pollutant in the region were calculated using the LCI and mGLCI models. The results indicated that an increased amount of pollutants was delivered to the sea, and the "source" area of nitrogen NPSP in the study area increased by 32.75 km^2. We also propose a landscape pattern optimization to reduce pollution in the Jiulongjiang estuary in 2008 through the conversion of cultivated land with slopes greater than 15° and paddy fields near rivers, and an increase in mangrove areas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. C-point and V-point singularity lattice formation and index sign conversion methods

    NASA Astrophysics Data System (ADS)

    Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.

    2017-06-01

    The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution, with C-point indices (-1/2), (+1/2) and (+1/2), respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by Poincare-Hopf indices of integer values. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further, there is no change in the lattice structure and the C- and V-points appear at locations where they were present earlier. Hence to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly, a positive V-point can be converted to a negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an orderly fashion. It shows degeneracy as long as the SOPs of the three beams are drawn from polarization distributions that have a Poincare-Hopf index of the same magnitude. Various topological aspects of these lattices are presented with the help of the Stokes field S12, which is constructed using the generalized Stokes parameters of fully polarized light. We envisage that such polarization lattice structures may lead to novel concepts of structured polarization illumination methods in super-resolution microscopy.
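
    A rough numerical sketch of the underlying idea (three interfering plane waves in different linear polarization states producing a lattice of polarization singularities) is given below; it computes the Stokes fields of the superposition and flags candidate C-points as near-zeros of S12 = S1 + iS2. The tilt angle, wavelength and Jones vectors are arbitrary, and the S-wave-plate geometry of the actual experiment is not modelled.

```python
import numpy as np

lam = 632.8e-9                    # wavelength [m] (illustrative)
k0 = 2.0 * np.pi / lam
tilt = 0.01                       # small tilt of each beam toward the axis [rad]

# Transverse wavevectors 120 degrees apart and three distinct linear SOPs.
angles = np.deg2rad([90.0, 210.0, 330.0])
kt = k0 * tilt * np.column_stack([np.cos(angles), np.sin(angles)])
jones = [np.array([1.0, 0.0]),
         np.array([np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)]),
         np.array([np.cos(4 * np.pi / 3), np.sin(4 * np.pi / 3)])]

x = np.linspace(-100e-6, 100e-6, 400)
X, Y = np.meshgrid(x, x)
Ex = np.zeros_like(X, dtype=complex)
Ey = np.zeros_like(X, dtype=complex)
for (kx, ky), J in zip(kt, jones):
    phase = np.exp(1j * (kx * X + ky * Y))
    Ex += J[0] * phase
    Ey += J[1] * phase

# Stokes fields; C-points are the phase singularities of S12 = S1 + i*S2.
S1 = np.abs(Ex) ** 2 - np.abs(Ey) ** 2
S2 = 2.0 * np.real(np.conj(Ex) * Ey)
S12 = S1 + 1j * S2
c_point_candidates = np.abs(S12) < 0.01 * np.abs(S12).max()
print("fraction of pixels flagged as C-point candidates:", c_point_candidates.mean())
```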

  10. Wideband RELAX and wideband CLEAN for aeroacoustic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.
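
    The conventional delay-and-sum map that WB-RELAX and WB-CLEAN improve upon can be sketched in a few lines: a single narrowband snapshot from a simulated point source is focused onto a scan grid, and the peak gives the source estimate (with the sidelobes that the paper's methods are designed to avoid). Array geometry, frequency and noise level are arbitrary choices.

```python
import numpy as np

c, f = 343.0, 4000.0                          # sound speed [m/s], frequency [Hz]
k = 2.0 * np.pi * f / c
rng = np.random.default_rng(4)

mics = rng.uniform(-0.5, 0.5, (32, 2))        # planar array at z = 0 [m]
mics3 = np.c_[mics, np.zeros(32)]
src = np.array([0.2, -0.1, 1.5])              # true point-source location [m]

# One narrowband snapshot: spherical wave from the source plus sensor noise.
d_src = np.linalg.norm(mics3 - src, axis=1)
snap = np.exp(-1j * k * d_src) / d_src \
       + 0.01 * (rng.normal(size=32) + 1j * rng.normal(size=32))

# Delay-and-sum (focused beamforming) map on a scan plane at z = 1.5 m.
xs = np.linspace(-0.5, 0.5, 81)
power = np.zeros((xs.size, xs.size))
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        d = np.linalg.norm(mics3 - np.array([x, y, 1.5]), axis=1)
        steer = np.exp(1j * k * d) * d        # undo propagation to the scan point
        power[i, j] = np.abs(steer @ snap) ** 2
ix, iy = np.unravel_index(power.argmax(), power.shape)
print("estimated source position:", xs[ix], xs[iy])
```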

  11. Wideband RELAX and wideband CLEAN for aeroacoustic imaging.

    PubMed

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  12. Mercury in mosses Hylocomium splendens (Hedw.) B.S.G. and Pleurozium schreberi (Brid.) Mitt. from Poland and Alaska: Understanding the origin of pollution sources

    USGS Publications Warehouse

    Migaszewski, Z.M.; Galuszka, A.; Dołęgowska, S.; Crock, J.G.; Lamothe, P.J.

    2010-01-01

    This report shows baseline concentrations of mercury in the moss species Hylocomium splendens and Pleurozium schreberi from the Kielce area and the remaining Holy Cross Mountains (HCM) region (south-central Poland), and Wrangell-Saint Elias National Park and Preserve (Alaska) and Denali National Park and Preserve (Alaska). Like mosses from many European countries, Polish mosses were distinctly elevated in Hg, bearing a signature of cross-border atmospheric transport combined with local point sources. In contrast, Alaskan mosses showed lower Hg levels, reflecting mostly the underlying geology. Compared to HCM, Alaskan and Kielce mosses exhibited more uneven spatial distribution patterns of Hg. This variation is linked to topography and location of local point sources (Kielce) and underlying geology (Alaska). Both H. splendens and P. schreberi showed similar bioaccumulative capabilities of Hg in all four study areas. © 2010 Elsevier Inc.

  13. Evaluation of ground-water quality in the Santa Maria Valley, California

    USGS Publications Warehouse

    Hughes, Jerry L.

    1977-01-01

    The quality and quantity of recharge to the Santa Maria Valley, Calif., ground-water basin from natural sources, point sources, and agriculture are expressed in terms of a hydrologic budget, a solute balance, and maps showing the distribution of select chemical constituents. Point sources include a sugar-beet refinery, oil refineries, stockyards, golf courses, poultry farms, solid-waste landfills, and municipal and industrial wastewater-treatment facilities. Pumpage has exceeded recharge by about 10,000 acre-feet per year. The result is a declining potentiometric surface with an accumulation of solutes and an increase in nitrogen in ground water. Nitrogen concentrations have reached as much as 50 milligrams per liter. In comparison to the solutes from irrigation return, natural recharge, and rain, discharge of wastewater from municipal and industrial wastewater-treatment facilities contributes less than 10 percent. The quality of treated wastewater is often lower in select chemical constituents than the receiving water. (Woodard-USGS)

  14. A program to calculate pulse transmission responses through transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei

    2018-05-01

    We provide a program (AOTI2D) to model responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built with the distributed point source method that treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program offers basic wave parameters including phase and group velocities, polarization, anisotropic reflection coefficients and directivity patterns, and models the wave fields, static wave beam, and the observed signals for pulse transmission measurements, taking into account the material's elastic stiffnesses and orientations, sample dimensions, and the size and positions of the transmitters and receivers. The program can be applied to exhibit the ultrasonic beam behaviors in anisotropic media, such as the skew and diffraction of ultrasonic beams, and to analyze their effect on pulse transmission measurements. The program would be a useful tool to help design the experimental configuration and interpret the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
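
    The core idea of the distributed point source method can be sketched independently of AOTI2D: the transducer face is discretized into many point sources and their spherical-wave contributions are summed at each field point. The sketch below does this for an isotropic medium (the program itself handles the transversely isotropic case with ray-traced anisotropic wave parameters); all numbers are illustrative.

```python
import numpy as np

c, f = 2500.0, 1.0e6                  # P-wave speed [m/s] and frequency [Hz], illustrative
k = 2.0 * np.pi * f / c

# Circular transducer face of 6 mm radius discretized into point sources.
h = 0.25e-3
gx, gy = np.meshgrid(np.arange(-6e-3, 6e-3, h), np.arange(-6e-3, 6e-3, h))
mask = gx ** 2 + gy ** 2 <= (6e-3) ** 2
sx, sy = gx[mask], gy[mask]

def pressure(x, y, z):
    """Sum the spherical-wave contributions of all point sources at (x, y, z)."""
    r = np.sqrt((x - sx) ** 2 + (y - sy) ** 2 + z ** 2)
    return np.sum(np.exp(1j * k * r) / r)

# On-axis field: near-field oscillations give way to the far-field decay.
for z in (5e-3, 20e-3, 60e-3):
    print(f"z = {1e3 * z:4.0f} mm   |p| = {abs(pressure(0.0, 0.0, z)):.1f}")
```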

  15. Note: Precise radial distribution of charged particles in a magnetic guiding field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backe, H., E-mail: backe@kph.uni-mainz.de

    2015-07-15

    Current high precision beta decay experiments on polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can numerically be evaluated with arbitrary precision.
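
    The quantity under discussion is easy to visualize with a brute-force Monte Carlo stand-in for the analytical PSF: electrons emitted isotropically from a point source gyrate in a homogeneous field, and their radial displacement at the detector plane follows from the accumulated gyrophase. The field strength, drift distance and energy below are arbitrary, and the electron is treated non-relativistically for brevity, so this only illustrates where the radial probability piles up, not the precision expansion of the note.

```python
import numpy as np

rng = np.random.default_rng(5)

m_e, q_e = 9.109e-31, 1.602e-19        # electron mass [kg] and charge [C]
B, L = 1.0, 0.2                        # guiding field [T], source-detector distance [m]
E_kin = 50e3 * q_e                     # 50 keV, treated non-relativistically for brevity
v = np.sqrt(2.0 * E_kin / m_e)

# Isotropic emission into the hemisphere facing the detector.
cos_t = rng.uniform(1e-3, 1.0, 1_000_000)
sin_t = np.sqrt(1.0 - cos_t ** 2)

rho = m_e * v * sin_t / (q_e * B)      # gyroradius of each electron
omega = q_e * B / m_e                  # cyclotron frequency
phase = omega * L / (v * cos_t)        # gyrophase accumulated en route to the detector
r = 2.0 * rho * np.abs(np.sin(phase / 2.0))   # radial displacement at the detector plane

hist, edges = np.histogram(r, bins=200, density=True)
peak = hist.argmax()
print(f"most populated radial bin: {1e3 * 0.5 * (edges[peak] + edges[peak + 1]):.2f} mm")
```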

  16. Support of Multidimensional Parallelism in the OpenMP Programming Model

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele

    2003-01-01

    OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test with benchmark codes and a cloud modeling code.

  17. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
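
    The angulation step at the end (intersecting bearing rays measured by different base stations) reduces to a small linear solve; the 2D sketch below locates a sensor from two azimuth measurements with a small simulated DOA error. The geometry and error size are made up for illustration.

```python
import numpy as np

def locate(p1, az1, p2, az2):
    """Intersect two bearing rays (2D angulation): p_i are base-station positions,
    az_i the measured azimuths in radians from the x-axis."""
    d1 = np.array([np.cos(az1), np.sin(az1)])
    d2 = np.array([np.cos(az2), np.sin(az2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1

bs1, bs2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])   # base stations [m]
sensor = np.array([4.0, 3.0])                            # true wearable-sensor position
az1 = np.arctan2(*(sensor - bs1)[::-1])
az2 = np.arctan2(*(sensor - bs2)[::-1])
print(locate(bs1, az1 + 0.01, bs2, az2 - 0.01))          # small DOA errors give small position error
```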

  18. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.

  19. Development and Characterization of a Laser-Induced Acoustic Desorption Source.

    PubMed

    Huang, Zhipeng; Ossenbrüggen, Tim; Rubinsky, Igor; Schust, Matthias; Horke, Daniel A; Küpper, Jochen

    2018-03-20

    A laser-induced acoustic desorption source, developed for use at central facilities, such as free-electron lasers, is presented. It features prolonged measurement times and a fixed interaction point. A novel sample deposition method using aerosol spraying provides a uniform sample coverage and hence stable signal intensity. Utilizing strong-field ionization as a universal detection scheme, the produced molecular plume is characterized in terms of number density, spatial extent, fragmentation, temporal distribution, translational velocity, and translational temperature. The effect of desorption laser intensity on these plume properties is evaluated. While the translational velocity is invariant for different desorption laser intensities, pointing to a nonthermal desorption mechanism, the translational temperature increases significantly and higher fragmentation is observed with increased desorption laser fluence.

  20. Reliability and longitudinal change of detrital-zircon age spectra in the Snake River system, Idaho and Wyoming: An example of reproducing the bumpy barcode

    NASA Astrophysics Data System (ADS)

    Link, Paul Karl; Fanning, C. Mark; Beranek, Luke P.

    2005-12-01

    Detrital-zircon age-spectra effectively define provenance in Holocene and Neogene fluvial sands from the Snake River system of the northern Rockies, U.S.A. SHRIMP U-Pb dates have been measured for forty-six samples (about 2700 zircon grains) of fluvial and aeolian sediment. The detrital-zircon age distributions are repeatable and demonstrate predictable longitudinal variation. By lumping multiple samples to attain populations of several hundred grains, we recognize distinctive, provenance-defining zircon-age distributions or "barcodes," for fluvial sedimentary systems of several scales, within the upper and middle Snake River system. Our detrital-zircon studies effectively define the geochronology of the northern Rocky Mountains. The composite detrital-zircon grain distribution of the middle Snake River consists of major populations of Neogene, Eocene, and Cretaceous magmatic grains plus intermediate and small grain populations of multiply recycled Grenville (˜950 to 1300 Ma) grains and Yavapai-Mazatzal province grains (˜1600 to 1800 Ma) recycled through the upper Belt Supergroup and Cretaceous sandstones. A wide range of older Paleoproterozoic and Archean grains are also present. The best-case scenario for using detrital-zircon populations to isolate provenance is when there is a point-source pluton with known age, that is only found in one location or drainage. We find three such zircon age-populations in fluvial sediments downstream from the point-source plutons: Ordovician in the southern Beaverhead Mountains, Jurassic in northern Nevada, and Oligocene in the Albion Mountains core complex of southern Idaho. Large detrital-zircon age-populations derived from regionally well-defined, magmatic or recycled sedimentary, sources also serve to delimit the provenance of Neogene fluvial systems. In the Snake River system, defining populations include those derived from Cretaceous Atlanta lobe of the Idaho batholith (80 to 100 Ma), Eocene Challis Volcanic Group and associated plutons (˜45 to 52 Ma), and Neogene rhyolitic Yellowstone-Snake River Plain volcanics (˜0 to 17 Ma). For first-order drainage basins containing these zircon-rich source terranes, or containing a point-source pluton, a 60-grain random sample is sufficient to define the dominant provenance. The most difficult age-distributions to analyze are those that contain multiple small zircon age-populations and no defining large populations. Examples of these include streams draining the Proterozoic and Paleozoic Cordilleran miogeocline in eastern Idaho and Pleistocene loess on the Snake River Plain. For such systems, large sample bases of hundreds of grains, plus the use of statistical methods, may be necessary to distinguish detrital-zircon age-spectra.
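
    One common statistical tool for asking whether two such "barcodes" differ is a two-sample Kolmogorov-Smirnov test on the grain-age distributions; the sketch below applies it to two synthetic 60-grain samples built from Gaussian age populations loosely named after sources mentioned in this record. The mixtures and sample sizes are illustrative, not the measured spectra.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)

def synthetic_sample(n, populations):
    """Draw n grain ages [Ma] from a mixture of Gaussian age populations,
    each given as (mean, sigma, weight)."""
    means, sigmas, weights = map(np.array, zip(*populations))
    idx = rng.choice(len(populations), size=n, p=weights / weights.sum())
    return rng.normal(means[idx], sigmas[idx])

# Two hypothetical drainages sharing Challis (~48 Ma) and Idaho-batholith (~90 Ma)
# populations; the second also carries a recycled Grenville-age (~1100 Ma) component.
a = synthetic_sample(60, [(48, 3, 0.5), (90, 5, 0.5)])
b = synthetic_sample(60, [(48, 3, 0.4), (90, 5, 0.3), (1100, 80, 0.3)])

result = ks_2samp(a, b)
print(f"K-S statistic = {result.statistic:.2f}, p = {result.pvalue:.3g}")
```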

  1. Modeling diffuse phosphorus emissions to assist in best management practice designing

    NASA Astrophysics Data System (ADS)

    Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne

    2010-05-01

    A diffuse emission modeling tool has been developed, which is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning best management practices (BMPs) in catchments and simulating their possible impacts on the phosphorus (P) loads. PhosFate is a simple fate model to calculate diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment scale, distributed-parameter and long-term (annual) average model. It has two main parts: (a) the emission and (b) the transport model. The main input data of the model are digital maps (elevation, soil types and landuse categories), statistical data (crop yields, animal numbers, fertilizer amounts and precipitation distribution) and point information (precipitation, meteorology, soil humus content, point source emissions and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes the dissolved and the particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion and leaching). The main outputs are the spatial distribution (cell values) of the runoff components, the soil loss and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms is computed. In the case of base flow and subsurface P loads, only channel transport is taken into account owing to the poorly known hydrogeological conditions. During the channel transport, point sources and reservoirs are also considered. The main results of the transport algorithm are the discharge, dissolved and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built up to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes, and calculate their approximate costs. Both source- and transport-controlling measures have been included in the planning procedure. The model also allows examining the impacts of alterations of fertilizer application, point source emissions, as well as climate change, on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots), which should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared to each other. The results show that the approach is suitable to effectively design BMP measures at local scale. Combined application of source- and transport-controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can remarkably reduce the area demand of the necessary BMPs; consequently, the establishment costs can be decreased. The model can be coupled with a larger scale catchment model to form a "screening and planning" modeling system.
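
    The transport step (accumulating emitted P from every cell to the outlet along the flow tree while retaining part of the load in each cell passed) can be caricatured in a few lines; the cell topology, emissions and retention factor below are invented for illustration and greatly simplify what PhosFate actually computes.

```python
# Toy flow-tree accumulation: each cell's local P emission is routed downstream,
# losing a fixed fraction to retention at every cell it passes through.
downstream = {1: 3, 2: 3, 3: 5, 4: 5, 5: None}        # cell -> next cell (None = outlet)
emission = {1: 2.0, 2: 1.5, 3: 0.5, 4: 3.0, 5: 0.2}   # local P emission [kg/yr]
retention = 0.15                                      # fraction retained per cell passed

delivered = {cell: 0.0 for cell in downstream}
for cell, load in emission.items():
    node = cell
    while node is not None:
        delivered[node] += load
        load *= (1.0 - retention)       # retention while moving one cell downstream
        node = downstream[node]

print(delivered)   # accumulated load at every cell; delivered[5] is the outlet load
```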

  2. The trans-continental distributions of pentachlorophenol and pentachloroanisole in pine needles indicate separate origins.

    PubMed

    Kylin, Henrik; Svensson, Teresia; Jensen, Sören; Strachan, William M J; Franich, Robert; Bouwman, Hindrik

    2017-10-01

    The production and use of pentachlorophenol (PCP) was recently prohibited/restricted by the Stockholm Convention on persistent organic pollutants (POPs), but environmental data are few and of varying quality. We here present the first extensive dataset of the continent-wide (Eurasia and Canada) occurrence of PCP and its methylation product pentachloroanisole (PCA) in the environment, specifically in pine needles. The highest concentrations of PCP were found close to expected point sources, while PCA chiefly shows a northern and/or coastal distribution not correlating with PCP distribution. Although long-range transport and environmental methylation of PCP or formation from other precursors cannot be excluded, the distribution patterns suggest that such processes may not be the only source of PCA to remote regions and unknown sources should be sought. We suggest that natural sources, e.g., chlorination of organic matter in Boreal forest soils enhanced by chloride deposition from marine sources, should be investigated as a possible partial explanation of the observed distributions. The results show that neither PCA nor total PCP (ΣPCP = PCP + PCA) should be used to approximate the concentrations of PCP; PCP and PCA must be determined and quantified separately to understand their occurrence and fate in the environment. The background work shows that the accumulation of airborne POPs in plants is a complex process. The variations in life cycles and physiological adaptations have to be taken into account when using plants to evaluate the concentrations of POPs in remote areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Wave propagation in anisotropic medium due to an oscillatory point source with application to unidirectional composites

    NASA Technical Reports Server (NTRS)

    Williams, J. H., Jr.; Marques, E. R. C.; Lee, S. S.

    1986-01-01

    The far-field displacements in an infinite transversely isotropic elastic medium subjected to an oscillatory concentrated force are derived. The concepts of velocity surface, slowness surface and wave surface are used to describe the geometry of the wave propagation process. It is shown that the decay of the wave amplitudes depends not only on the distance from the source (as in isotropic media) but also on the direction of the point of interest from the source. As an example, the displacement field is computed for a laboratory-fabricated unidirectional fiberglass epoxy composite. The solution for the displacements is expressed as an amplitude distribution and is presented in polar diagrams. This analysis has potential usefulness in the acoustic emission (AE) and ultrasonic nondestructive evaluation of composite materials. For example, the transient localized disturbances which are generally associated with AE sources can be modeled via this analysis. In that case, knowledge of the displacement field which arrives at a receiving transducer allows inferences regarding the strength and orientation of the source, and consequently perhaps the degree of damage within the composite.

  4. Full-wave generalizations of the fundamental Gaussian beam.

    PubMed

    Seshadri, S R

    2009-12-01

    The basic full wave corresponding to the fundamental Gaussian beam was discovered for the outwardly propagating wave in a half-space by the introduction of a source in the complex space. There is a class of extended full waves all of which reduce to the same fundamental Gaussian beam in the appropriate limit. For the extended full Gaussian waves that include the basic full Gaussian wave as a special case, the sources are in the complex space on different planes transverse to the propagation direction. The sources are cylindrically symmetric Gaussian distributions centered at the origin of the transverse planes, the axis of symmetry being the propagation direction. For the special case of the basic full Gaussian wave, the source is a point source. The radiation intensity of the extended full Gaussian waves is determined and their characteristics are discussed and compared with those of the fundamental Gaussian beam. The extended full Gaussian waves are also obtained for the oppositely propagating outwardly directed waves in the second half-space. The radiation intensity distributions in the two half-spaces have reflection symmetry about the midplane. The radiation intensity distributions of the various extended full Gaussian waves are not significantly different. The power carried by the extended full Gaussian waves is evaluated and compared with that of the fundamental Gaussian beam.

  5. Extending Marine Species Distribution Maps Using Non-Traditional Sources

    PubMed Central

    Moretzsohn, Fabio; Gibeaut, James

    2015-01-01

    Abstract Background Traditional sources of species occurrence data such as peer-reviewed journal articles and museum-curated collections are included in species databases after rigorous review by species experts and evaluators. The distribution maps created in this process are an important component of species survival evaluations, and are used to adapt, extend and sometimes contract polygons used in the distribution mapping process. New information During an IUCN Red List Gulf of Mexico Fishes Assessment Workshop held at The Harte Research Institute for Gulf of Mexico Studies, a session included an open discussion on the topic of including other sources of species occurrence data. During the last decade, advances in portable electronic devices and applications enable 'citizen scientists' to record images, location and data about species sightings, and submit that data to larger species databases. These applications typically generate point data. Attendees of the workshop expressed an interest in how that data could be incorporated into existing datasets, how best to ascertain the quality and value of that data, and what other alternate data sources are available. This paper addresses those issues, and provides recommendations to ensure quality data use. PMID:25941453

  6. A computer program to evaluate optical systems

    NASA Technical Reports Server (NTRS)

    Innes, D.

    1972-01-01

    A computer program is used to evaluate a 25.4 cm X-ray telescope at a field angle of 20 minutes of arc by geometrical analysis. The object is regarded as a point source of electromagnetic radiation, and the optical surfaces are treated as boundary conditions in the solution of the electromagnetic wave propagation equation. The electric field distribution is then determined in the region of the image and the intensity distribution inferred. A comparison of wave analysis results and photographs taken through the telescope shows excellent agreement.

  7. Skin dose from radionuclide contamination on clothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, D.C.; Hussein, E.M.A.; Yuen, P.S.

    1997-06-01

    Skin dose due to radionuclide contamination on clothing is calculated by Monte Carlo simulation of electron and photon radiation transport. Contamination due to a hot particle on selected clothing geometries of a cotton garment is simulated. The effect of backscattering in the surrounding air is taken into account. For each combination of source-clothing geometry, the dose distribution function in the skin, including the dose at tissue depths of 7 mg cm^-2 and 1,000 mg cm^-2, is calculated by simulating monoenergetic photon and electron sources. Skin dose due to contamination by a radionuclide is then determined by proper weighting of the monoenergetic dose distribution functions. The results are compared with the VARSKIN point-kernel code for some radionuclides, indicating that the latter code tends to underestimate the dose for gamma and high-energy beta sources while it overestimates skin dose for low-energy beta sources. 13 refs., 4 figs., 2 tabs.

  8. Status, upgrades, and advances of RTS2: the open source astronomical observatory manager

    NASA Astrophysics Data System (ADS)

    Kubánek, Petr

    2016-07-01

    RTS2 is an open source observatory control system. Under development since early 2000, it has continued to receive new features over the last two years. RTS2 is a modular, network-based distributed control system, featuring telescope drivers with advanced tracking and pointing capabilities, fast camera drivers and high-level modules for the "business logic" of the observatory, connected to a SQL database. Running on all continents of the planet, it has accumulated a large set of features for controlling partial or full observatory setups.

  9. Heavy metals in marine coastal sediments: assessing sources, fluxes, history and trends.

    PubMed

    Frignani, Mauro; Bellucci, Luca Giorgio

    2004-01-01

    Examples are presented from the Adriatic Sea, the Ligurian Sea and the Venice Lagoon to illustrate different approaches to the study of anthropogenic metals in marine coastal sediments. These examples refer to studies of areal distribution and transport mechanisms, identification of the sources, sediment dating, chronology of the fluxes, and present and past trends. In particular, some of the findings from studies of the Venice Lagoon are discussed from the point of view of anthropogenic changes in both sediment composition and contaminant fluxes.

  10. 3DFEMWATER: A three-dimensional finite element model of water flow through saturated-unsaturated media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1987-08-01

    The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired; consider both distributed and point sources/sinks that are spatially and temporally dependent; accept the prescribed initial conditions or obtain them by simulating a steady-state version of the system under consideration; deal with a transient head distributed over the Dirichlet boundary; handle time-dependent fluxes due to pressure gradient varying along the Neumann boundary; treat time-dependent total fluxes distributed over the Cauchy boundary; automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface; include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor; give three options for estimating the nonlinear matrix; include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations; automatically reset the time step size when boundary conditions or sources/sinks change abruptly; and check the mass balance computation over the entire region for every time step. The model is verified with analytical solutions or other numerical models for three examples.

  11. Analysis of ultrasonically rotating droplet using moving particle semi-implicit and distributed point source methods

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2016-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomena, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically and its acceleration is discussed and compared with those in the literature.

  12. Advection-diffusion model for the simulation of air pollution distribution from a point source emission

    NASA Astrophysics Data System (ADS)

    Ulfah, S.; Awalludin, S. A.; Wahidin

    2018-01-01

    The advection-diffusion model is one of the mathematical models that can be used to understand the distribution of air pollutants in the atmosphere. This study uses a 2D time-dependent advection-diffusion model to simulate the air pollution distribution, in order to find out whether the pollutants are more concentrated at ground level or near the source of emission under particular atmospheric conditions such as stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered in the model as parameters. The model is solved using an explicit finite difference method and then visualized by a computer program developed using the Lazarus programming software. The results show that atmospheric conditions alone do not conclusively determine the level of pollutant concentration, as the parameters in the model each have their own effect under each atmospheric condition.
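
    The following is a minimal Python sketch of the kind of explicit finite-difference scheme the abstract describes for a 2D time-dependent advection-diffusion equation with a continuous point source. The grid size, wind components, eddy diffusivity and source strength are illustrative values, not the paper's parameters, and the boundary treatment is deliberately simple.

        import numpy as np

        # Forward-Euler, upwind finite differences for dC/dt = K*laplacian(C) - u*dC/dx - v*dC/dy
        nx, ny, dx, dt = 100, 50, 10.0, 0.5          # grid points, spacing [m], time step [s]
        u, v, K = 2.0, 0.5, 5.0                      # wind [m/s] and eddy diffusivity [m^2/s]
        C = np.zeros((ny, nx))                       # pollutant concentration field
        src_j, src_i, Q = 25, 5, 1.0                 # point-source cell and emission rate (illustrative)

        for _ in range(2000):
            C[src_j, src_i] += Q * dt / dx**2        # continuous point-source emission
            lap = (np.roll(C, 1, 0) + np.roll(C, -1, 0) +
                   np.roll(C, 1, 1) + np.roll(C, -1, 1) - 4 * C) / dx**2
            dCdx = (C - np.roll(C, 1, 1)) / dx       # upwind differences (u, v > 0 assumed)
            dCdy = (C - np.roll(C, 1, 0)) / dx
            C = C + dt * (K * lap - u * dCdx - v * dCdy)
            C[:, 0] = C[:, -1] = C[0, :] = C[-1, :] = 0.0   # simple open boundaries

        print("max concentration:", C.max())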

  13. Patterns and age distribution of ground-water flow to streams

    USGS Publications Warehouse

    Modica, E.; Reilly, T.E.; Pollock, D.W.

    1997-01-01

    Simulations of ground-water flow in a generic aquifer system were made to characterize the topology of ground-water flow in the stream subsystem and to evaluate its relation to deeper ground-water flow. The flow models are patterned after hydraulic characteristics of aquifers of the Atlantic Coastal Plain and are based on numerical solutions to three-dimensional, steady-state, unconfined flow. The models were used to evaluate the effects of aquifer horizontal-to-vertical hydraulic conductivity ratios, aquifer thickness, and areal recharge rates on flow in the stream subsystem. A particle tracker was used to determine flow paths in a stream subsystem, to establish the relation between ground-water seepage to points along a simulated stream and its source area of flow, and to determine ground-water residence time in stream subsystems. In a geometrically simple aquifer system with accretion, the source area of flow to streams resembles an elongated ellipse that tapers in the downgradient direction. Increased recharge causes an expansion of the stream subsystem. The source area of flow to the stream expands predominantly toward the stream headwaters. Baseflow gain is also increased along the reach of the stream. A thin aquifer restricts ground-water flow and causes the source area of flow to expand near stream headwaters and also shifts the start-of-flow to the drainage basin divide. Increased aquifer anisotropy causes a lateral expansion of the source area of flow to streams. Ground-water seepage to the stream channel originates both from near- and far-recharge locations. The range in the lengths of flow paths that terminate at a point on a stream increases in the downstream direction. Consequently, the age distribution of ground water that seeps into the stream is skewed progressively older with distance downstream. Base flow is an integration of ground water with varying age and potentially different water quality, depending on the source within the drainage basin. The quantitative results presented indicate that this integration can have a wide and complex residence time range and source distribution.

  14. DISTRIBUTION AND UTILISATION OF IVERMECTIN (MECTIZAN): A CHEMOTHERAPEUTIC APPROACH TO THE CONTROL OF ONCHOCERCIASIS IN OLD OHAOZARA LGA, EBONYI STATE, EASTERN NIGERIA.

    PubMed

    Okpara, Elom Michael; Mnaemeka, Alo Moses; Iyioku, Ugah Uchenna; Udoh, Usanga Victor

    2015-12-01

    Onchocerciasis (river blindness) is a devastating, debilitating, stigmatising and incapacitating parasitic disease that is endemic in tropical and subtropical regions of the world, including Nigeria. Mass distribution of ivermectin (Mectizan) to the endemic parts of the world was initiated by the Onchocerciasis Control Programmes (OCPs). Absolute compliance with the regimen for up to 15 years has been reported to be effective in the control of the disease. The study was carried out in Ohaozara LGA, Onicha LGA and Ivo LGA. The three (3) LGAs made up the defunct Old Ohaozara LGA. A structured questionnaire was used to generate information on knowledge of onchocerciasis and on the use of ivermectin by the inhabitants of the communities of the study areas. The distribution coverage of ivermectin in the study areas from 2010 to 2014 was ascertained with drug distribution charts obtained from the Ebonyi State Health Management Board (ESHMB), Abakaliki (the point source of distribution in the state), and from the health centres in communities of old Ohaozara LGA (the service delivery points (SDPs) to inhabitants of the communities). Data were analysed using descriptive statistics. Utilization of the regimen was ascertained by determining the actual number of tablets of Mectizan that were administered to the patients at the various health centres (service delivery points, SDPs) in the communities. The percentage utilization of the regimen was determined by dividing the number of Mectizan tablets administered to the patients at the SDPs by the number of Mectizan tablets supplied from the state point source of distribution and multiplying by 100. A total of 347,299 out of 1,919,135 tablets of Mectizan supplied to the study areas from 2010 to 2014 were actually utilized, giving an overall percentage utilization of 18.10%. There was adequate supply but very poor utilization of the regimen. The poor utilization resulted from factors including the location of health centres very far from the homes of some of the rural villagers, non-yearly compliance with regimen administration, poor health sensitization and education, and lack of incentives or poor incentives for the village-based health workers (VBHWs). Intensification of efforts to cover the lapses in the utilization of the regimen is advocated for more effective control of the disease.
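
    As a quick consistency check, the reported overall utilization follows directly from the tablet counts quoted above:

        347,299 / 1,919,135 × 100 ≈ 18.10%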

  15. Mesh-free distributed point source method for modeling viscous fluid motion between disks vibrating at ultrasonic frequency.

    PubMed

    Wada, Yuji; Kundu, Tribikram; Nakamura, Kentaro

    2014-08-01

    The distributed point source method (DPSM) is extended to model wave propagation in viscous fluids. Appropriate estimation of attenuation and boundary layer formation due to fluid viscosity is necessary for the ultrasonic devices used for acoustic streaming or ultrasonic levitation. The equations for DPSM modeling in viscous fluids are derived in this paper by decomposing the linearized viscous fluid equations into two components: dilatational and rotational. By considering complex P- and S-wave numbers, the acoustic fields in viscous fluids can be calculated following calculation steps similar to those used for wave propagation modeling in solids. From the calculations reported, the precision of DPSM is found to be comparable to that of the finite element method (FEM) for a fundamental ultrasonic field problem. The particle velocity parallel to the two bounding surfaces of the viscous fluid layer between two rigid plates (one in motion and one stationary) is calculated. The finite element results agree well with the DPSM results, which were generated faster than the transient FEM results.
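
    To illustrate the notion of complex P- and S-wavenumbers in a viscous fluid, here is a short Python sketch under an assumed exp(-iωt) time convention; the water-like material constants and the specific form of the lossy wavenumbers are illustrative and are not taken from the paper.

        import numpy as np

        # Illustrative sketch (not the paper's exact formulation) of complex P- and
        # S-wavenumbers in a viscous fluid, assuming an exp(-i*omega*t) convention.
        rho, c0 = 1000.0, 1480.0        # density [kg/m^3], sound speed [m/s] (water-like)
        mu, zeta = 1.0e-3, 2.4e-3       # shear and bulk viscosity [Pa s] (nominal values)
        f = 1.0e6                       # ultrasonic frequency [Hz]
        omega = 2 * np.pi * f

        # Dilatational (compressional) wave: weakly attenuated propagating wave
        k_p = (omega / c0) / np.sqrt(1 - 1j * omega * (zeta + 4 * mu / 3) / (rho * c0**2))

        # Rotational (shear) wave: diffusive, confined to the viscous boundary layer
        k_s = np.sqrt(1j * omega * rho / mu)

        print("k_p =", k_p, "1/m")
        print("k_s =", k_s, "1/m")
        print("boundary-layer thickness ~", 1 / k_s.imag, "m")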

  16. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure

    PubMed Central

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-01

    This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant’s critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without reducing system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events. PMID:26805832
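
    As an illustration of the data-forwarding side of such a platform, the Python sketch below serializes one sampling cycle of gas readings for transmission by a sub-net gateway; the field names, the JSON format and the transport callback are assumptions, not the platform's actual protocol.

        import json, time
        from dataclasses import dataclass, asdict

        # Hypothetical record a sub-net gateway might forward to the remote server once per minute.
        @dataclass
        class GasReading:
            node_id: str          # WSN node within the sub-net
            timestamp: float      # UNIX time of the sample
            gas_ppm: float        # measured gas concentration
            battery_v: float      # node battery voltage, for maintenance planning

        def forward(readings, send):
            """Serialize one minute's readings and hand them to a transport callback
            (e.g. an HTTP POST or a TCP socket over the GPRS link)."""
            payload = json.dumps([asdict(r) for r in readings])
            send(payload)

        # Example: one sampling cycle, printed instead of transmitted
        forward([GasReading("node-07", time.time(), 12.4, 3.6)], print)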

  17. Evaluation of sediment contamination by monoaromatic hydrocarbons in the coastal lagoons of Gulf of Saros, NE Aegean Sea.

    PubMed

    Ünlü, Selma; Alpar, Bedri

    2017-05-15

    The concentrations and distribution of monoaromatic hydrocarbons (benzene, toluene, ethylbenzene and the sum of m-, p- and o-xylenes) were determined in the sediments of coastal lagoons of the Gulf of Saros, using a static headspace GC-MS. The total concentrations of BTEX compounds ranged from below the detection limit of 0.6 μg kg⁻¹ dw to 368.5 μg kg⁻¹ dw, with a mean value of 61.5 μg kg⁻¹ dw. The light aromatic fraction of m-, p-xylene was the most abundant compound (57.1% on average), followed by toluene (38.1%) > ethylbenzene (4.1%) > o-xylene (2.5%) > benzene (1.1%). The factor analysis indicated that the levels and distribution of BTEX compounds depend on the type of contaminant source (mobile/point), absorbance of compounds in the sediment, mobility of the benzene compound, and degradation processes. Point sources are mainly related to agricultural facilities and port activities, while the dispersion of compounds is related to their solubility, volatility and the effect of sea/saline waters on the lagoons. Copyright © 2017. Published by Elsevier Ltd.

  18. Energy storage requirements of dc microgrids with high penetration renewables under droop control

    DOE PAGES

    Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; ...

    2015-01-09

    Energy storage is an important design component in microgrids with high-penetration renewable sources, needed to maintain the system because of the highly variable and sometimes stochastic nature of the sources. Storage devices can be distributed close to the sources and/or at the microgrid bus. In addition, storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but the energy storage optimization becomes more difficult. Our paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, we found droop settings of the sources that minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
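
    A minimal Python sketch of voltage droop sharing on a common DC bus, assuming each source obeys V_bus = V_ref - R_droop*I and neglecting line resistances; the numbers are illustrative and not taken from the paper's microgrid design.

        import numpy as np

        V_ref = np.array([380.0, 380.0, 380.0])     # source reference voltages [V]
        R_droop = np.array([0.5, 1.0, 2.0])         # droop resistances [ohm]
        R_load = 10.0                               # equivalent resistive bus load [ohm]

        # Bus voltage from current balance: sum((V_ref - V)/R_droop) = V / R_load
        V_bus = np.sum(V_ref / R_droop) / (1.0 / R_load + np.sum(1.0 / R_droop))
        I_src = (V_ref - V_bus) / R_droop           # currents shared inversely to droop resistance

        print("bus voltage:", round(V_bus, 2), "V")
        print("source currents:", np.round(I_src, 2), "A")

    A source with a smaller droop resistance takes a larger share of the load current, which is the trade-off the paper's droop-setting optimization exploits.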

  19. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    NASA Astrophysics Data System (ADS)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy to characterize greenhouse gas distributions and sources in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in intensive campaigns for local- to regional-scale air quality studies, we built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring-down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) were installed securely onboard a 2006 Toyota Prius hybrid vehicle, with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple-species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another installed on the PFP kept track of our location, allowing us to map measured concentrations on the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering areas from Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2 and CO and destruction of ozone mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor, ...) or with larger-scale air masses advected from NE Colorado, where oil and gas drilling operations are widespread. The multiple-species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way brings insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives on how we will adapt the measurement strategy to study anthropogenic CO2 emissions in the Denver Basin.

  20. Effect of an overhead shield on gamma-ray skyshine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stedry, M.H.; Shultis, J.K.; Faw, R.E.

    1996-06-01

    A hybrid Monte Carlo and integral line-beam method is used to determine the effect of a horizontal slab shield above a gamma-ray source on the resulting skyshine doses. A simplified Monte Carlo procedure is used to determine the energy and angular distribution of photons escaping the source shield into the atmosphere. The escaping photons are then treated as a bare, point, skyshine source, and the integral line-beam method is used to estimate the skyshine dose at various distances from the source. From results for arbitrarily collimated and shielded sources, the skyshine dose is found to depend primarily on the mean-free-path thickness of the shield and only very weakly on the shield material.

  1. The calculating study of the moisture transfer influence at the temperature field in a porous wet medium with internal heat sources

    NASA Astrophysics Data System (ADS)

    Kuzevanov, V. S.; Garyaev, A. B.; Zakozhurnikova, G. S.; Zakozhurnikov, S. S.

    2017-11-01

    A porous wet medium with solid and gaseous components, with distributed or localized heat sources, was considered. The regimes of temperature change during heating at various initial material moisture contents were studied. A mathematical model was developed for the investigated wet porous multicomponent medium with internal heat sources, taking into account heat transfer by conduction with variable thermal parameters and porosity, heat transfer by radiation, chemical reactions, drying and moistening of solids, heat and mass transfer of volatile products of chemical reactions by filtration flows, and transfer of moisture. A numerical algorithm and a computer program implementing the proposed mathematical model were created, allowing study of the heating dynamics under local or distributed heat release, and in particular the impact of moisture transfer in the medium on the temperature field. Graphs of temperature change at different points of the medium were obtained for different initial moisture contents. Conclusions were drawn about the possibility of controlling the heating regimes of a solid porous body through the initial moisture distribution.

  2. Excitation of high-frequency electromagnetic waves by energetic electrons with a loss cone distribution in a field-aligned potential drop

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.; Vinas, Adolfo F.

    1994-01-01

    The electron cyclotron maser instability (CMI) driven by momentum space anisotropy (∂f/∂p⊥ > 0) has been invoked to explain many aspects, such as the modes of propagation, harmonic emissions, and the source characteristics, of the auroral kilometric radiation (AKR). Recent satellite observations of AKR sources indicate that the source regions are often embedded within the auroral acceleration region, characterized by the presence of a field-aligned potential drop. In this paper we investigate the excitation of the fundamental extraordinary mode radiation due to the accelerated electrons. The momentum space distribution of these energetic electrons is modeled by a realistic upward loss cone as modified by the presence of a parallel potential drop below the observation point. On the basis of linear growth rate calculations we present the emission characteristics, such as the frequency spectrum and the emission angular distribution, as functions of the plasma parameters. We discuss the implications of our results for the generation of the AKR from the edges of the auroral density cavities.

  3. Novel fusion for hybrid optical/microcomputed tomography imaging based on natural light surface reconstruction and iterated closest point

    NASA Astrophysics Data System (ADS)

    Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo

    2014-02-01

    In mathematics, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem. They all involve the reconstruction of the 3D location of single or multiple internal luminescent/fluorescent sources based on the 3D surface flux distribution. To achieve this, accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an octree structure, an exact visual hull from marching cubes, and ICP. Different from conventional limited-projection methods, it is a 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels, respectively, with the new method. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
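
    For readers unfamiliar with ICP, the Python sketch below shows the generic iterated-closest-point loop (nearest-neighbour correspondences followed by an SVD-based rigid alignment) on synthetic 3D point sets; it illustrates only the registration step, not the paper's NLSR/ICP pipeline.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
            cs, cd = src.mean(0), dst.mean(0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(src, dst, iters=50, tol=1e-8):
            tree = cKDTree(dst)
            prev = np.inf
            for _ in range(iters):
                dist, idx = tree.query(src)          # closest-point correspondences
                R, t = best_rigid_transform(src, dst[idx])
                src = src @ R.T + t                  # apply the incremental transform
                if abs(prev - dist.mean()) < tol:
                    break
                prev = dist.mean()
            return src, dist.mean()

        # Example: align a rotated and shifted copy of a random surface patch
        rng = np.random.default_rng(0)
        dst = rng.normal(size=(500, 3))
        theta = 0.2
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta),  np.cos(theta), 0],
                           [0, 0, 1]])
        src = (dst - 0.1) @ R_true.T
        aligned, err = icp(src, dst)
        print("mean closest-point distance after ICP:", err)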

  4. Do gamma-ray burst sources repeat?

    NASA Technical Reports Server (NTRS)

    Meegan, Charles A.; Hartmann, Dieter H.; Brainerd, J. J.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey; Kouveliotou, Chryssa; Fishman, Gerald; Blumenthal, George; Brock, Martin

    1995-01-01

    The demonstration of repeated gamma-ray bursts from an individual source would severely constrain burst source models. Recent reports (Quashnock and Lamb, 1993; Wang and Lingenfelter, 1993) of evidence for repetition in the first BATSE burst catalog have generated renewed interest in this issue. Here, we analyze the angular distribution of 585 bursts of the second BATSE catalog (Meegan et al., 1994). We search for evidence of burst recurrence using the nearest and farthest neighbor statistic and the two-point angular correlation function. We find the data to be consistent with the hypothesis that burst sources do not repeat; however, a repeater fraction of up to about 20% of the observed bursts cannot be excluded.

  5. A quantitative three-dimensional dose attenuation analysis around Fletcher-Suit-Delclos due to stainless steel tube for high-dose-rate brachytherapy by Monte Carlo calculations.

    PubMed

    Parsai, E Ishmael; Zhang, Zhengdong; Feldmeier, John J

    2009-01-01

    Commercially available brachytherapy treatment-planning systems today usually neglect the attenuation effect of the stainless steel (SS) tube when a Fletcher-Suit-Delclos (FSD) applicator is used in the treatment of cervical and endometrial cancers. This could lead to potential inaccuracies in computing dwell times and dose distributions. A more accurate analysis quantifying the level of attenuation for the high-dose-rate (HDR) iridium-192 radionuclide (192Ir) source is presented through Monte Carlo simulation verified by measurement. In this investigation, the general Monte Carlo N-Particle (MCNP) transport code was used to construct a typical FSD geometry through simulation and compare the doses delivered to point A in the Manchester system with and without the SS tubing. A quantitative assessment of the inaccuracies in delivered dose vs. computed dose is presented. In addition, this investigation was expanded to examine the attenuation-corrected radial and anisotropy dose functions in a form parallel to the updated AAPM Task Group No. 43 Report (AAPM TG-43) formalism. This delineates quantitatively the inaccuracies in dose distributions in three-dimensional space. The changes in dose deposition and distribution caused by the increased attenuation resulting from the presence of SS are quantified using MCNP Monte Carlo simulations with coupled photon/electron transport. The source geometry was that of the VariSource wire model VS2000. The FSD was that of the Varian medical system. In this model, the bending angles of the tandem and colpostats are 15 degrees and 120 degrees, respectively. We assigned 10 dwell positions to the tandem and 4 dwell positions to the right and left colpostats (ovoids) to represent a typical treatment case. The typical dose delivered to point A was determined according to the Manchester dosimetry system. Based on our computations, the reduction of dose to point A was shown to be at least 3%, so the effect of SS FSD systems on patient dose is of concern.

  6. Magneto-optical visualization of three spatial components of inhomogeneous stray fields

    NASA Astrophysics Data System (ADS)

    Ivanov, V. E.

    2012-08-01

    The article deals with the physical principles of magneto-optical (MO) visualization of three spatial components of inhomogeneous stray fields with the help of FeCo metal indicator films in the longitudinal Kerr effect geometry. The inhomogeneous field is created by permanent magnets. Both p- and s-polarized light are used for obtaining MO images, with their subsequent summing, subtracting and digitizing. As a result, MO images and corresponding intensity coordinate dependences reflecting the distributions of the horizontal and vertical magnetization components in pure form have been obtained. Modeling of both the magnetization distribution in the indicator film and the corresponding MO images shows that, for polar sensitivity, the intensity is proportional to the normal field component, which permits normal field component mapping. For longitudinal sensitivity, the intensity of the MO images reflects the angular distribution of the planar field component. MO images have singular points at which the planar component is zero, and their movement under an external homogeneous planar field permits obtaining additional information on the two planar components of the field under study. The character of the intensity distribution in the vicinity of sources and sinks (singular points) remains the same under different orientations of the light incidence plane. Changing the incidence plane orientation by π/2 alters the distribution pattern in the vicinity of the saddle points.

  7. Transient Point Infiltration In The Unsaturated Zone

    NASA Astrophysics Data System (ADS)

    Buecker-Gittel, M.; Mohrlok, U.

    The risk assessment of leaking sewer pipes is becoming more and more important for urban groundwater management and for environmental as well as health safety. This requires the quantification and balancing of transport and transformation processes based on the water flow in the unsaturated zone. The water flow from a single sewer leakage can be described as a point infiltration with time-varying hydraulic conditions, both external and internal. External variations are caused by the discharge in the sewer pipe as well as the state of the leakage itself. Internal variations are the result of microbiological clogging effects associated with the transformation processes. Technical-scale as well as small-scale laboratory experiments were conducted in order to investigate the water transport from a transient point infiltration. The technical-scale experiment gave evidence that the water flow takes place under transient conditions when sewage infiltrates into an unsaturated soil, whereas the small-scale experiments investigated in detail the hydraulics of the water transport and the associated solute and particle transport in unsaturated soils. The small-scale experiment was a two-dimensional representation of such a point infiltration source, in which the distributed water transport could be measured by several tensiometers in the soil as well as by a selective measurement of the discharge at the bottom of the experimental setup. Several series of experiments were conducted, varying the boundary and initial conditions, in order to derive the important parameters controlling the infiltration of pure water from the point source. The results showed that there is a significant difference between the infiltration rate at the point source and the discharge rate at the bottom, which could be explained by storage processes due to an outflow resistance at the bottom. This effect is overlain by a water content that decreases over time, correlated with a decreasing infiltration rate. As expected, the initial conditions mainly affect the time scale of the water transport. Additionally, the influence of preferential flow paths on the discharge distribution could be observed, owing to the heterogeneities caused by the filling and compaction process of the sandy soil.

  8. Objective estimates of mantle 3He in the ocean and implications for constraining the deep ocean circulation

    NASA Astrophysics Data System (ADS)

    Holzer, Mark; DeVries, Timothy; Bianchi, Daniele; Newton, Robert; Schlosser, Peter; Winckler, Gisela

    2017-01-01

    Hydrothermal vents along the ocean's tectonic ridge systems inject superheated water and large amounts of dissolved metals that impact the deep ocean circulation and the oceanic cycling of trace metals. The hydrothermal fluid contains dissolved mantle helium that is enriched in 3He relative to the atmosphere, providing an isotopic tracer of the ocean's deep circulation and a marker of hydrothermal sources. This work investigates the potential for the 3He/4He isotope ratio to constrain the ocean's mantle 3He source and to provide constraints on the ocean's deep circulation. We use an ensemble of 11 data-assimilated steady-state ocean circulation models and a mantle helium source based on geographically varying sea-floor spreading rates. The global source distribution is partitioned into 6 regions, and the vertical profile and source amplitude of each region are varied independently to determine the optimal 3He source distribution that minimizes the mismatch between modeled and observed δ3He. In this way, we are able to fit the observed δ3He distribution to within a relative error of ∼15%, with a global 3He source that ranges from 640 to 850 mol yr-1, depending on circulation. The fit captures the vertical and interbasin gradients of the δ3He distribution very well and reproduces its jet-sheared saddle point in the deep equatorial Pacific. This demonstrates that the data-assimilated models have much greater fidelity to the deep ocean circulation than other coarse-resolution ocean models. Nonetheless, the modelled δ3He distributions still display some systematic biases, especially in the deep North Pacific where δ3He is overpredicted by our models, and in the southeastern tropical Pacific, where observed westward-spreading δ3He plumes are not well captured. Sources inferred by the data-assimilated transport with and without isopycnally aligned eddy diffusivity differ widely in the Southern Ocean, in spite of the ability to match the observed distributions of CFCs and radiocarbon for either eddy parameterization.
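
    A toy Python sketch of the inverse step described above: with a linear transport model, the modeled δ3He at the observation points is A·x, where column j of A is the response to a unit source in region j, and the six non-negative regional amplitudes are fitted to the observations. The matrix and the data here are synthetic; the actual study uses data-assimilated circulation models.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n_obs, n_regions = 200, 6
        A = rng.random((n_obs, n_regions))                      # unit-source responses (synthetic)
        x_true = np.array([120., 80., 200., 150., 60., 40.])    # "true" regional sources [mol/yr], made up
        obs = A @ x_true + rng.normal(0, 0.5, n_obs)            # synthetic observations with noise

        x_fit, resid = nnls(A, obs)                             # non-negative least squares
        print("fitted regional sources:", np.round(x_fit, 1))
        print("global source:", round(x_fit.sum(), 1), "mol/yr")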

  9. Electrical distribution studies for the 200 Area tank farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisler, J.B.

    1994-08-26

    This is an engineering study providing reliability numbers for various design configurations as well as computer analyses (Captor/Dapper) of the existing distribution system to the 480V side of the unit substations. The objective of the study was to assure the adequacy of the existing electrical system components from the connection at the high voltage supply point through the transformation and distribution equipment to the point where it is reduced to its useful voltage level. It also was to evaluate the reasonableness of proposed solutions of identified deficiencies and recommendations of possible alternate solutions. The electrical utilities are normally considered the most vital of the utility systems on a site because all other utility systems depend on electrical power. The system accepts electric power from the external sources, reduces it to a lower voltage, and distributes it to end-use points throughout the site. By classic definition, all utility systems extend to a point 5 feet from the facility perimeter. An exception is made to this definition for the electric utilities at this site. The electrical utility system ends at the low voltage section of the unit substation, which reduces the voltage from 13.8 kV to 2,400, 480, 277/480 or 120/208 volts. These transformers are located at various distances from existing facilities. The adequacy of the distribution system which transports the power from the main substation to the individual area substations and other load centers is evaluated, factoring in the impact of the future load forecast.

  10. Simulation and source identification of X-ray contrast media in the water cycle of Berlin.

    PubMed

    Knodel, J; Geissen, S-U; Broll, J; Dünnbier, U

    2011-11-01

    This article describes the development of a model to simulate the fate of iodinated X-ray contrast media (XRC) in the water cycle of the German capital, Berlin. It also handles data uncertainties concerning the different amounts and sources of XRC input, via source densities in individual districts for XRC usage by inhabitants, hospitals, and radiologists. In addition, different degradation rates for the behavior of adsorbable organic iodine (AOI) were investigated in the individual water compartments. The introduced model consists of mass balances and includes, in addition to naturally branched bodies of water, the water distribution network between waterways and wastewater treatment plants, which are coupled to natural surface waters at numerous points. Scenarios were calculated according to the data uncertainties and were statistically evaluated to identify the scenario with the highest agreement with the available measurement data. The simulation of X-ray contrast media in the water cycle of Berlin showed that medical institutions have to be considered as point sources for congested urban areas due to their high levels of X-ray contrast media emission. The calculations identified hospitals, represented by their capacity (number of hospital beds), as the most relevant point sources, while the inhabitants served as important diffuse sources. Applied to almost inert substances like contrast media, the model can be used for qualitative statements and, therefore, as a decision-support tool. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Chandra Detection of Intracluster X-Ray sources in Virgo

    NASA Astrophysics Data System (ADS)

    Hou, Meicun; Li, Zhiyuan; Peng, Eric W.; Liu, Chengze

    2017-09-01

    We present a survey of X-ray point sources in the nearest and dynamically young galaxy cluster, Virgo, using archival Chandra observations that sample the vicinity of 80 early-type member galaxies. The X-ray source populations at the outskirts of these galaxies are of particular interest. We detect a total of 1046 point sources (excluding galactic nuclei) out to a projected galactocentric radius of ~40 kpc and down to a limiting 0.5-8 keV luminosity of ~2 × 10^38 erg s^-1. Based on the cumulative spatial and flux distributions of these sources, we statistically identify ~120 excess sources that are not associated with the main stellar content of the individual galaxies, nor with the cosmic X-ray background. This excess is significant at a 3.5σ level, when Poisson error and cosmic variance are taken into account. On the other hand, no significant excess sources are found at the outskirts of a control sample of field galaxies, suggesting that at least some fraction of the excess sources around the Virgo galaxies are truly intracluster X-ray sources. Assisted with ground-based and HST optical imaging of Virgo, we discuss the origins of these intracluster X-ray sources, in terms of supernova-kicked low-mass X-ray binaries (LMXBs), globular clusters, LMXBs associated with the diffuse intracluster light, stripped nucleated dwarf galaxies and free-floating massive black holes.

  12. "Stereo Compton cameras" for the 3-D localization of radioisotopes

    NASA Astrophysics Data System (ADS)

    Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.

    2014-11-01

    The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there is a particularly urgent need to develop "gamma cameras" that can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of the two MPPC arrays located at the two ends of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that the detection efficiency reaches up to 0.54%, more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays using two Compton cameras, enabling the 3-D positional measurement of radioactive isotopes for the first time. Using simulation data for a single point source, we verified that the source position and distance can typically be determined to within 2 meters' accuracy, and we also confirmed that two or more sources are clearly separated by event selection applied to simulation data for two point sources.
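
    The triangulation idea behind the stereo measurement can be sketched in Python as finding the closest approach of the two source directions estimated by the two cameras; the camera positions, directions and perturbations below are made-up numbers, not the simulated geometry of the paper.

        import numpy as np

        def triangulate(p1, d1, p2, d2):
            """Midpoint of the shortest segment between rays p1 + t1*d1 and p2 + t2*d2."""
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            # Solve [d1  -d2] [t1, t2]^T ~= p2 - p1 in the least-squares sense
            t1, t2 = np.linalg.lstsq(np.column_stack([d1, -d2]), p2 - p1, rcond=None)[0]
            return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

        # Example: two cameras 4 m apart viewing a source at (1, 2, 0.5); the directions
        # are perturbed slightly to mimic a finite angular resolution
        source = np.array([1.0, 2.0, 0.5])
        p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])
        d1 = source - p1 + np.array([0.05, -0.03, 0.02])
        d2 = source - p2 + np.array([-0.02, 0.04, -0.01])
        print("estimated source position:", np.round(triangulate(p1, d1, p2, d2), 2))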

  13. Scattering and the Point Spread Function of the New Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Schreur, Julian J.

    1996-01-01

    Preliminary design work on the New Generation Space Telescope (NGST) is currently under way. This telescope is envisioned as a lightweight, deployable Cassegrain reflector with an aperture of 8 meters and an effective focal length of 80 meters. It is to be folded into a small-diameter package for launch by an Atlas booster, and unfolded in orbit. The primary is to consist of an octagon with a hole at the center, and with eight segments arranged in a flower-petal configuration about the octagon. The corners of the petal-shaped segments are to be trimmed so that the package will fit atop the Atlas booster. This mirror, along with its secondary, will focus the light from a point source into an image which is spread from a point by diffraction effects, figure errors, and scattering of light from the surface. The distribution of light in the image of a point source is called a point spread function (PSF). The obstruction of the incident light by the secondary mirror and its support structure, the trimmed corners of the petals, and the grooves between the segments all cause the diffraction pattern characterizing an ideal point spread function to be changed, with the trimmed corners causing the rings of the Airy pattern to become broken up, and the linear grooves causing diffraction spikes running radially away from the central spot, or Airy disk. Any figure errors the mirror segments may have, or any errors in aligning the petals with the central octagon, will also spread the light out from the ideal point spread function. A point spread function for a mirror the size of the NGST and an incident wavelength of 900 nm is considered. Most of the light is confined in a circle with a diameter of 0.05 arc seconds. The ring pattern ranges in intensity from 10^-2 near the center to 10^-6 near the edge of the plotted field, and can be clearly discerned in a log plot of the intensity. The total fraction of the light scattered from this point spread function is called the total integrated scattering (TIS), and the fraction remaining is called the Strehl ratio. The angular distribution of the scattered light is called the angle-resolved scattering (ARS), and it shows a strong spike centered on a scattering angle of zero and a broad, less intense distribution at larger angles. It is this scattered light, and its effect on the point spread function, which is the focus of this study.

  14. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing a multivariate Gaussian distribution for the input quantities. This makes it possible to take into account the correlations among resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented for the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
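
    A minimal Python sketch of the propagation-of-distributions idea with correlated inputs: correlated resistance readings are drawn from a multivariate Gaussian and pushed through a measurement function, here simply the resistance ratio W = R_t/R_tpw rather than the full ITS-90 deviation functions. All numbers are illustrative, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000                                # number of Monte Carlo trials

        mean = np.array([33.2500, 25.5000])        # R_t and R_tpw [ohm], illustrative values
        u = np.array([0.5e-3, 0.3e-3])             # standard uncertainties [ohm], illustrative
        r = 0.4                                    # assumed correlation between the two readings
        cov = np.array([[u[0]**2, r * u[0] * u[1]],
                        [r * u[0] * u[1], u[1]**2]])

        R_t, R_tpw = rng.multivariate_normal(mean, cov, size=N).T
        W = R_t / R_tpw                            # quantity propagated through the model

        print("W    =", W.mean())
        print("u(W) =", W.std(ddof=1))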

  15. A simplified approach to analyze the effectiveness of NO2 and SO2 emission reduction of coal-fired power plant from OMI retrievals

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Wu, Lixin; Zhou, Yuan; Li, Ding

    2017-04-01

    Nitrogen oxide (NOx) and sulfur dioxide (SO2) emissions from coal combustion, which are oxidized quickly in the atmosphere, resulting in secondary aerosol formation and acid deposition, are main contributors to China's regional fog-haze pollution. Extensive literature has estimated quantitatively the lifetimes and emissions of NO2 and SO2 for large point sources such as coal-fired power plants and cities using satellite measurements. However, few of these methods are suitable for sources located in a heterogeneously polluted background. In this work, we present a simplified emission effective radius extraction model for point sources to study the NO2 and SO2 reduction trend in China with complex polluted sources. First, to find out the time range during which actual emissions can be derived from satellite observations, the spatial distribution characteristics of mean daily, monthly, seasonal and annual concentrations of OMI NO2 and SO2 around a single power plant were analyzed and compared. Then, a 100 km × 100 km geographical grid with a 1 km step was established around the source, and the mean concentration of all satellite pixels covering each grid point was calculated by an area-weighted pixel-averaging approach. The emission effective radius is defined by the concentration gradient values near the power plant. Finally, the developed model is employed to investigate the characteristics and evolution of NO2 and SO2 emissions and to verify the effectiveness of the flue gas desulfurization (FGD) and selective catalytic reduction (SCR) devices applied in coal-fired power plants during the 10-year period from 2006 to 2015. It can be observed that the spatial distribution pattern of NO2 and SO2 concentrations in the vicinity of a large coal-burning source was affected not only by the coal-burning emissions themselves, but was also closely related to the processes of pollutant transport and diffusion caused by meteorological factors in different seasons. Our proposed model can be used to identify the effective operation time of the FGD and SCR equipment in coal-fired power plants.

  16. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGES

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
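
    For reference, the simple point-source estimate that the study compares against scales as the inverse square of distance, D = Γ·A/d², with no tissue attenuation or biokinetics; the dose-rate constant and administered activity in the Python sketch below are illustrative assumptions, not the study's values.

        # Inverse-square point-source estimate (no attenuation, no biokinetics).
        GAMMA_I131 = 5.2e-5   # assumed dose-rate constant [mSv m^2 / (MBq h)], illustrative only
        A = 5550.0            # administered activity [MBq] (~150 mCi), illustrative only

        for d_cm in (30, 100, 300):
            d = d_cm / 100.0                          # separation distance in metres
            dose_rate = GAMMA_I131 * A / d**2         # D = Gamma * A / d^2
            print(f"{d_cm:>4} cm: {dose_rate:.3f} mSv/h")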

  17. Geometric Characterization of Multi-Axis Multi-Pinhole SPECT

    PubMed Central

    DiFilippo, Frank P.

    2008-01-01

    A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes that uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources, without the need for a precision point-source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574

  18. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    NASA Technical Reports Server (NTRS)

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

    The Ozone Monitoring Instrument (OMI) on NASA's Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see the source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions or "fingerprints" of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging of several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area - within a 20-30 km radius. Therefore, higher spatial resolution than OMI's is needed to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel-5 Precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulate GEOCAPE measurement requirements.

  19. Mathematical Fluid Dynamics of Plasma Flow Control Over High Speed Wings

    DTIC Science & Technology

    2009-02-01

    decreased voltage; e = 8, d = 1 mm. Fig. 23: Schematics of momentum and heat source distributions for... For α > 25°, the influence of DBD on the vortex breakdown is not so clear, because the breakdown point is very close to the wing apex in all three

  20. High energy photon and particle luminosity from active nuclei

    NASA Technical Reports Server (NTRS)

    Eilek, J. A.; Caroff, L. J.; Noerdlinger, P. D.; Dove, M. E.

    1986-01-01

    This paper describes a numerical calculation which follows the evolution of an initial photon and particle spectrum in an expanding, relativistic wind or jet, describes in particular the quasi-equilibrium distribution found for initial optical depths above 100 or so, and points out that this calculation may be relevant for the situation in luminous, compact nuclear sources.

  1. Physics of vascular brachytherapy.

    PubMed

    Jani, S K

    1999-08-01

    Basic physics plays an important role in understanding the clinical utility of radioisotopes in brachytherapy. Vascular brachytherapy is a unique application of localized radiation in that dose levels very close to the source are employed to treat tissues within the arterial wall. This article covers the basic physics of radioactivity and differentiates between beta and gamma radiations. Physical parameters such as activity, half-life, exposure and absorbed dose are explained. Finally, the dose distribution around a point source and a linear source is described. The principles of basic physics are likely to play an important role in shaping the emerging technology and its application in vascular brachytherapy.
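
    A minimal sketch contrasting the two dose distributions mentioned above, assuming an isotropic point source (inverse-square falloff) and the standard unfiltered line-source geometry factor G(r, theta) = beta / (L * r * sin(theta)); the active length and distances are illustrative, and both rates are relative (unnormalized).

        import numpy as np

        def point_source_rel_dose(r):
            """Relative dose rate around an isotropic point source: inverse-square falloff."""
            return 1.0 / r**2

        def line_source_rel_dose(r, theta, L):
            """Relative dose rate around an unfiltered line source of active length L.

            Uses the line-source geometry factor G = beta / (L * r * sin(theta)), where
            beta is the angle the active length subtends at the calculation point.
            """
            y = r * np.sin(theta)          # perpendicular distance to the source axis
            z = r * np.cos(theta)          # position along the source axis
            beta = np.arctan((L / 2.0 - z) / y) + np.arctan((L / 2.0 + z) / y)
            return beta / (L * r * np.sin(theta))

        # Along the transverse axis (theta = 90 deg) the two converge once r >> L.
        L = 0.5  # cm, illustrative active length
        for r in (0.25, 0.5, 1.0, 2.0, 5.0):
            print(r, "cm:", point_source_rel_dose(r), line_source_rel_dose(r, np.pi / 2.0, L))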

  2. The effects of correlated noise in phased-array observations of radio sources

    NASA Technical Reports Server (NTRS)

    Dewey, Rachel J.

    1994-01-01

    Arrays of radio telescopes are now routinely used to provide increased signal-to-noise when observing faint point sources. However, calculation of the achievable sensitivity is complicated if there are sources in the field of view other than the target source. These additional sources not only increase the system temperatures of the individual antennas, but may also contribute significant 'correlated noise' to the effective system temperature of the array. This problem has been of particular interest in the context of tracking spacecraft in the vicinity of radio-bright planets (e.g., Galileo at Jupiter), but it has broader astronomical relevance as well. This paper presents a general formulation of the problem, for the case of a point-like target source in the presence of an additional radio source of arbitrary brightness distribution. We re-derive the well known result that, in the absence of any background sources, a phased array of N identical antennas is a factor of N more sensitive than a single antenna. We also show that an unphased array of N identical antennas is, on average, no more sensitive than a single antenna if the signals from the individual antennas are combined prior to detection. In the case where a background source is present we show that the effects of correlated noise are highly geometry dependent, and for some astronomical observations may cause significant fluctuations in the array's effective system temperature.
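
    A toy Monte Carlo illustrating the two re-derived results, assuming independent receiver noise and no background source: coherent (phased) summation gains a factor of N in sensitivity, while summing unphased voltages prior to detection does not. All numbers and names are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        N, trials = 16, 200000
        A, sigma = 0.3, 1.0                       # weak source amplitude and per-antenna noise rms

        def mean_detected_power(phase_up):
            phi = rng.uniform(0.0, 2.0 * np.pi, (trials, N))     # geometric phases toward the source
            signal = A * np.exp(1j * phi)
            noise = (rng.normal(0.0, sigma, (trials, N)) +
                     1j * rng.normal(0.0, sigma, (trials, N))) / np.sqrt(2.0)
            v = signal + noise
            if phase_up:
                v = v * np.exp(-1j * phi)                        # phased array: align on the source
            return np.mean(np.abs(v.sum(axis=1)) ** 2)

        noise_floor = N * sigma**2                               # expected power with no source
        for phase_up in (True, False):
            excess = mean_detected_power(phase_up) - noise_floor
            print("phased" if phase_up else "unphased",
                  "source power gain over one antenna ~", excess / A**2)
        # Expect ~N**2 for the phased sum and ~N for the unphased sum, while the noise power
        # is ~N*sigma**2 in both cases: only phasing yields the factor-of-N sensitivity gain.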

  3. A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Takeuchi, Tsutomu T.; Ishii, Takako T.

    2004-03-01

    Source confusion has been a long-standing problem in astronomical history. In previous formulations of the confusion problem, sources are assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of point field theory, we derive general analytic formulae for the confusion problem with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of the differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. Then we estimate the PDF and confusion limits based on a realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects result in an increase of the limiting flux density by ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number count or fluctuation statistics. We also comment on future submillimeter observations.
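
    For reference, the classical homogeneous (Poisson) confusion estimate that this work generalizes can be written down for power-law counts dN/dS = k * S^(-gamma), with the confusion limit set at q times the fluctuation rms. A minimal sketch with purely illustrative numbers; the clustering corrections derived in the paper are not included.

        import numpy as np

        def confusion_sigma(k, gamma, omega_e, q=5.0, tol=1e-10):
            """Classical confusion rms for power-law counts dN/dS = k * S**(-gamma).

            omega_e : effective beam solid angle (sr); q : cutoff S_lim = q * sigma.
            Valid for 1 < gamma < 3; sigma inherits the flux units used for S.
            """
            sigma = 1.0
            for _ in range(200):
                new = np.sqrt(omega_e * k * (q * sigma) ** (3.0 - gamma) / (3.0 - gamma))
                if abs(new - sigma) < tol * sigma:
                    break
                sigma = new
            return sigma

        # illustrative numbers only: steeper counts (larger gamma) give a brighter confusion limit
        for gamma in (1.8, 2.2, 2.5):
            s = confusion_sigma(k=1.0e3, gamma=gamma, omega_e=1.0e-7)
            print(gamma, "sigma_c =", s, " confusion limit ~", 5 * s)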

  4. Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2011-12-01

    Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.

  5. Relating Land Use and Human Intra-City Mobility

    PubMed Central

    Lee, Minjin; Holme, Petter

    2015-01-01

    Understanding human mobility patterns—how people move in their everyday lives—is an interdisciplinary research field. It is a question with roots back to the 19th century that has been dramatically revitalized with the recent increase in data availability. Models of human mobility often take the population distribution as a starting point. Another, sometimes more accurate, data source is land-use maps. In this paper, we discuss how the intra-city movement patterns, and consequently population distribution, can be predicted from such data sources. As a link between land use and mobility, we show that the purposes of people’s trips are strongly correlated with the land use of the trip’s origin and destination. We calibrate, validate and discuss our model using survey data. PMID:26445147

  6. NPTFit: A Code Package for Non-Poissonian Template Fitting

    NASA Astrophysics Data System (ADS)

    Mishra-Sharma, Siddharth; Rodd, Nicholas L.; Safdi, Benjamin R.

    2017-06-01

    We present NPTFit, an open-source code package, written in Python and Cython, for performing non-Poissonian template fits (NPTFs). The NPTF is a recently developed statistical procedure for characterizing the contribution of unresolved point sources (PSs) to astrophysical data sets. The NPTF was first applied to Fermi gamma-ray data to provide evidence that the excess of ˜GeV gamma-rays observed in the inner regions of the Milky Way likely arises from a population of sub-threshold point sources, and the NPTF has since found additional applications studying sub-threshold extragalactic sources at high Galactic latitudes. The NPTF generalizes traditional astrophysical template fits to allow for the ability to search for populations of unresolved PSs that may follow a given spatial distribution. NPTFit builds upon the framework of the fluctuation analyses developed in X-ray astronomy, thus it likely has applications beyond those demonstrated with gamma-ray data. The NPTFit package utilizes novel computational methods to perform the NPTF efficiently. The code is available at http://github.com/bsafdi/NPTFit and up-to-date and extensive documentation may be found at http://nptfit.readthedocs.io.

  7. The Raptor Real-Time Processing Architecture

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  8. Raptor -- Mining the Sky in Real Time

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Borozdin, K.; Casperson, D.; McGowan, K.; Starr, D.; White, R.; Wozniak, P.; Wren, J.

    2004-06-01

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback...) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as "open source" software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  9. A Smart Power Electronic Multiconverter for the Residential Sector.

    PubMed

    Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva

    2017-05-26

    The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To do that, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) is necessary. The trend is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for the residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of a supercapacitor and a battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to a cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it.
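
    A minimal sketch of one common MPPT strategy (perturb-and-observe); the abstract only states that an MPPT algorithm is used, so the specific algorithm, step size and variable names here are assumptions for illustration. Each control cycle, the PV voltage reference is nudged in whichever direction last increased the measured PV power.

        def perturb_and_observe(v_ref, v_meas, i_meas, state, step=0.5):
            """One iteration of a perturb-and-observe MPPT loop.

            v_ref          : current PV voltage reference handed to the converter (V)
            v_meas, i_meas : latest PV voltage (V) and current (A) measurements
            state          : dict holding the previous power and voltage between calls
            Returns the updated voltage reference.
            """
            p = v_meas * i_meas
            dp = p - state.get("p", 0.0)
            dv = v_meas - state.get("v", v_meas)
            # move in the direction that increased power, reverse otherwise
            if dv != 0.0:
                v_ref += step if (dp / dv) > 0.0 else -step
            state["p"], state["v"] = p, v_meas
            return v_ref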

  10. A Smart Power Electronic Multiconverter for the Residential Sector

    PubMed Central

    Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva

    2017-01-01

    The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To do that, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) is necessary. The trend is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for the residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of a supercapacitor and a battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to a cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it. PMID:28587131

  11. Understanding the Star Formation Process in the Filamentary Dark Cloud GF 9: Near-Infrared Observations

    NASA Technical Reports Server (NTRS)

    Ciardi, David R.; Woodward, Charles E.; Clemens, Dan P.; Harker, David E.; Rudy, Richard J.

    1998-01-01

    We have performed a near-infrared JHK survey of a dense core and a diffuse filament region within the filamentary dark cloud GF 9 (LDN 1082). The core region is associated with the IRAS point source PSC 20503+6006 and is suspected of being a site of star formation. The diffuse filament region has no associated IRAS point sources and is likely quiescent. We find that neither the core nor the filament region appears to contain a Class I or Class II young stellar object. As traced by the dust extinction, the core and filament regions contain 26 and 22 solar masses, respectively, with an average H2 volume density for both regions of approximately 2500/cu cm. The core region contains a centrally condensed extinction maximum with a peak extinction of A(sub v) greater than or approximately equal to 10 mag that appears to be associated with the IRAS point source. The average H2 volume density of the extinction core is approximately 8000/cu cm. The dust within the filament, however, shows no sign of a central condensation and is consistent with a uniform-density cylindrical distribution.

  12. Single Crystal Diamond Needle as Point Electron Source.

    PubMed

    Kleshch, Victor I; Purcell, Stephen T; Obraztsov, Alexander N

    2016-10-12

    Diamond has been considered to be one of the most attractive materials for cold-cathode applications during the past two decades. However, its real application is hampered by the necessity to provide an appropriate amount and transport of electrons to the emitter surface, which is usually achieved by using nanometer-size or highly defective crystallites with much poorer physical characteristics than ideal diamond. Here, the use of a high-aspect-ratio single crystal diamond emitter as a point electron source is reported for the first time. Single crystal diamond needles were obtained by selective oxidation of polycrystalline diamond films produced by plasma enhanced chemical vapor deposition. Field emission currents and total electron energy distributions were measured for individual diamond needles as functions of extraction voltage and temperature. The needles demonstrate a current saturation phenomenon and emission that is sensitive to temperature. The analysis of the voltage drops measured via an electron energy analyzer shows that the conduction is provided by the surface of the diamond needles and is governed by the Poole-Frenkel transport mechanism with a characteristic trap energy of 0.2-0.3 eV. The temperature-sensitive FE characteristics of the diamond needles are of great interest for the production of point electron beam sources and sensors for vacuum electronics.
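
    The Poole-Frenkel conduction mentioned above has a standard field- and temperature-activated form; a minimal sketch evaluating it for a trap depth in the quoted 0.2-0.3 eV range (the prefactor, relative permittivity and field value are illustrative assumptions, not measurements from the paper).

        import numpy as np

        q = 1.602176634e-19        # C
        k_B = 1.380649e-23         # J/K
        eps0 = 8.8541878128e-12    # F/m

        def poole_frenkel_current(E, T, phi_t_eV=0.25, eps_r=5.7, j0=1.0):
            """Relative Poole-Frenkel current density vs field E (V/m) and temperature T (K).

            phi_t_eV : trap depth (0.2-0.3 eV in the abstract); eps_r : relative permittivity
            (a value typical of diamond is assumed); j0 : arbitrary prefactor (units absorbed).
            """
            delta_phi = np.sqrt(q * E / (np.pi * eps0 * eps_r))   # field-induced barrier lowering (V)
            return j0 * E * np.exp(-(q * phi_t_eV - q * delta_phi) / (k_B * T))

        for T in (300.0, 400.0, 500.0):
            print(T, "K :", poole_frenkel_current(1e7, T) / poole_frenkel_current(1e7, 300.0))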

  13. Single Crystal Diamond Needle as Point Electron Source

    NASA Astrophysics Data System (ADS)

    Kleshch, Victor I.; Purcell, Stephen T.; Obraztsov, Alexander N.

    2016-10-01

    Diamond has been considered to be one of the most attractive materials for cold-cathode applications during the past two decades. However, its real application is hampered by the necessity to provide an appropriate amount and transport of electrons to the emitter surface, which is usually achieved by using nanometer-size or highly defective crystallites with much poorer physical characteristics than ideal diamond. Here, the use of a high-aspect-ratio single crystal diamond emitter as a point electron source is reported for the first time. Single crystal diamond needles were obtained by selective oxidation of polycrystalline diamond films produced by plasma enhanced chemical vapor deposition. Field emission currents and total electron energy distributions were measured for individual diamond needles as functions of extraction voltage and temperature. The needles demonstrate a current saturation phenomenon and emission that is sensitive to temperature. The analysis of the voltage drops measured via an electron energy analyzer shows that the conduction is provided by the surface of the diamond needles and is governed by the Poole-Frenkel transport mechanism with a characteristic trap energy of 0.2-0.3 eV. The temperature-sensitive FE characteristics of the diamond needles are of great interest for the production of point electron beam sources and sensors for vacuum electronics.

  14. Numerical convergence and validation of the DIMP inverse particle transport model

    DOE PAGES

    Nelson, Noel; Azmy, Yousry

    2017-09-01

    The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (using the adjoint transport solution) with measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor, and sufficiently reducing the source search volume's extent to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.

  15. Conducting Privacy-Preserving Multivariable Propensity Score Analysis When Patient Covariate Information Is Stored in Separate Locations.

    PubMed

    Bohn, Justin; Eddings, Wesley; Schneeweiss, Sebastian

    2017-03-15

    Distributed networks of health-care data sources are increasingly being utilized to conduct pharmacoepidemiologic database studies. Such networks may contain data that are not physically pooled but instead are distributed horizontally (separate patients within each data source) or vertically (separate measures within each data source) in order to preserve patient privacy. While multivariable methods for the analysis of horizontally distributed data are frequently employed, few practical approaches have been put forth to deal with vertically distributed health-care databases. In this paper, we propose 2 propensity score-based approaches to vertically distributed data analysis and test their performance using 5 example studies. We found that these approaches produced point estimates close to what could be achieved without partitioning. We further found a performance benefit (i.e., lower mean squared error) for sequentially passing a propensity score through each data domain (called the "sequential approach") as compared with fitting separate domain-specific propensity scores (called the "parallel approach"). These results were validated in a small simulation study. This proof-of-concept study suggests a new multivariable analysis approach to vertically distributed health-care databases that is practical, preserves patient privacy, and warrants further investigation for use in clinical research applications that rely on health-care databases.

  16. The Distribution of Solar Wind Speeds During Solar Minimum: Calibration for Numerical Solar Wind Modeling Constraints on the Source of the Slow Solar Wind (Postprint)

    DTIC Science & Technology

    2012-03-05

    subsonic corona below the critical point, resulting in an increased scale height and mass flux, while keeping the kinetic energy of the flow fairly... For tubes with small expansion factors the heating occurs in the supersonic corona, where the energy goes into the kinetic energy of the solar wind, increasing the flow speed [Leer and Holzer, 1980; Pneuman, 1980]. Using this model and a simplified

  17. Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems

    NASA Astrophysics Data System (ADS)

    Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel

    The dual fermion method is a multiscale approach for solving lattice problems of interacting strongly correlated systems. In this paper, we present the opendf code, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3 and 4. The method is built on a dynamical mean field starting point, which neglects all local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU public license version 2.

  18. Diagnosing the Fine Structure of Electron Energy Within the ECRIT Ion Source

    NASA Astrophysics Data System (ADS)

    Jin, Yizhou; Yang, Juan; Tang, Mingjie; Luo, Litao; Feng, Bingbing

    2016-07-01

    The ion source of the electron cyclotron resonance ion thruster (ECRIT) extracts ions from its ECR plasma to generate thrust, and offers low gas consumption (2 sccm, standard-state cubic centimeters per minute) and high durability. Because primary electrons play an indispensable role in the gas discharge, it is important to experimentally clarify the electron energy structure within the ion source of the ECRIT by analyzing the electron energy distribution function (EEDF) of the plasma inside the thruster. In this article the Langmuir probe method was used to diagnose the EEDF, from which the effective electron temperature, plasma density and the electron energy probability function (EEPF) were deduced. The experimental results show that the magnetic field influences the EEDF and EEPF curves and makes the effective plasma parameters nonuniform. The diagnosed electron temperature and density at the sample points increased from 4 eV / 2×10^16 m^-3 to 10 eV / 4×10^16 m^-3 with increasing distance from both the axis and the screen grid of the ion source. Electron temperature and density peaking near the wall coincided with the discharge process. However, a double Maxwellian electron distribution was unexpectedly observed at the position near the axis of the ion source and about 30 mm from the screen grid. Moreover, the double Maxwellian electron distribution was more likely to emerge at high power and a low gas flow rate. These phenomena are believed to relate to the arrangement of the gas inlets and the magnetic field where the double Maxwellian electron distribution exists. The results of this research may enhance the understanding of the plasma generation process in ion sources of this type and help to improve their performance. Supported by the National Natural Science Foundation of China (No. 11475137)
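
    A minimal sketch of the standard Druyvesteyn procedure for extracting an EEDF/EEPF from a Langmuir probe I-V trace, of the kind described above; the probe area and the (assumed noise-free) arrays are illustrative, and a real trace would need smoothing before double differentiation.

        import numpy as np

        e = 1.602176634e-19      # C
        m_e = 9.1093837015e-31   # kg

        def druyvesteyn_eedf(V_bias, I_e, V_plasma, A_probe):
            """Druyvesteyn estimate of the EEDF/EEPF from a probe I-V trace.

            V_bias  : probe bias array (V), below the plasma potential V_plasma (V)
            I_e     : electron current array (A) at those biases
            A_probe : probe collection area (m^2)
            Returns (energy_eV, eedf, eepf) with eedf in m^-3 eV^-1 and eepf = eedf/sqrt(energy).
            """
            d2I = np.gradient(np.gradient(I_e, V_bias), V_bias)   # numerical d^2 I_e / dV^2
            energy_eV = V_plasma - V_bias                          # electron energy in eV
            keep = energy_eV > 0
            energy_eV, d2I = energy_eV[keep], d2I[keep]
            eedf = (2.0 * m_e / (e**2 * A_probe)) * np.sqrt(2.0 * e * energy_eV / m_e) * d2I
            return energy_eV, eedf, eedf / np.sqrt(energy_eV)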

  19. Modeling of mineral dust in the atmosphere: Sources, transport, and optical thickness

    NASA Technical Reports Server (NTRS)

    Tegen, Ina; Fung, Inez

    1994-01-01

    A global three-dimensional model of the atmospheric mineral dust cycle is developed for the study of its impact on the radiative balance of the atmosphere. The model includes four size classes of mineral dust, whose source distributions are based on the distributions of vegetation, soil texture and soil moisture. Uplift and deposition are parameterized using analyzed winds and rainfall statistics that resolve high-frequency events. Dust transport in the atmosphere is simulated with the tracer transport model of the Goddard Institute for Space Studies. The simulated seasonal variations of dust concentrations show generally reasonable agreement with the observed distributions, as do the size distributions at several observing sites. The discrepancies between the simulated and the observed dust concentrations point to regions of significant land surface modification. Monthly distributions of aerosol optical depth are calculated from the distribution of dust particle sizes. The maximum optical depth due to dust is 0.4-0.5 in the seasonal mean. The main uncertainties, about a factor of 3-5, in calculating optical thicknesses arise from the crude resolution of soil particle sizes, from insufficient constraint by the total dust loading in the atmosphere, and from our ignorance about adhesion, agglomeration, uplift, and size distributions of fine dust particles (less than 1 micrometer).

  20. Distribution of trace elements in the coastal sea sediments of Maslinica Bay, Croatia

    NASA Astrophysics Data System (ADS)

    Mikulic, Nenad; Orescanin, Visnja; Elez, Loris; Pavicic, Ljiljana; Pezelj, Durdica; Lovrencic, Ivanka; Lulic, Stipe

    2008-02-01

    Spatial distributions of trace elements in the coastal sea sediments and water of Maslinica Bay (Southern Adriatic), Croatia, and possible changes in marine flora and foraminifera communities due to pollution were investigated. Macro, micro and trace element distributions in five granulometric fractions were determined for each sediment sample. Bulk sediment samples were also subjected to leaching tests. Elemental concentrations in sediments, sediment extracts and seawater were measured by source-excited energy dispersive X-ray fluorescence (EDXRF). Concentrations of the elements Cr, Cu, Zn, and Pb in bulk sediment samples taken in Maslinica Bay were enriched by factors of 2.1 to more than 6 relative to the background level determined for coarse-grained carbonate sediments. The low degree of trace element leaching determined for the bulk sediments pointed to strong bonding of trace elements to the sediment mineral phases. The analyses of marine flora pointed to higher eutrophication, which disturbs the balance between communities and natural habitats.

  1. Distribution of agrochemicals in the lower Mississippi River and its tributaries

    USGS Publications Warehouse

    Pereira, W.E.; Rostad, C.E.; Leiker, T.J.

    1990-01-01

    The Mississippi River and its tributaries drain extensive agricultural regions of the Mid-Continental United States. Millions of pounds of herbicides are applied annually in these areas to improve crop yields. Many of these compounds are transported into the river from point and nonpoint sources, and eventually are discharged into the Gulf of Mexico. Studies being conducted by the U.S. Geological Survey along the lower Mississippi River and its major tributaries, representing a 2000 km river reach, have confirmed that several triazine and acetanilide herbicides and their degradation products are ubiquitous in this riverine system. These compounds include atrazine and its degradation products desethyl and desisopropylatrazine, cyanazine, simazine, metolachlor, and alachlor and its degradation products 2-chloro-2',6'-diethylacetanilide, 2-hydroxy-2',6-diethylacetanilide and 2,6-diethylaniline. Loads of these compounds were determined at 16 different sampling stations. Stream-load calculations provided information concerning (a) conservative or nonconservative behavior of herbicides; (b) point sources or nonpoint sources; (c) validation of sampling techniques; and (d) transport past each sampling station.

  2. Influence of rainfall data scarcity on non-point source pollution prediction: Implications for physically based models

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Xu, Jiajia; Wang, Guobo; Liu, Hongbin; Zhai, Limei; Li, Shuang; Sun, Cheng; Shen, Zhenyao

    2018-07-01

    Hydrological and non-point source pollution (H/NPS) predictions in ungauged basins have become a key problem for watershed studies, especially for large-scale catchments. However, few studies have explored the comprehensive impacts of rainfall data scarcity on H/NPS predictions. This study focused on: 1) the effects of rainfall spatial scarcity (by removing 11%-67% of stations based on their locations) on the H/NPS results; 2) the impacts of rainfall temporal scarcity (10%-60% data scarcity in the time series); and 3) the development of a new evaluation method that incorporates information entropy. A case study was undertaken using the Soil and Water Assessment Tool (SWAT) in a typical watershed in China. The results of this study highlighted the importance of critical-site rainfall stations, which often showed greater influence and cross-tributary impacts on the H/NPS simulations. Higher missing rates above a certain threshold, as well as missing locations during the wet periods, resulted in poorer simulation results. Compared to traditional indicators, information entropy could serve as a good substitute because it reflects the distribution of spatial variability and the development of temporal heterogeneity. This paper reports important implications for the application of Distributed Hydrological Models and Semi-distributed Hydrological Models, as well as for the optimal design of rainfall gauges among large basins.
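
    As a generic stand-in for the information-entropy indicator mentioned above (the paper's exact formulation is not reproduced here), a minimal Shannon-entropy sketch for a rainfall sample; the binning choice is arbitrary.

        import numpy as np

        def shannon_entropy(values, bins=20):
            """Shannon entropy (bits) of a sample, e.g. rainfall depths across stations or time."""
            counts, _ = np.histogram(values, bins=bins)
            p = counts / counts.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())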

  3. Mapping the scientific research on non-point source pollution: a bibliometric analysis.

    PubMed

    Yang, Beibei; Huang, Kai; Sun, Dezhi; Zhang, Yue

    2017-02-01

    A bibliometric analysis was conducted to examine the progress and future research trends of non-point source (NPS) pollution during the years 1991-2015 based on the Science Citation Index Expanded (SCI-Expanded) of Web of Science (WoS). The publications referencing NPS pollution were analyzed with respect to the following aspects: document type, publication language, publication output and characteristics, subject category, source journal, distribution of country and institution, author keywords, etc. The results indicate that the study of NPS pollution has shown a sharply increasing trend since 1991. Article and English were the most commonly used document type and language. Environmental sciences and ecology, water resources, and engineering were the top three subject categories. Water Science and Technology ranked first among source journals, followed by Science of the Total Environment and Environmental Monitoring and Assessment. The USA took a leading position in both quantity and quality, playing an important role in the research field of NPS pollution, followed by the UK and China. The most productive institution was the Chinese Academy of Sciences (Chinese Acad Sci), followed by Beijing Normal University and the US Department of Agriculture's Agricultural Research Service (USDA ARS). The analysis of author keywords indicates that the major hotspots of NPS pollution research from 1991 to 2015 included "water," "model," "agriculture," "nitrogen," "phosphorus," etc. The results provide a comprehensive understanding of NPS pollution research and help readers to establish future research directions.

  4. Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz

    NASA Astrophysics Data System (ADS)

    Nikitenko, Ya.

    2016-11-01

    Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities to study the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are reviewed. A rigorous mathematical approach for neutrino direction studies has been developed. Exact expressions have been obtained for the precision of the simple mean estimator of the neutrino direction, for normal and exponential distributions, both for a finite sample and in the limiting case of many events.
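
    A minimal sketch of the simple mean estimator discussed above, assuming each inverse-beta-decay event's direction is approximated by the unit vector from the prompt (positron) vertex to the delayed (neutron-capture) vertex; the array names are illustrative.

        import numpy as np

        def mean_direction(prompt_xyz, delayed_xyz):
            """Simple mean estimator of the antineutrino direction from IBD vertex pairs.

            prompt_xyz, delayed_xyz : (N, 3) arrays of reconstructed vertex positions.
            Returns the unit vector of the mean displacement and its length
            (a length near 0 indicates little directional information).
            """
            d = delayed_xyz - prompt_xyz
            u = d / np.linalg.norm(d, axis=1, keepdims=True)   # per-event unit vectors
            m = u.mean(axis=0)
            r = np.linalg.norm(m)
            return m / r, r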

  5. Propagation of Solar Energetic Particles in Three-dimensional Interplanetary Magnetic Fields: Radial Dependence of Peak Intensities

    NASA Astrophysics Data System (ADS)

    He, H.-Q.; Zhou, G.; Wan, W.

    2017-06-01

    A functional form I_max(R) = kR^(-α), where R is the radial distance of a spacecraft, has usually been used to model the radial dependence of peak intensities I_max(R) of solar energetic particles (SEPs). In this work, the five-dimensional Fokker-Planck transport equation incorporating perpendicular diffusion is numerically solved to investigate the radial dependence of SEP peak intensities. We consider two different scenarios for the distribution of a spacecraft fleet: (1) along the radial direction line and (2) along the Parker magnetic field line. We find that the index α in the above expression varies over a wide range, depending primarily on the properties (e.g., location and coverage) of the SEP sources and on the longitudinal and latitudinal separations between the sources and the magnetic foot points of the observers. In particular, whether the magnetic foot point of the observer is located inside or outside the SEP source is a crucial factor determining the value of the index α. A two-phase phenomenon is found in the radial dependence of peak intensities. The “position” of the break point (transition point/critical point) is determined by the magnetic connection status of the observers. This finding suggests that a very careful examination of the magnetic connection between the SEP source and each spacecraft should be made in observational studies. We obtain a lower limit of R^(-1.7±0.1) for empirically modeling the radial dependence of SEP peak intensities. Our findings in this work can be used to explain the majority of previous multispacecraft survey results, and especially to reconcile the different or conflicting empirical values of the index α in the literature.
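
    A minimal sketch of estimating the index α by a log-log least-squares fit of I_max(R) = k * R^(-α); the radial distances and intensities below are synthetic, illustrative values only.

        import numpy as np

        def fit_radial_index(R, I_max):
            """Least-squares fit of I_max(R) = k * R**(-alpha) in log-log space; returns (alpha, k)."""
            slope, intercept = np.polyfit(np.log(R), np.log(I_max), 1)
            return -slope, np.exp(intercept)

        R = np.array([0.3, 0.7, 1.0, 1.5, 3.0])                     # au, illustrative
        I = 5.0 * R ** -1.7 * np.exp(np.random.default_rng(1).normal(0.0, 0.1, R.size))
        alpha, k = fit_radial_index(R, I)
        print("alpha ~", alpha)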

  6. 3-D Modeling of Irregular Volcanic Sources Using Sparsity-Promoting Inversions of Geodetic Data and Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Zhai, Guang; Shirzaei, Manoochehr

    2017-12-01

    Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
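
    A generic sketch of a hybrid L1/L2-regularized least-squares inversion of the kind described above, solved with a simple proximal-gradient (ISTA) loop; the Green's function matrix, regularization weights and iteration count are illustrative, and this is not the authors' specific algorithm.

        import numpy as np

        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def sparse_volume_change(G, d, lam1=1e-2, lam2=1e-3, n_iter=5000):
            """Minimize ||G m - d||^2 + lam2*||m||^2 + lam1*||m||_1 by proximal gradient.

            G : (n_data, n_cells) Green's function matrix (surface displacement per unit
                volume change in each candidate cell); d : observed displacements.
            Returns the sparse volume-change vector m.
            """
            m = np.zeros(G.shape[1])
            L = 2.0 * (np.linalg.norm(G, 2) ** 2 + lam2)   # Lipschitz constant of the smooth part
            step = 1.0 / L
            for _ in range(n_iter):
                grad = 2.0 * G.T @ (G @ m - d) + 2.0 * lam2 * m
                m = soft_threshold(m - step * grad, step * lam1)
            return m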

  7. Environmental contaminants in bald eagle eggs from the Aleutian archipelago

    USGS Publications Warehouse

    Anthony, R.G.; Miles, A.K.; Ricca, M.A.; Estes, J.A.

    2007-01-01

    We collected 136 fresh and unhatched eggs from bald eagle (Haliaeetus leucocephalus) nests and assessed productivity on eight islands in the Aleutian archipelago, 2000 to 2002. Egg contents were analyzed for a broad spectrum of organochlorine (OC) contaminants, mercury (Hg), and stable isotopes of carbon (δ13C) and nitrogen (δ15N). Concentrations of polychlorinated biphenyls (ΣPCBs), p,p′-dichlorodiphenyldichloroethylene (DDE), and Hg in bald eagle eggs were elevated throughout the archipelago, but the patterns of distribution differed among the various contaminants. Total PCBs were highest in areas of past military activities on Adak and Amchitka Islands, indicating local point sources of these compounds. Concentrations of DDE and Hg were higher on Amchitka Island, which was subjected to much military activity during World War II and the middle of the 20th century. Concentrations of ΣPCBs also were elevated on islands with little history of military activity (e.g., Amlia, Tanaga, Buldir), suggesting non-point sources of PCBs in addition to point sources. Concentrations of DDE and Hg were highest in eagle eggs from the most western Aleutian Islands (e.g., Buldir, Kiska) and decreased eastward along the Aleutian chain. This east-to-west increase suggested a Eurasian source of contamination, possibly through global transport and atmospheric distillation and/or from migratory seabirds. Eggshell thickness and productivity of bald eagles were normal and indicative of healthy populations because concentrations of most contaminants were below threshold levels for effects on reproduction. Contrary to our predictions, contaminant concentrations were not correlated with stable isotopes of carbon (δ13C) or nitrogen (δ15N) in eggs. These latter findings indicate that contaminant concentrations were influenced more by point sources and geographic location than by the trophic status of eagles among the different islands. © 2007 SETAC.

  8. Suitability of point kernel dose calculation techniques in brachytherapy treatment planning

    PubMed Central

    Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.

    2010-01-01

    A brachytherapy treatment planning system (TPS) is necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effects of tissue, applicator and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point, taking care of only the contributions of individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. There is some degree of uncertainty in dose rate estimation under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, to suit clinical conditions. BrachyTPS is an interactive point kernel code package developed to perform independent dose rate calculations by taking into account the effect of these heterogeneities, using the two-region build-up factors proposed by Kalos. The primary aim of this study is to validate the developed point kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator and (iii) the Fletcher Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around the commercially available shielded vaginal applicator set (Nucletron). The percentage deviations of BrachyTPS-computed dose rate values from the MC results are observed to be within ±5.5% for the BRIT LDR applicator, vary from 2.6 to 5.1% for the Fletcher Green type LDR applicator, and are up to −4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with results from the previous literature. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement (less than two per cent deviation) with MC results in the unshielded region compared to the shielded region, where deviations of up to five per cent are observed. The present study implies that accurate and fast validation of complicated treatment planning calculations is possible with the point kernel code package. PMID:20589118
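
    A minimal sketch of the point-kernel idea underlying such calculations: exponential attenuation around an isotropic point source corrected by a buildup factor. A Berger-form buildup with illustrative coefficients is used here purely for demonstration, rather than the Kalos two-region build-up factors adopted in BrachyTPS.

        import numpy as np

        def berger_buildup(mu_r, a=1.0, b=0.05):
            """Berger-form buildup factor B = 1 + a*mu_r*exp(b*mu_r); coefficients illustrative."""
            return 1.0 + a * mu_r * np.exp(b * mu_r)

        def point_kernel_flux(S, r_cm, mu_cm):
            """Photon flux from an isotropic point source with attenuation and buildup.

            S : source strength (photons/s); r_cm : distance (cm);
            mu_cm : linear attenuation coefficient of the medium (1/cm).
            Dose rate follows by multiplying with an energy-absorption conversion factor.
            """
            mu_r = mu_cm * r_cm
            return berger_buildup(mu_r) * S * np.exp(-mu_r) / (4.0 * np.pi * r_cm**2)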

  9. Column Number Density Expressions Through M = 0 and M = 1 Point Source Plumes Along Any Straight Path

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2017-01-01

    Providers of payloads carried aboard the International Space Station must conduct analyses to demonstrate that any planned gaseous venting events generate no more than a certain level of material that may interfere with optical measurements from other experiments or payloads located nearby. This requirement is expressed in terms of a maximum column number density (CND). Depending on the level of rarefaction, such venting may be characterized by effusion for low flow rates, or by a sonic distribution at higher levels. Since the relative locations of other sensitive payloads are often unknown because they may refer to future projects, this requirement becomes a search for the maximum CND along any path. In another application, certain astronomical observations make use of CND to estimate light attenuation from a distant star through gaseous plumes, such as the Fermi Bubbles emanating from the vicinity of the black hole at the center of our Milky Way galaxy, in order to infer the amount of material being expelled via those plumes. This paper presents analytical CND expressions developed for general straight paths based upon a free molecule point source model for steady effusive flow and for a distribution fitted to model flows from a sonic orifice. Among other things, in this Mach number range it is demonstrated that the maximum CND from a distant location occurs along the path parallel to the source plane that intersects the plume axis. For effusive flows this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 43.
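
    A minimal numerical check of the factor-of-two result quoted above for an effusive point-source plume, using an assumed far-field density n ∝ cos(theta)/r^2 in the half-space above the source plane; the constants, path length and step count are illustrative.

        import numpy as np

        def density(x, y, z, n1=1.0):
            """Far-field effusive point-source density, n ~ n1*cos(theta)/r^2 for z > 0."""
            r2 = x * x + y * y + z * z
            return np.where(z > 0.0, n1 * z / (r2 * np.sqrt(r2)), 0.0)

        def column_density(p0, u, s_max=1.0e4, n_steps=400_000):
            """Trapezoid integration of the density along the ray p0 + s*u, 0 < s <= s_max."""
            u = np.asarray(u, float) / np.linalg.norm(u)
            s = np.linspace(1.0e-6, s_max, n_steps)
            x, y, z = np.asarray(p0, float)[:, None] + u[:, None] * s
            n = density(x, y, z)
            return float(np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(s)))

        z0 = 10.0                                                          # height above the source plane
        axial = column_density([0.0, 0.0, z0], [0.0, 0.0, 1.0])            # up the plume axis
        transverse = (column_density([0.0, 0.0, z0], [1.0, 0.0, 0.0]) +    # full line parallel
                      column_density([0.0, 0.0, z0], [-1.0, 0.0, 0.0]))    # to the source plane
        print("transverse / axial =", transverse / axial)                  # ~2 for the cosine-law plume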

  10. Analysis of Temporal and Spatial Distributions of Ammonia Nitrogen in the Huaihe River Basin from 1998 to 2014

    NASA Astrophysics Data System (ADS)

    Xu, J.; Jin, G.; Tang, H.; Li, L.

    2016-12-01

    To assess the effectiveness of water pollution control measures taken in the Huaihe River Basin (HRB) in China, we analyzed the temporal and spatial distributions of ammonia nitrogen (NH3-N) in the river water from 1998 to 2014 (three Chinese Five-year Plan periods). Analysis of measured NH3-N concentrations from various monitoring stations using the STL (seasonal trend decomposition using loess) method and a modified log-linear model revealed that: (1) The rate of NH3-N concentration reduction over the whole period was 70%-81% in the main stream of the Huaihe River, but reached 88% in two major tributaries - the Shaying River and Guo River. (2) The NH3-N concentrations decreased significantly, particularly between the tenth and eleventh Five-year Plan periods, in the main stream. In comparison, significant NH3-N reduction occurred over all three Five-year Plan periods in the Shaying and Guo tributaries. The concentration in the first year of a Five-year Plan period tended to be much higher than that in the last year of the same period, likely due to differences in implementing the pollution control measures. (3) The NH3-N concentrations were higher in the spring (fertilization period) and winter (low discharge) than in the summer and autumn. (4) With the implementation of pollution control measures, the contribution of NH3-N in the two major tributaries from point sources has decreased from 74%-93% in earlier years to 3%-28% in later years. However, NH3-N input from non-point sources appeared to remain stable and to depend largely on runoff. To further reduce the NH3-N concentration in the river, policies and control measures should focus on non-point sources.

  11. Theoretical Investigation of the High-Altitude Cusp Region using Observations from Interball and ISTP Spacecraft

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, Maha

    1998-01-01

    A fundamental goal of magnetospheric physics is to understand the transport of plasma through the solar wind-magnetosphere-ionosphere system. To attain such an understanding, we must determine the sources of the plasma, the trajectories of the particles through the magnetospheric electric and magnetic fields to the point of observation, and the acceleration processes they undergo enroute. This study employed plasma distributions observed in the near-Earth plasma sheet by Interball and Geotail spacecraft together with theoretical techniques to investigate the ion sources and the transport of plasma. We used ion trajectory calculations in magnetic and electric fields from a global Magnetohydrodynamics (MHD) simulation to investigate the transport and to identify common ion sources for ions observed in the near-Earth magnetotail by the Interball and Geotail spacecraft. Our first step was to examine a number of distribution functions and identify distinct boundaries in both configuration and phase space that are indicative of different plasma sources and transport mechanisms. We examined events from October 26, 1995, November 29-30, 1996, and December 22, 1996. During the first event Interball and Geotail were separated by approximately 10 R(sub E) in z, and during the second event the spacecraft were separated by approximately 4 R(sub E). Both of these events had a strong IMF By component pointing toward the dawnside. On October 26, 1995, the IMF B(sub Z) component was northward, and on November 29-30, 1996, the IMF B(sub Z) component was near 0. During the first event, Geotail was located near the equator on the dawn flank, while Interball was for the most part in the lobe region. The distribution function from the Coral instrument on Interball showed less structure and resembled a drifting Maxwellian. The observed distribution on Geotail, on the other hand, included a great number of structures at both low and high energies. During the third event (December 22, 1996) both spacecraft were in the plasma sheet and were separated by approximately 20 R(sub E) in the y direction. During this event the IMF was southward.

  12. Integrated species distribution models: combining presence-background data and site-occupancy data with imperfect detection

    USGS Publications Warehouse

    Koshkina, Vira; Wang, Yang; Gordon, Ascelin; Dorazio, Robert; White, Matthew; Stone, Lewi

    2017-01-01

    Two main sources of data for species distribution models (SDMs) are site-occupancy (SO) data from planned surveys, and presence-background (PB) data from opportunistic surveys and other sources. SO surveys give high quality data about presences and absences of the species in a particular area. However, due to their high cost, they often cover a smaller area relative to PB data, and are usually not representative of the geographic range of a species. In contrast, PB data is plentiful, covers a larger area, but is less reliable due to the lack of information on species absences, and is usually characterised by biased sampling. Here we present a new approach for species distribution modelling that integrates these two data types. We have used an inhomogeneous Poisson point process as the basis for constructing an integrated SDM that fits both PB and SO data simultaneously. It is the first implementation of an Integrated SO–PB Model which uses repeated survey occupancy data and also incorporates detection probability. The Integrated Model's performance was evaluated, using simulated data and compared to approaches using PB or SO data alone. It was found to be superior, improving the predictions of species spatial distributions, even when SO data is sparse and collected in a limited area. The Integrated Model was also found effective when environmental covariates were significantly correlated. Our method was demonstrated with real SO and PB data for the Yellow-bellied glider (Petaurus australis) in south-eastern Australia, with the predictive performance of the Integrated Model again found to be superior. PB models are known to produce biased estimates of species occupancy or abundance. The small sample size of SO datasets often results in poor out-of-sample predictions. Integrated models combine data from these two sources, providing superior predictions of species abundance compared to using either data source alone. Unlike conventional SDMs which have restrictive scale-dependence in their predictions, our Integrated Model is based on a point process model and has no such scale-dependency. It may be used for predictions of abundance at any spatial-scale while still maintaining the underlying relationship between abundance and area.

  13. Moment Analysis Characterizing Water Flow in Repellent Soils from On- and Sub-Surface Point Sources

    NASA Astrophysics Data System (ADS)

    Xiong, Yunwu; Furman, Alex; Wallach, Rony

    2010-05-01

    Water repellency has a significant impact on water flow patterns in the soil profile. Flow tends to become unstable in such soils, which affects the water availability to plants and subsurface hydrology. In this paper, water flow in repellent soils was experimentally studied using the light reflection method. The transient 2D moisture profiles were monitored by a CCD camera for tested soils packed in a transparent flow chamber. Water infiltration experiments and subsequent redistribution from on-surface and subsurface point sources with different flow rates were conducted for two soils of different repellency degrees as well as for a wettable soil. We used spatio-statistical analysis (moments) to characterize the flow patterns. The zeroth moment is related to the total volume of water inside the moisture plume, and the first and second moments correspond to the center of mass and the spatial variances of the moisture plume, respectively. The experimental results demonstrate that both the general shape and size of the wetting plume and the moisture distribution within the plume for the repellent soils are significantly different from those for the wettable soil. The wetting plume in the repellent soils is smaller, narrower, and longer (finger-like), whereas that in the wettable soil tended to roundness. Compared to the wettable soil, where the soil water content decreases radially from the source, the moisture content for the water-repellent soils is higher, relatively uniform horizontally, and gradually increases with depth (saturation overshoot), indicating that flow tends to become unstable. Ellipses, defined around the mass center with semi-axes representing a particular number of spatial variances, were successfully used to simulate the spatial and temporal variation of the moisture distribution in the soil profiles. Cumulative probability functions were defined for the water enclosed in these ellipses. Practically identical cumulative probability functions (beta distribution) were obtained for all soils, all source types, and flow rates. Further, the same distributions were obtained for the infiltration and redistribution processes. This attractive result demonstrates the competence and advantage of the moment analysis method.
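
    The moment analysis itself reduces to a few weighted sums over the imaged moisture field. The sketch below, assuming the transient water-content increment is available on a regular grid, computes the zeroth moment (plume water volume), the first moments (center of mass), and the second central moments (spatial variances); the grid and the synthetic plume are illustrative.

```python
import numpy as np

def plume_moments(theta, x, z):
    """Spatial moments of a 2D moisture plume theta(x, z).

    theta : 2D array of water-content increments on a regular grid
    x, z  : 1D coordinate arrays for the grid columns/rows
    """
    X, Z = np.meshgrid(x, z)
    dx, dz = x[1] - x[0], z[1] - z[0]
    m0 = theta.sum() * dx * dz                 # zeroth moment: total water volume
    xc = (X * theta).sum() * dx * dz / m0      # first moments: center of mass
    zc = (Z * theta).sum() * dx * dz / m0
    sxx = ((X - xc) ** 2 * theta).sum() * dx * dz / m0   # second central moments:
    szz = ((Z - zc) ** 2 * theta).sum() * dx * dz / m0   # spatial variances
    return m0, (xc, zc), (sxx, szz)

# Toy example: a synthetic Gaussian plume on a 1 cm grid.
x = z = np.arange(0.0, 30.0, 1.0)
X, Z = np.meshgrid(x, z)
theta = np.exp(-((X - 15.0) ** 2 / 8.0 + (Z - 10.0) ** 2 / 20.0))
print(plume_moments(theta, x, z))
```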

  14. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
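
    For readers unfamiliar with the sampling machinery, the sketch below shows a bare-bones Hamiltonian Monte Carlo step with a leapfrog integrator on a toy two-dimensional Gaussian target; the polymer-like partition-function formulation, the heavier measurement beads, and the multiple-time-scale splitting described above are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
var = np.array([1.0, 0.1])                               # toy target variances

def log_prob(q):
    return -0.5 * np.sum(q**2 / var)                     # toy 2D Gaussian "posterior"

def grad_log_prob(q):
    return -q / var

def hmc_step(q, eps=0.05, n_leap=30):
    p = rng.normal(size=q.shape)                         # sample momenta
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_prob(q_new)            # leapfrog integration
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new += eps * grad_log_prob(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_prob(q_new)
    # Metropolis accept/reject on the total energy H = -log_prob + kinetic term.
    h_old = -log_prob(q) + 0.5 * np.sum(p**2)
    h_new = -log_prob(q_new) + 0.5 * np.sum(p_new**2)
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

q, samples = np.zeros(2), []
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q)
print(np.var(np.array(samples), axis=0))                 # roughly [1.0, 0.1]
```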

  15. Total Nitrogen Sources of the Three Gorges Reservoir — A Spatio-Temporal Approach

    PubMed Central

    Ren, Chunping; Wang, Lijing; Zheng, Binghui; Holbach, Andreas

    2015-01-01

    Understanding the spatial and temporal variation of nutrient concentrations, loads, and their distribution from upstream tributaries is important for the management of large lakes and reservoirs. The Three Gorges Dam was built on the Yangtze River in China, the world's third longest river, and impounded the famous Three Gorges Reservoir (TGR). In this study, we analyzed total nitrogen (TN) concentrations and inflow data from 2003 to 2010 for the main upstream tributaries of the TGR, which contribute about 82% of the TGR's total inflow. We used time series analysis for seasonal decomposition of TN concentrations and used non-parametric statistical tests (Kruskal-Wallis H, Mann-Whitney U) as well as base flow segmentation to analyze significant spatial and temporal patterns of TN pollution input into the TGR. Our results show that TN concentrations had significant spatial heterogeneity across the study area (Tuo River > Yangtze River > Wu River > Min River > Jialing River > Jinsha River). Furthermore, we derived apparent seasonal changes in three out of five upstream tributaries of the TGR (Kruskal-Wallis H, p = 0.009, 0.030 and 0.029 for the Tuo River, Jinsha River and Min River, respectively). TN pollution from non-point sources in the upstream tributaries accounted for 68.9% of the total TN input into the TGR. Non-point source pollution of TN revealed increasing trends for four out of five upstream tributaries of the TGR. Land use/cover and soil type were identified as the dominant driving factors for the spatial distribution of TN. Intensifying agriculture and increasing urbanization in the upstream catchments of the TGR were the main driving factors for the increase in non-point source pollution of TN from 2003 to 2010. Land use and land cover management as well as restriction of chemical fertilizer use are needed to overcome the threats of increasing TN pollution. PMID:26510158

  16. Evaluating Air-Quality Models: Review and Outlook.

    NASA Astrophysics Data System (ADS)

    Weil, J. C.; Sykes, R. I.; Venkatram, A.

    1992-10-01

    Over the past decade, much attention has been devoted to the evaluation of air-quality models with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (σc) about the mean concentration (C) over an ensemble, that is, a given set of meteorological, source, and other conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. When σc is comparable to or larger than C, large residuals exist between predicted and observed concentrations, which confuse model evaluations. This paper addresses ways of evaluating model physics in light of the large σc; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error, the difference between the predicted and observed C, from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals as well as correlation analyses and laboratory data in judging model physics. In general, models of σc and predictions of the probability distribution of the fluctuating concentration (c) are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that the distribution of c approximates a self-similar distribution along the plume center plane, a useful result in a residual analysis. At present, there is one model, ARAP, that predicts C, σc, and the distribution of c for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool. However, it predicts an important quantity for applications: the uncertainty in the very high and infrequent concentrations. The uncertainty is large and is needed in evaluating operational performance and in predicting the attainment of air-quality standards.

  17. Monte Carlo calculations of energy deposition distributions of electrons below 20 keV in protein.

    PubMed

    Tan, Zhenyu; Liu, Wei

    2014-05-01

    The distributions of energy deposition of electrons in semi-infinite bulk protein and the radial dose distributions of point-isotropic mono-energetic electron sources [i.e., the so-called dose point kernel (DPK)] in protein have been systematically calculated in the energy range below 20 keV, based on Monte Carlo methods. The ranges of electrons have been evaluated by extrapolating the two calculated distributions, respectively, and the evaluated ranges are compared with the electron mean path length in protein, which has been calculated using the electron inelastic cross sections described in this work in the continuous-slowing-down approximation. It has been found that, for a given energy, the electron mean path length is smaller than the electron range evaluated from the DPK but larger than the electron range obtained from the energy deposition distributions of electrons in semi-infinite bulk protein. The energy dependences of the extrapolated electron ranges based on the two investigated distributions are given, respectively, in power-law form. In addition, the DPK in protein has also been compared with that in liquid water. An evident difference between the two DPKs is observed. The calculations presented in this work may be useful in studies of radiation effects on proteins.
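
    The continuous-slowing-down approximation mentioned above amounts to integrating the reciprocal stopping power up to the initial energy. The sketch below illustrates that integral with a placeholder stopping-power function; the function, units, and printed values are illustrative assumptions, not the inelastic cross sections computed in the paper.

```python
import numpy as np

def csda_range(E0, stopping_power, n=2000):
    """Mean path length in the continuous-slowing-down approximation:
    R(E0) = integral from ~0 to E0 of dE / S(E), with S the total stopping power."""
    E = np.linspace(1e-3, E0, n)              # avoid the singularity at E = 0
    return np.trapz(1.0 / stopping_power(E), E)

# Placeholder stopping power with a roughly 1/E shape (arbitrary units);
# the paper instead uses inelastic cross sections computed for protein.
S = lambda E: 0.02 + 0.5 / (E + 1.0)

for E0 in (1.0, 5.0, 20.0):                   # incident energies in keV
    print(f"E0 = {E0:4.1f} keV -> CSDA path length {csda_range(E0, S):7.1f} (arbitrary units)")
```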

  18. Bacterial community structure in the drinking water microbiome is governed by filtration processes.

    PubMed

    Pinto, Ameet J; Xi, Chuanwu; Raskin, Lutgarde

    2012-08-21

    The bacterial community structure of a drinking water microbiome was characterized over three seasons using 16S rRNA gene-based pyrosequencing of samples obtained from source water (a mix of a groundwater and a surface water), different points in a drinking water plant operated to treat this source water, and in the associated drinking water distribution system. Even though the source water was shown to seed the drinking water microbiome, treatment process operations limit the source water's influence on the distribution system bacterial community. Rather, in this plant, filtration by dual-media rapid sand filters played a primary role in shaping the distribution system bacterial community over seasonal time scales, as the filters harbored a stable bacterial community that seeded the water treatment processes past filtration. Bacterial taxa that colonized the filter and sloughed off in the filter effluent were able to persist in the distribution system despite disinfection of finished water by chloramination and filter backwashing with chloraminated backwash water. Thus, filter colonization presents a possible ecological survival strategy for bacterial communities in drinking water systems, offering an opportunity to control the drinking water microbiome by manipulating the filter microbial community. Grouping bacterial taxa based on their association with the filter helped to elucidate relationships between the abundance of bacterial groups and water quality parameters and showed that pH was the strongest regulator of the bacterial community in the sampled drinking water system.

  19. Seismic Sources and Recurrence Rates as Adopted by USGS Staff for the Production of the 1982 and 1990 Probabilistic Ground Motion Maps for Alaska and the Conterminous United States

    USGS Publications Warehouse

    Hanson, Stanley L.; Perkins, David M.

    1995-01-01

    The construction of a probabilistic ground-motion hazard map for a region follows a sequence of analyses beginning with the selection of an earthquake catalog and ending with the mapping of calculated probabilistic ground-motion values (Hanson and others, 1992). An integral part of this process is the creation of sources used for the calculation of earthquake recurrence rates and ground motions. These sources consist of areas and lines that are representative of geologic or tectonic features and faults. After the design of the sources, it is necessary to arrange the coordinate points in a particular order compatible with the input format for the SEISRISK-III program (Bender and Perkins, 1987). Source zones are usually modeled as a point-rupture source. Where applicable, linear rupture sources are modeled with articulated lines, representing known faults, or a field of parallel lines, representing a generalized distribution of hypothetical faults. Based on the distribution of earthquakes throughout the individual source zones (or a collection of several sources), earthquake recurrence rates are computed for each of the sources, and minimum and maximum magnitudes are assigned. Over a period of time from 1978 to 1980, several conferences were held by the USGS to solicit information on regions of the United States for the purpose of creating source zones for computation of probabilistic ground motions (Thenhaus, 1983). As a result of these regional meetings and previous work in the Pacific Northwest (Perkins and others, 1980), the California continental shelf (Thenhaus and others, 1980), and the Eastern outer continental shelf (Perkins and others, 1979), a consensus set of source zones was agreed upon and subsequently used to produce a national ground motion hazard map for the United States (Algermissen and others, 1982). In this report and on the accompanying disk we provide a complete list of source areas and line sources as used for the 1982 and later 1990 seismic hazard maps for the conterminous U.S. and Alaska. These source zones are represented in the input form required for the hazard program SEISRISK-III, and they include the attenuation table and several other input parameter lines normally found at the beginning of an input data set for SEISRISK-III.

  20. Distribution of borates around point source injections in wood members exposed outside

    Treesearch

    Rodney C. De Groot; Colin C. Felton; Douglas M. Crawford

    2000-01-01

    In bridge timbers, wood decay is usually found where water has accessed the end-grain surfaces. In preservative-treated members, end-grain surfaces are most likely to be those resulting from on-site framing cuts or borings. Because these at-risk surfaces are easy to see, it seems feasible to establish a program where diffusible preservatives are repetitively inserted...

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Soumya D.; Nayak, Rajesh K.

    The space based gravitational wave detector LISA (Laser Interferometer Space Antenna) is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10{sup -4} to 10{sup -3} Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude--the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population in frequency and sky position can be reconstructed using a tomographic approach. The method is linear and the reconstruction of a delta-function distribution, corresponding to an isolated binary, yields a point spread function (psf). An arbitrary distribution and its reconstruction are related via smoothing with this psf. Exploratory results are reported demonstrating the recovery of binary sources, in the presence of white Gaussian noise.

  2. The electron foreshock

    NASA Technical Reports Server (NTRS)

    Fitzenreiter, R. J.

    1995-01-01

    An overview of the observations of backstreaming electrons in the foreshock and the mechanisms that have been proposed to explain their properties will be presented. A primary characteristic of observed foreshock electrons is that their velocity distributions are spatially structured in a systematic way depending on distance from the magnetic field line which is tangent to the shock. There are two interrelated aspects to explaining the structure of velocity distributions in the foreshock, one involving the acceleration mechanism and the other, propagation from the source to the observing point. First, the source distribution of electrons energized by the shock must be determined along the shock surface. Proposed acceleration mechanisms include magnetic mirroring of incoming solar wind particles and mechanisms involving transmission of particles through the shock. Secondly, the kinematics of observable electrons streaming away from a curved shock with an initial parallel velocity and a downstream perpendicular velocity component due to the motional electric field must be determined. This is the context in which the observations and their explanations will be reviewed.

  3. Fermi-LAT observations of the diffuse γ-ray emission: Implications for cosmic rays and the interstellar medium

    DOE PAGES

    Ackermann, M.; Ajello, M.; Atwood, W. B.; ...

    2012-04-09

    The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Our observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. In order to assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X CO factor, the ratio between integrated CO-line intensity and H2 column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. Here, we provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.

  4. Fermi-LAT Observations of the Diffuse γ-Ray Emission: Implications for Cosmic Rays and the Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Ajello, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Brandt, T. J.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Cavazzuti, E.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Cutini, S.; de Angelis, A.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Falletti, L.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Focke, W. B.; Fortin, P.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gaggero, D.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grove, J. E.; Guiriec, S.; Gustafsson, M.; Hadasch, D.; Hanabata, Y.; Harding, A. K.; Hayashida, M.; Hays, E.; Horan, D.; Hou, X.; Hughes, R. E.; Jóhannesson, G.; Johnson, A. S.; Johnson, R. P.; Kamae, T.; Katagiri, H.; Kataoka, J.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Lee, S.-H.; Lemoine-Goumard, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Naumann-Godo, M.; Norris, J. P.; Nuss, E.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Pierbattista, M.; Piron, F.; Pivato, G.; Porter, T. A.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Sadrozinski, H. F.-W.; Sgrò, C.; Siskind, E. J.; Spandre, G.; Spinelli, P.; Strong, A. W.; Suson, D. J.; Takahashi, H.; Tanaka, T.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tinivella, M.; Torres, D. F.; Tosti, G.; Troja, E.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; Waite, A. P.; Wang, P.; Winer, B. L.; Wood, K. S.; Wood, M.; Yang, Z.; Ziegler, M.; Zimmer, S.

    2012-05-01

    The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. To assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X CO factor, the ratio between integrated CO-line intensity and H2 column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. We also provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.

  5. FERMI-LAT OBSERVATIONS OF THE DIFFUSE {gamma}-RAY EMISSION: IMPLICATIONS FOR COSMIC RAYS AND THE INTERSTELLAR MEDIUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Ajello, M.; Bechtol, K.

    The {gamma}-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse {gamma}-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. To assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X{sub CO} factor, the ratio between integrated CO-line intensity and H{sub 2} column density, the fluxes and spectra of the {gamma}-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as {gamma}-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. We also provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.

  6. Fermi-LAT observations of the diffuse γ-ray emission: Implications for cosmic rays and the interstellar medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Ajello, M.; Atwood, W. B.

    The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Our observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. In order to assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X CO factor, the ratio between integrated CO-line intensity and H2 column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. Here, we provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.

  7. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of MRI magnet.

    PubMed

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-02-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is then used to estimate the object's conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼ 1.5 mm spatial resolution, corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction.

  8. Chemical and microbial characteristics of municipal drinking water supply systems in the Canadian Arctic.

    PubMed

    Daley, Kiley; Truelstrup Hansen, Lisbeth; Jamieson, Rob C; Hayward, Jenny L; Piorkowski, Greg S; Krkosek, Wendy; Gagnon, Graham A; Castleden, Heather; MacNeil, Kristen; Poltarowicz, Joanna; Corriveau, Emmalina; Jackson, Amy; Lywood, Justine; Huang, Yannan

    2017-06-13

    Drinking water in the vast Arctic Canadian territory of Nunavut is sourced from surface water lakes or rivers and transferred to man-made or natural reservoirs. The raw water is at a minimum treated by chlorination and distributed to customers either by trucks delivering to a water storage tank inside buildings or through a piped distribution system. The objective of this study was to characterize the chemical and microbial drinking water quality from source to tap in three hamlets (Coral Harbour, Pond Inlet and Pangnirtung, each with a population of <2000) on trucked service, and in Iqaluit (population ~6700), which uses a combination of trucked and piped water conveyance. Generally, the source and drinking water was of satisfactory microbial quality, containing Escherichia coli levels of <1 MPN/100 mL with a few exceptions, and selected pathogenic bacteria and parasites were below detection limits using quantitative polymerase chain reaction (qPCR) methods. Tap water in households receiving trucked water contained less than the recommended 0.2 mg/L of free chlorine, while piped drinking water in Iqaluit complied with Health Canada guidelines for residual chlorine (i.e. >0.2 mg/L free chlorine). Some buildings in the four communities contained manganese (Mn), copper (Cu), iron (Fe) and/or lead (Pb) concentrations above Health Canada guideline values for the aesthetic (Mn, Cu and Fe) and health (Pb) objectives. Corrosion of components of the drinking water distribution system (household storage tanks, premise plumbing) could be contributing to Pb, Cu and Fe levels, as the source water in three of the four communities had low alkalinity. The results point to the need for robust disinfection, which may include secondary disinfection or point-of-use disinfection, to prevent microbial risks in drinking water tanks in buildings and ultimately at the tap.

  9. Evaluation of a Proposed Biodegradable 188Re Source for Brachytherapy Application

    PubMed Central

    Khorshidi, Abdollah; Ahmadinejad, Marjan; Hamed Hosseini, S.

    2015-01-01

    This study aimed to evaluate dosimetric characteristics based on Monte Carlo (MC) simulations for a proposed beta-emitting bioglass 188Re seed for internal radiotherapy applications. The bioactive glass seed has been developed using the sol-gel technique. The simulations were performed for the seed using an MC radiation transport code to investigate the dosimetric factors recommended by the AAPM Task Group 60 (TG-60). Dose distributions due to the beta and photon radiation were predicted at different radial distances surrounding the source. The dose rate in water at the reference point was calculated to be 7.43 ± 0.5 cGy/h/μCi. The dosimetric factors consisting of the reference point dose rate, D(r0,θ0), the radial dose function, g(r), the 2-dimensional anisotropy function, F(r,θ), the 1-dimensional anisotropy function, φan(r), and the R90 quantity were estimated and compared with several available beta-emitting sources. The element 188Re incorporated in bioactive glasses produced by the sol-gel technique provides a suitable route to new seed-implant materials for brachytherapy applications in the treatment of prostate and liver cancers. The dose distribution of the 188Re seed was more isotropic than that of other commercially available encapsulated seeds, since it has no end weld to attenuate radiation. The beta-emitting 188Re source provides high doses of local radiation to the tumor tissue, and the short range of the beta particles limits damage to the adjacent normal tissue. PMID:26181543

  10. Generalisation of the identity method for determination of high-order moments of multiplicity distributions with a software implementation

    NASA Astrophysics Data System (ADS)

    Maćkowiak-Pawłowska, Maja; Przybyła, Piotr

    2018-05-01

    The incomplete particle identification limits the experimentally-available phase space region for identified particle analysis. This problem affects ongoing fluctuation and correlation studies including the search for the critical point of strongly interacting matter performed on SPS and RHIC accelerators. In this paper we provide a procedure to obtain nth order moments of the multiplicity distribution using the identity method, generalising previously published solutions for n=2 and n=3. Moreover, we present an open source software implementation of this computation, called Idhim, that allows one to obtain the true moments of identified particle multiplicity distributions from the measured ones provided the response function of the detector is known.

  11. Lenstronomy: Multi-purpose gravitational lens modeling software package

    NASA Astrophysics Data System (ADS)

    Birrer, Simon; Amara, Adam

    2018-04-01

    Lenstronomy is a multi-purpose open-source gravitational lens modeling python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that could be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.

  12. A systematic review and meta-analysis on the incubation period of Campylobacteriosis.

    PubMed

    Awofisayo-Okuyelu, A; Hall, I; Adak, G; Hawker, J I; Abbott, S; McCARTHY, N

    2017-08-01

    Accurate knowledge of pathogen incubation period is essential to inform public health policies and implement interventions that contribute to the reduction of the burden of disease. The incubation period distribution of campylobacteriosis is currently unknown, with several sources reporting different times. Variation in the distribution could be expected due to host, transmission vehicle, and organism characteristics; however, the extent of this variation and the influencing factors are unclear. The authors have undertaken a systematic review of published literature of outbreak studies with well-defined point source exposures and human experimental studies to estimate the distribution of the incubation period and also to identify and explain the variation in the distribution between studies. We tested for heterogeneity using I² and Kolmogorov-Smirnov tests, regressed incubation period against possible explanatory factors, and used hierarchical clustering analysis to define subgroups of studies without evidence of heterogeneity. The mean incubation period of the subgroups ranged from 2·5 to 4·3 days. We observed variation in the distribution of incubation period between studies that was not due to chance. A significant association between the mean incubation period and age distribution was observed, with outbreaks involving only children reporting an incubation period 1·29 days longer than outbreaks involving other age groups.

  13. Investigation of Main Radiation Source above Shield Plug of Unit 3 at Fukushima Daiichi Nuclear Power Station

    NASA Astrophysics Data System (ADS)

    Hiratama, Hideo; Kondo, Kenjiro; Suzuki, Seishiro; Tanimura, Yoshihiko; Iwanaga, Kohei; Nagata, Hiroshi

    2017-09-01

    Pulse height distributions were measured using a CdZnTe detector inside a lead collimator to investigate the main source producing high dose rates above the shield plugs of Unit 3 at the Fukushima Daiichi Nuclear Power Station. It was confirmed that low-energy photons are dominant. Concentrations of Cs-137 under the 60 cm concrete of the shield plug were estimated to be between 8.1E+9 and 5.7E+10 Bq/cm2 from the measured peak count rate of 0.662 MeV photons. If Cs-137 is distributed on the surfaces of the gaps within a radius of 6 m at the average concentration of the 5 measurement points, 2.6E+10 Bq/cm2, the total amount of Cs-137 is estimated to be 30 PBq.
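
    The quoted total follows from multiplying the average surface concentration by the area of the 6 m radius gap surface; a quick check of that arithmetic, using only the values stated above:

```python
import math

concentration = 2.6e10        # average surface concentration, Bq/cm^2
radius_cm = 6.0 * 100.0       # 6 m gap radius expressed in cm

area_cm2 = math.pi * radius_cm ** 2     # disk area of the gap surface
total_bq = concentration * area_cm2     # total Cs-137 activity
print(f"{total_bq:.2e} Bq = {total_bq / 1e15:.0f} PBq")   # ~3e16 Bq, i.e. ~30 PBq
```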

  14. Mapping of chlorophyll a distributions in coastal zones

    NASA Technical Reports Server (NTRS)

    Johnson, R. W.

    1978-01-01

    It is pointed out that chlorophyll a is an important environmental parameter for monitoring water quality, nutrient loads, and pollution effects in coastal zones. High chlorophyll a concentrations occur in areas which have high nutrient inflows from sources such as sewage treatment plants and industrial wastes. Low chlorophyll a concentrations may be due to the addition of toxic substances from industrial wastes or other sources. Remote sensing provides an opportunity to assess distributions of water quality parameters, such as chlorophyll a. A description is presented of the chlorophyll a analysis and a quantitative mapping of the James River, Virginia. An approach considered by Johnson (1977) was used in the analysis. An application of the multiple regression analysis technique to a data set collected over the New York Bight, an environmentally different area of the coastal zone, is also discussed.

  15. Competition and Cooperation of Distributed Generation and Power System

    NASA Astrophysics Data System (ADS)

    Miyake, Masatoshi; Nanahara, Toshiya

    Advances in distributed generation technologies together with the deregulation of the electric power industry can lead to a massive introduction of distributed generation. Since most distributed generation will be interconnected to a power system, coordination and competition between distributed generators and large-scale power sources would be a vital issue in realizing a more desirable energy system in the future. This paper analyzes competition between electric utilities and cogenerators from the viewpoints of economy and energy efficiency, based on simulation results for an energy system including a cogeneration system. First, we examine the best-response correspondence of an electric utility and a cogenerator with a noncooperative game approach and obtain a Nash equilibrium point. Secondly, we examine the optimum strategy that attains the highest social surplus and the highest energy efficiency through global optimization.
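
    The noncooperative step described above can be pictured as iterating each player's best response until neither wants to move. The sketch below does this for two players with made-up quadratic profit functions; the payoffs, bounds, and parameter values are purely illustrative and are not the paper's utility/cogenerator energy-system model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative profit functions for a utility (decision u) and a cogenerator
# (decision c); the coefficients are invented for this toy example.
def utility_profit(u, c):
    return -(u - 10.0 + 0.5 * c) ** 2          # best response: u = 10 - 0.5 c

def cogen_profit(c, u):
    return -(c - 6.0 + 0.3 * u) ** 2           # best response: c = 6 - 0.3 u

def best_response(profit, other):
    res = minimize_scalar(lambda x: -profit(x, other), bounds=(0.0, 20.0), method="bounded")
    return res.x

# Iterate the best responses until they stop moving: a Nash equilibrium candidate.
u, c = 5.0, 5.0
for _ in range(100):
    u_new = best_response(utility_profit, c)
    c_new = best_response(cogen_profit, u_new)
    if abs(u_new - u) + abs(c_new - c) < 1e-8:
        break
    u, c = u_new, c_new
print("Nash equilibrium (u, c) ≈", round(u, 3), round(c, 3))
```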

  16. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2011-06-01

    Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
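
    As a toy illustration of Green's function retrieval by crosscorrelation, the sketch below lets two receivers record plane waves from many randomly placed one-dimensional noise sources; the crosscorrelation of the two records peaks at the inter-receiver traveltime. The geometry, velocity, and all parameter values are illustrative, and the multidimensional deconvolution step discussed above is not included.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, nt, v = 0.004, 4096, 2000.0          # sample interval (s), samples, velocity (m/s)
x1, x2 = 0.0, 400.0                      # receiver positions (m)

rec1 = np.zeros(nt)
rec2 = np.zeros(nt)
for _ in range(200):                     # random far-field noise sources on both sides
    src_x = rng.uniform(-20000.0, 20000.0)
    w = rng.normal(size=nt)              # white-noise signature of this source
    d1 = int(abs(src_x - x1) / v / dt)   # traveltime (in samples) to each receiver
    d2 = int(abs(src_x - x2) / v / dt)
    rec1 += np.roll(w, d1)
    rec2 += np.roll(w, d2)

xcorr = np.correlate(rec2, rec1, mode="full")        # crosscorrelation of the records
lags = (np.arange(len(xcorr)) - (nt - 1)) * dt
print("peak lag:", lags[np.argmax(np.abs(xcorr))], "s; expected ±", (x2 - x1) / v, "s")
```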

  17. Turning Noise into Signal: Utilizing Impressed Pipeline Currents for EM Exploration

    NASA Astrophysics Data System (ADS)

    Lindau, Tobias; Becken, Michael

    2017-04-01

    Impressed Current Cathodic Protection (ICCP) systems are extensively used for the protection of central Europe's dense network of oil, gas and water pipelines against destruction by electrochemical corrosion. While ICCP systems usually provide protection by injecting a DC current into the pipeline, mandatory pipeline integrity surveys demand a periodical switching of the current. Consequently, the resulting time-varying pipe currents induce secondary electric and magnetic fields in the surrounding earth. While these fields are usually considered to be unwanted cultural noise in electromagnetic exploration, this work aims at utilizing the fields generated by the ICCP system for determining the electrical resistivity of the subsurface. The fundamental period of the switching cycles typically amounts to 15 seconds in Germany and thereby roughly corresponds to periods used in controlled-source EM (CSEM) applications. For detailed studies we chose an approximately 30 km long pipeline segment near Herford, Germany, as a test site. The segment is located close to the southern margin of the Lower Saxony Basin (LSB) and is part of a larger gas pipeline composed of multiple segments. The current injected into the pipeline segment originates in a rectified 50 Hz AC signal which is periodically switched on and off. In contrast to the usual dipole sources used in CSEM surveys, the current distribution along the pipeline is unknown and expected to be non-uniform due to coating defects that cause current to leak into the surrounding soil. However, an accurate current distribution is needed to model the fields generated by the pipeline source. We measured the magnetic fields at several locations above the pipeline and used the Biot-Savart law to estimate the current decay function. The resulting frequency-dependent current distribution shows a current decay away from the injection point as well as a frequency-dependent phase shift which increases with distance from the injection point. Electric field data were recorded at 45 stations located in an area of about 60 square kilometers in the vicinity of the pipeline. Additionally, the injected source current was recorded directly at the injection point. Transfer functions between the local electric fields and the injected source current are estimated for frequencies ranging from 0.03 Hz to 15 Hz using robust time series processing techniques. The resulting transfer functions are inverted for a 3D conductivity model of the subsurface using an elaborate pipeline model. We interpret the model with regard to the local geologic setting, demonstrating the method's capability to image the subsurface.
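
    The current estimate from the magnetic field measurements rests on the infinite-line form of the Biot-Savart law, B = μ0 I / (2πr). A minimal sketch of that inversion is given below; the burial depth and field readings are hypothetical numbers chosen only to show the conversion, not values from the Herford survey.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T·m/A)

def line_current_from_B(B_tesla, r_m):
    """Infinite-line approximation of the Biot-Savart law:
    B = mu0 * I / (2 * pi * r)  =>  I = 2 * pi * r * B / mu0."""
    return 2.0 * np.pi * r_m * B_tesla / MU0

# Hypothetical readings of the horizontal magnetic field measured at several
# stations directly above a buried pipeline (burial depth ~2 m); the values
# are illustrative, not survey data.
depth = 2.0
B_profile = np.array([5.0e-7, 3.8e-7, 2.9e-7, 2.2e-7])   # T, at increasing distance
currents = line_current_from_B(B_profile, depth)
print("estimated pipe currents along the profile [A]:", np.round(currents, 2))
```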

  18. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and the collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of the beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
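
    The fluence-times-kernel structure of the Methods section can be sketched in a few lines of array code. In the toy version below, the field shape, the source weights and widths, the fourth-degree off-axis polynomial, and the three kernel Gaussians all use made-up numbers; the commissioned parameters, the MLC modeling and the back-projection geometry of the actual algorithm are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

# Grid: a 20 x 20 cm plane with 1 mm spacing (all lengths in cm).
nx, dx = 201, 0.1
x = (np.arange(nx) - nx // 2) * dx
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)

# In-air fluence of an open 6 x 6 cm field: a point-source term plus a broader
# bivariate Gaussian term standing in for head scatter / backscatter effects.
field = ((np.abs(X) <= 3.0) & (np.abs(Y) <= 3.0)).astype(float)
g_src = np.exp(-(X**2 + Y**2) / (2 * 0.8**2))
fluence = 0.9 * field + 0.1 * fftconvolve(field, g_src / g_src.sum(), mode="same")

# FFF cone shape: a fourth-degree polynomial in off-axis distance
# (clipped so the toy profile stays nonnegative far off-axis).
fluence *= np.clip(1.0 - 0.0035 * r**2 - 1e-5 * r**4, 0.0, None)

# Dose deposition kernel: sum of three normalized 2D Gaussians.
def gauss2d(sigma):
    k = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return k / k.sum()

ddk = 0.70 * gauss2d(0.2) + 0.25 * gauss2d(0.8) + 0.05 * gauss2d(2.5)
planar_dose = fftconvolve(fluence, ddk, mode="same")   # relative planar dose
print("central-axis value:", round(planar_dose[nx // 2, nx // 2], 3))
```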

  19. Aerial Measuring System Sensor Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. S. Detwiler

    2002-04-01

    This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits of various ground contaminations and necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response for the 3-element NaI detector pod and High-Purity Germanium (HPGe) detector, in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 {micro}Ci/m{sup 2}. The helicopter calculations modeled the transport of americium-241 ({sup 241}Am) as this is the ''marker'' isotope utilized by the system for Pu detection. The helicopter sensor array consists of 2 six-element NaI detector pods, and the NaI pod detector response was simulated for a distributed surface source of {sup 241}Am as a function of altitude.

  20. Lunar Neutral Exposphere Properties from Pickup Ion Analysis

    NASA Technical Reports Server (NTRS)

    Hartle, R. E.; Sarantos, M.; Killen, R.; Sittler, E. C. Jr.; Halekas, J.; Yokota, S.; Saito, Y.

    2009-01-01

    Composition and structure of neutral constituents in the lunar exosphere can be determined through measurements of phase space distributions of pickup ions borne from the exosphere [1]. An essential point made in an early study [1] and inferred from recent pickup ion measurements [2, 3] is that much lower neutral exosphere densities can be derived from ion mass spectrometer measurements of pickup ions than can be determined by conventional neutral mass spectrometers or remote sensing instruments. One approach for deriving properties of neutral exospheric source gases is to first compare observed ion spectra with pickup ion model phase space distributions. Neutral exosphere properties are then inferred by adjusting exosphere model parameters to obtain the best fit between the resulting model pickup ion distributions and the observed ion spectra. Adopting this path, we obtain ion distributions from a new general pickup ion model, an extension of a simpler analytic description obtained from the Vlasov equation with an ion source [4]. In turn, the ion source is formed from a three-dimensional exospheric density distribution, which can range from the classical Chamberlain type distribution to one with variable exobase temperatures and nonthermal constituents as well as those empirically derived. The initial stage of this approach uses the Moon's known neutral He and Na exospheres to derive He+ and Na+ pickup ion exospheres, including their phase space distributions, densities and fluxes. The neutral exospheres used are those based on existing models and remote sensing studies. As mentioned, future ion measurements can be used to constrain the pickup ion model and subsequently improve the neutral exosphere descriptions. The pickup ion model is also used to estimate the exosphere sources of recently observed pickup ions on KAGUYA [3]. Future missions carrying ion spectrometers (e.g., ARTEMIS) will be able to study the lunar neutral exosphere with great sensitivity, yielding the ion velocity spectra needed for further analysis of parent neutral exosphere properties.

  1. Ultrasound shear wave simulation based on nonlinear wave propagation and Wigner-Ville Distribution analysis

    NASA Astrophysics Data System (ADS)

    Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan

    2017-03-01

    This paper presents a method for modeling and simulation of shear wave generation from a nonlinear Acoustic Radiation Force Impulse (ARFI) that is considered as a distributed force applied at the focal region of a HIFU transducer radiating in the nonlinear regime. The shear wave propagation is simulated by solving Navier's equation with the distributed nonlinear ARFI as the source of the shear wave. Then, the Wigner-Ville Distribution (WVD), a time-frequency analysis method, is used to detect the shear wave at different local points in the region of interest. The WVD yields an estimate of the shear wave time of arrival, its mean frequency and local attenuation, which can be utilized to estimate the medium's shear modulus and shear viscosity using the Voigt model.
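
    The time-frequency step above relies on the standard discrete Wigner-Ville construction: form the instantaneous autocorrelation x(t+τ)x*(t-τ) at each time sample and Fourier transform over the lag τ. A minimal sketch of that construction, exercised on a synthetic analytic chirp rather than simulated shear-wave data, is given below; the signal and sampling values are illustrative.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Returns an (N, N) real array W[time sample, frequency bin]."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N), dtype=complex)
    for n in range(N):
        m_max = min(n, N - 1 - n)                       # largest symmetric lag
        m = np.arange(-m_max, m_max + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[m % N] = x[n + m] * np.conj(x[n - m])    # instantaneous autocorrelation
        W[n] = np.fft.fft(kernel)
    return W.real

# Toy check: a linear chirp's WVD ridge follows its instantaneous frequency.
fs, T = 2000.0, 0.25
t = np.arange(0, T, 1 / fs)
chirp = np.exp(2j * np.pi * (50 * t + 200 * t**2))      # analytic chirp, 50 -> 150 Hz
W = wigner_ville(chirp)
ridge_bin = np.argmax(W[len(t) // 2])                   # peak frequency bin at mid-time
print("frequency at mid-time ≈", ridge_bin * fs / (2 * len(t)), "Hz")   # expect ~100 Hz
```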

  2. A mesospheric source of nitrous oxide

    NASA Technical Reports Server (NTRS)

    Zipf, E. C.; Prasad, S. S.

    1982-01-01

    In the terrestrial atmosphere, nitrous oxide (N2O) has a major role in the chemistry of ozone. Current atmospheric models assume that N2O is produced only by fixation at the earth's surface and that there are no local sources in the stratosphere or mesosphere. It is pointed out here that a significant in situ N2O source does exist above 20 km due to the excitation of the metastable N2(A 3Sigma u +) state by resonance absorption of solar UV photons that penetrate deeply into the atmosphere through the 1,800-2,200 A O2-O3 window. This source significantly affects the NO altitude distribution in the mesosphere and, in the earth's prebiological atmosphere, made N2O an important stratospheric constituent.

  3. The Effects of Weather Patterns on the Spatio-Temporal Distribution of SO2 over East Asia as Seen from Satellite Measurements

    NASA Astrophysics Data System (ADS)

    Dunlap, L.; Li, C.; Dickerson, R. R.; Krotkov, N. A.

    2015-12-01

    Weather systems, particularly mid-latitude wave cyclones, have been known to play an important role in the short-term variation of near-surface air pollution. Ground measurements and model simulations have demonstrated that stagnant air and minimal precipitation associated with high pressure systems are conducive to pollutant accumulation. With the passage of a cold front, built up pollution is transported downwind of the emission sources or washed out by precipitation. This concept is important to note when studying long-term changes in spatio-temporal pollution distribution, but has not been studied in detail from space. In this study, we focus on East Asia (especially the industrialized eastern China), where numerous large power plants and other point sources as well as area sources emit large amounts of SO2, an important gaseous pollutant and a precursor of aerosols. Using data from the Aura Ozone Monitoring Instrument (OMI) we show that such weather driven distribution can indeed be discerned from satellite data by utilizing probability distribution functions (PDFs) of SO2 column content. These PDFs are multimodal and give insight into the background pollution level at a given location and contribution from local and upwind emission sources. From these PDFs it is possible to determine the frequency for a given region to have SO2 loading that exceeds the background amount. By comparing OMI-observed long-term change in the frequency with meteorological data, we can gain insights into the effects of climate change (e.g., the weakening of Asian monsoon) on regional air quality. Such insight allows for better interpretation of satellite measurements as well as better prediction of future pollution distribution as a changing climate gives way to changing weather patterns.

  4. Effect of current injection into thin-film Josephson junctions

    DOE PAGES

    Kogan, V. G.; Mints, R. G.

    2014-11-11

    New thin-film Josephson junctions have recently been tested in which the current injected into one of the junction banks governs the Josephson phenomena. One can thus continuously manage the phase distribution at the junction by changing the injected current. A method for calculating the distribution of injected currents is proposed for a half-infinite thin-film strip with source-sink points at arbitrary positions on the film edges. The strip width W is assumed small relative to Λ = 2λ²/d, where λ is the bulk London penetration depth of the film material and d is the film thickness.

  5. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.

    2011-09-01

    Investigations of the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be composed of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, implemented and verified with both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
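
    The sketch below shows an iterative algebraic (Kaczmarz-type) reconstruction on a deliberately tiny toy system; the actual NECTAR beam geometry, Boltzmann beam-profile weighting and system matrix are not reproduced:

        import numpy as np

        def art(A, p, n_iter=20, relax=0.5):
            # Kaczmarz-type algebraic reconstruction: sweep over projection rays and
            # update the image so that each measured ray sum is matched in turn.
            x = np.zeros(A.shape[1])
            row_norms = np.einsum('ij,ij->i', A, A)
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0:
                        x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
                x = np.clip(x, 0, None)          # attenuation coefficients are non-negative
            return x

        # Toy example: 4x4 image, rays along rows and columns only (8 equations,
        # 16 unknowns), so the reconstruction is necessarily approximate.
        n = 4
        true_img = np.zeros((n, n)); true_img[1:3, 1:3] = 1.0
        A = np.zeros((2 * n, n * n))
        for r in range(n):
            A[r, r * n:(r + 1) * n] = 1.0        # row sums
            A[n + r, r::n] = 1.0                 # column sums
        p = A @ true_img.ravel()
        rec = art(A, p).reshape(n, n)
        print(np.round(rec, 2))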

  6. [Passport registers: their limits and possibilities for the study of emigration].

    PubMed

    Baganha, M I

    1996-08-01

    "There are two main nominal sources of data on Portuguese emigration during the nineteenth and early twentieth centuries: the Rois de Confessados or Rois de Desobriga and the Livros de Registos de Passaportes.... The major question regarding passport registers concerns the level of clandestine emigration. Thus a comparison with U.S. ship lists reveals two different pictures of Portuguese emigration [with regard to] sex ratio, occupations and age distribution. Data obtained point at a larger generalization: sources containing data on legal emigration only do not reflect ¿true' emigration in countries with important clandestine streams." (EXCERPT)

  7. A Survey of nearby, nearly face-on spiral galaxies

    NASA Astrophysics Data System (ADS)

    Garmire, Gordon

    2014-09-01

    This is a continuation of a survey of nearby, nearly face-on spiral galaxies. The main purpose is to search for evidence of collisions with small galaxies that show up in X-rays through the generation of hot shocked gas from the collision. Secondary objectives include studying the spatial distribution of point sources in the galaxy and detecting evidence for a central massive black hole.

  8. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMS can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter that depends on many properties of the crust, e.g. modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution of sources has been proposed. We applied this modified centroid method to the aeromagnetic data of the central Indian region and selected 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values are as shallow as the middle crust in the southwestern Deccan trap and probably deeper than the Moho in the Chhatisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places it is shallower than the Moho. The DBMS indicates the complex nature of the Indian crust.
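
    A minimal sketch of the centroid calculation under standard assumptions (the fractal correction is applied by removing an assumed k^-beta power-spectrum scaling before fitting; the wavenumber bands and synthetic spectrum are illustrative, not the authors' processing):

        import numpy as np

        def dbms_centroid(k, power, beta=3.0, k_top=(0.2, 0.5), k_cen=(0.005, 0.03)):
            # Centroid estimate of the depth to the bottom of magnetic sources (DBMS).
            # k: radial wavenumber [rad/km]; power: radially averaged power spectrum.
            # Fractal correction: remove the k^(-beta) scaling expected for fractal sources.
            p = power * k ** beta
            amp = np.sqrt(p)
            def slope(klo, khi, y):
                m = (k >= klo) & (k <= khi)
                return np.polyfit(k[m], y[m], 1)[0]
            z_top = -slope(*k_top, np.log(amp))        # depth to top from high-wavenumber slope
            z_cen = -slope(*k_cen, np.log(amp / k))    # centroid depth from low-wavenumber slope
            return 2.0 * z_cen - z_top                 # DBMS = 2*Z0 - Zt

        # Illustrative synthetic spectrum for a layer with top 1 km and bottom 25 km.
        k = np.linspace(0.005, 0.6, 300)
        zt, zb = 1.0, 25.0
        amp_model = (np.exp(-k * zt) - np.exp(-k * zb)) * k ** (-1.5)   # fractal-like source term
        # Note: the estimate carries the method's low-wavenumber bias when k*(Zb - Zt) is not small.
        print(f"DBMS estimate: {dbms_centroid(k, amp_model ** 2):.1f} km (synthetic bottom depth {zb} km)")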

  9. Herschel Key Program Heritage: a Far-Infrared Source Catalog for the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta; Babler, Brian; Engelbracht, Charles W.; Gordon, Karl; Hony, Sacha; Misselt, Karl; Montiel, Edward; Okumura, Koryo; Panuzzo, Pasquale; Roman-Duval, Julia; Sauvage, Marc; Boyer, Martha L.; Chen, C.-H. Rosie; Indebetouw, Remy; Matsuura, Mikako; Oliveira, Joana M.; Srinivasan, Sundar; van Loon, Jacco Th.; Whitney, Barbara; Woods, Paul M.

    2014-12-01

    Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the FIR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) survey of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC. There are far fewer far-IR bright YSOs in the SMC than the LMC due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high (“probable”) and moderate (“possible”) likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution including extreme and post-asymptotic giant branch, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint to obtain accurate ample FIR photometry and are unclassified.

  10. How to Avoid Fragility of Financial Systems: Lessons from the Financial Crisis and St. Petersburg Paradox

    NASA Astrophysics Data System (ADS)

    Takayasu, Hideki

    Firstly, I point out that the financial crisis that occurred in 2008 has many points of analogy with the physical phenomenon of brittle fracture, in the sense that it is a highly irreversible phenomenon caused by the concentration of stress at the weakest point. Then, I discuss the distribution of gains and losses of continuous transactions of options, which can be regarded as a source of stress among financial companies. The historical problem of the Saint Petersburg paradox is reviewed, and it is argued that the paradox is solved by decomposing the process into a combination of a fair gamble and an accompanying financial option. By generalizing this fair gamble it is shown that the gain-loss distribution in this problem is closely related to the distribution of gains and losses of business firms in the real world. Finally, I pose a serious question about the ordinary way of financing business firms with compound interest rates. Instead, we introduce a new way of financing business firms without prefixed interest rates, in which financial stress is shared by all involved firms. This method is expected to reduce the risk of both financial firms and business firms, and is applicable even in a non-growing society.
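
    For readers unfamiliar with the gamble, a quick Monte Carlo sketch (not from the paper; the entry fee and sample size are arbitrary) illustrates the heavy-tailed gain-loss distribution at the heart of the paradox:

        import numpy as np

        rng = np.random.default_rng(1)

        def st_petersburg_payout(n_games):
            # Toss a fair coin until the first head; the payout is 2**(number of tosses).
            tosses = rng.geometric(0.5, size=n_games)    # tosses until the first head
            return 2.0 ** tosses

        payouts = st_petersburg_payout(1_000_000)
        fee = 10.0                                       # arbitrary fixed entry fee
        gains = payouts - fee
        print(f"sample mean payout: {payouts.mean():.1f}   median: {np.median(payouts):.0f}")
        print(f"fraction of games with a net loss at fee {fee}: {(gains < 0).mean():.3f}")
        print(f"largest single payout in the sample: {payouts.max():.0f}")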

  11. Spatial analysis and source profiling of beta-agonists and sulfonamides in Langat River basin, Malaysia.

    PubMed

    Sakai, Nobumitsu; Mohd Yusof, Roslan; Sapar, Marni; Yoneda, Minoru; Ali Mohd, Mustafa

    2016-04-01

    Beta-agonists and sulfonamides are widely used to treat bronchial and cardiac problems and infectious disease in both humans and livestock, and even as growth promoters. There are concerns about their potential environmental impacts, such as producing drug resistance in bacteria. This study focused on their spatial distribution in surface water and the identification of pollution sources in the Langat River basin, which is one of the most urbanized watersheds in Malaysia. Fourteen beta-agonists and 12 sulfonamides were quantitatively analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). A geographic information system (GIS) was used to visualize the catchment areas of the sampling points, and source profiling was conducted to identify the pollution sources based on the correlation between the daily pollutant load of each detected contaminant and the estimated density of human or livestock population in the catchment areas. As a result, 6 compounds (salbutamol, sulfadiazine, sulfapyridine, sulfamethazine, sulfadimethoxine and sulfamethoxazole) were widely detected from the mid catchment areas towards the estuary. The source profiling indicated that the pollution sources of salbutamol and sulfamethoxazole were sewage, while sulfadiazine came from the effluents of cattle, goat and sheep farms. Thus, this combination of quantitative and spatial analysis clarified the spatial distribution of these drugs and assisted in identifying the pollution sources. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. A comprehensive analysis of heavy metals in urban road dust of Xi'an, China: Contamination, source apportionment and spatial distribution.

    PubMed

    Pan, Huiyun; Lu, Xinwei; Lei, Kai

    2017-12-31

    A detailed investigation was conducted to study heavy metal contamination in road dust from four regions of Xi'an, Northwest China. The concentrations of eight heavy metals (Co, Cr, Cu, Mn, Ni, Pb, Zn and V) were determined by X-ray fluorescence. The mean concentrations of these elements were: 30.9 mg kg-1 Co, 145.0 mg kg-1 Cr, 54.7 mg kg-1 Cu, 510.5 mg kg-1 Mn, 30.8 mg kg-1 Ni, 124.5 mg kg-1 Pb, 69.6 mg kg-1 V and 268.6 mg kg-1 Zn. There was significant enrichment of Pb, Zn, Co, Cu and Cr based on the geo-accumulation index values. Multivariate statistical analysis showed that the levels of Cu, Pb, Zn, Co and Cr were controlled by anthropogenic activities, while the levels of Mn, Ni and V were associated with natural sources. Principal component analysis and multiple linear regression were applied to determine the source apportionment. The results showed that traffic was the main source, with a percent contribution of 53.4%. Natural sources contributed 26.5%, and other anthropogenic pollution sources contributed 20.1%. Clear heavy metal pollution hotspots were identified by GIS mapping. The location of point pollution sources and the prevailing wind direction were found to be important factors in the spatial distribution of heavy metals. Copyright © 2017 Elsevier B.V. All rights reserved.
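
    A sketch of a PCA plus multiple-linear-regression apportionment step on synthetic data (the factor structure and all numbers are illustrative assumptions, not the Xi'an data set):

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        metals = ["Co", "Cr", "Cu", "Mn", "Ni", "Pb", "V", "Zn"]

        # Synthetic dust samples: a "traffic" factor loading on Co/Cr/Cu/Pb/Zn and a
        # "natural" factor loading on Mn/Ni/V, plus noise (purely illustrative numbers).
        n = 200
        traffic = rng.lognormal(0, 0.5, n)
        natural = rng.lognormal(0, 0.3, n)
        loads = np.array([[0.8, 0.2], [0.7, 0.3], [0.9, 0.1], [0.1, 0.9],
                          [0.2, 0.8], [0.9, 0.1], [0.1, 0.9], [0.9, 0.1]])
        X = np.column_stack([traffic, natural]) @ loads.T + rng.normal(0, 0.1, (n, len(metals)))

        Z = StandardScaler().fit_transform(X)
        pca = PCA(n_components=2).fit(Z)
        scores = pca.transform(Z)
        print("explained variance:", np.round(pca.explained_variance_ratio_, 2))
        print("PC1 loadings:", dict(zip(metals, np.round(pca.components_[0], 2))))

        # Regress the summed (standardized) metal burden on the component scores to
        # apportion contributions, in the spirit of a PCA / multiple-linear-regression step.
        total = Z.sum(axis=1)
        reg = LinearRegression().fit(scores, total)
        contrib = np.abs(reg.coef_ * scores.std(axis=0))
        print("relative source contributions:", np.round(contrib / contrib.sum(), 2))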

  13. The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts

    NASA Astrophysics Data System (ADS)

    Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel

    2015-01-01

    Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources have been detected using an iterative loop. The source positions are determined by estimating the likelihood of each peak on the maps being a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point source fitter that also determines, as part of the fitting procedure, the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing a Gaussian function at each position with the full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources into the timeline, and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, and 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compared them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we selected ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in the slope below 200 mJy, indicating a strong evolution in number density for galaxies at these fluxes. In general, models tend to overpredict the counts at brighter flux densities, underlining the importance of studying the Rayleigh-Jeans part of the spectral energy distribution to refine the theoretical recipes of the models. Our iterative method for source identification allowed the detection of a family of 500 μm sources that are not foreground objects belonging to Virgo and are not found in other catalogs. Herschel is an ESA space observatory with science instruments provided by European-led principal investigator consortia and with an important participation from NASA. The 250, 350, 500 μm, and total catalogs are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A129

  14. Distributed and dynamic modelling of hydrology, phosphorus and ecology in the Hampshire Avon and Blashford Lakes: evaluating alternative strategies to meet WFD standards.

    PubMed

    Whitehead, P G; Jin, L; Crossman, J; Comber, S; Johnes, P J; Daldorph, P; Flynn, N; Collins, A L; Butterfield, D; Mistry, R; Bardon, R; Pope, L; Willows, R

    2014-05-15

    The issues of diffuse and point source phosphorus (P) pollution in the Hampshire Avon and Blashford Lakes are explored using a catchment model of the river system. A multibranch, process based, dynamic water quality model (INCA-P) has been applied to the whole river system to simulate water fluxes, total phosphorus (TP) and soluble reactive phosphorus (SRP) concentrations and ecology. The model has been used to assess impacts of both agricultural runoff and point sources from waste water treatment plants (WWTPs) on water quality. The results show that agriculture contributes approximately 40% of the phosphorus load and point sources the other 60% of the load in this catchment. A set of scenarios have been investigated to assess the impacts of alternative phosphorus reduction strategies and it is shown that a combined strategy of agricultural phosphorus reduction through either fertiliser reductions or better phosphorus management together with improved treatment at WWTPs would reduce the SRP concentrations in the river to acceptable levels to meet the EU Water Framework Directive (WFD) requirements. A seasonal strategy for WWTP phosphorus reductions would achieve significant benefits at reduced cost. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Single Crystal Diamond Needle as Point Electron Source

    PubMed Central

    Kleshch, Victor I.; Purcell, Stephen T.; Obraztsov, Alexander N.

    2016-01-01

    Diamond has been considered one of the most attractive materials for cold-cathode applications during the past two decades. However, its real application is hampered by the necessity of providing an appropriate amount and transport of electrons to the emitter surface, which is usually achieved by using nanometer-size or highly defective crystallites with much poorer physical characteristics than ideal diamond. Here, the use of a single crystal diamond emitter with a high aspect ratio as a point electron source is reported for the first time. Single crystal diamond needles were obtained by selective oxidation of polycrystalline diamond films produced by plasma enhanced chemical vapor deposition. Field emission currents and total electron energy distributions were measured for individual diamond needles as functions of extraction voltage and temperature. The needles demonstrate a current saturation phenomenon and sensitivity of the emission to temperature. The analysis of the voltage drops measured via an electron energy analyzer shows that conduction proceeds along the surface of the diamond needles and is governed by a Poole-Frenkel transport mechanism with a characteristic trap energy of 0.2–0.3 eV. The temperature-sensitive field emission characteristics of the diamond needles are of great interest for the production of point electron beam sources and sensors for vacuum electronics. PMID:27731379

  16. A Deep XMM-Newton Survey of M33: Point-source Catalog, Source Detection, and Characterization of Overlapping Fields

    NASA Astrophysics Data System (ADS)

    Williams, Benjamin F.; Wold, Brian; Haberl, Frank; Garofali, Kristen; Blair, William P.; Gaetz, Terrance J.; Kuntz, K. D.; Long, Knox S.; Pannuti, Thomas G.; Pietsch, Wolfgang; Plucinsky, Paul P.; Winkler, P. Frank

    2015-05-01

    We have obtained a deep 8-field XMM-Newton mosaic of M33 covering the galaxy out to the D25 isophote and beyond to a limiting 0.2-4.5 keV unabsorbed flux of 5 × 10-16 erg cm-2 s-1 (L > 4 × 1034 erg s-1 at the distance of M33). These data allow complete coverage of the galaxy with high sensitivity to soft sources such as diffuse hot gas and supernova remnants (SNRs). Here, we describe the methods we used to identify and characterize 1296 point sources in the 8 fields. We compare our resulting source catalog to the literature, note variable sources, construct hardness ratios, classify soft sources, analyze the source density profile, and measure the X-ray luminosity function (XLF). As a result of the large effective area of XMM-Newton below 1 keV, the survey contains many new soft X-ray sources. The radial source density profile and XLF for the sources suggest that only ˜15% of the 391 bright sources with L > 3.6 × 1035 erg s-1 are likely to be associated with M33, and more than a third of these are known SNRs. The log(N)-log(S) distribution, when corrected for background contamination, is a relatively flat power law with a differential index of 1.5, which suggests that many of the other M33 sources may be high-mass X-ray binaries. Finally, we note the discovery of an interesting new transient X-ray source, which we are unable to classify.
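
    A short sketch of how a cumulative log(N)-log(S) distribution and its power-law slope can be constructed from a flux list (simulated fluxes; the background correction applied in the survey is omitted):

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulate source fluxes from a power law dN/dS ~ S^-1.5 above a limiting flux
        # (inverse-transform sampling); values are illustrative, not the M33 catalog.
        s_lim, gamma, n_src = 5e-16, 1.5, 1296
        u = rng.uniform(size=n_src)
        flux = s_lim * (1 - u) ** (-1.0 / (gamma - 1.0))     # CDF inversion for a Pareto tail

        # Cumulative counts N(>S) on a logarithmic flux grid.
        grid = np.logspace(np.log10(s_lim), np.log10(flux.max()), 30)
        n_gt = np.array([(flux > s).sum() for s in grid])

        # Fit the slope via the cumulative relation N(>S) ~ S^(1 - gamma).
        mask = n_gt > 5
        slope = np.polyfit(np.log10(grid[mask]), np.log10(n_gt[mask]), 1)[0]
        print(f"recovered differential index ~ {1 - slope:.2f} (input {gamma})")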

  17. NPTFit: A Code Package for Non-Poissonian Template Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra-Sharma, Siddharth; Rodd, Nicholas L.; Safdi, Benjamin R., E-mail: smsharma@princeton.edu, E-mail: nrodd@mit.edu, E-mail: bsafdi@mit.edu

    We present NPTFit, an open-source code package, written in Python and Cython, for performing non-Poissonian template fits (NPTFs). The NPTF is a recently developed statistical procedure for characterizing the contribution of unresolved point sources (PSs) to astrophysical data sets. The NPTF was first applied to Fermi gamma-ray data to provide evidence that the excess of ∼GeV gamma-rays observed in the inner regions of the Milky Way likely arises from a population of sub-threshold point sources, and the NPTF has since found additional applications studying sub-threshold extragalactic sources at high Galactic latitudes. The NPTF generalizes traditional astrophysical template fits to allow for the ability to search for populations of unresolved PSs that may follow a given spatial distribution. NPTFit builds upon the framework of the fluctuation analyses developed in X-ray astronomy, and thus likely has applications beyond those demonstrated with gamma-ray data. The NPTFit package utilizes novel computational methods to perform the NPTF efficiently. The code is available at http://github.com/bsafdi/NPTFit and up-to-date and extensive documentation may be found at http://nptfit.readthedocs.io.
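
    The snippet below is not the NPTFit API; it is a conceptual numpy sketch of why unresolved point sources make pixel counts non-Poissonian, which is the statistical signature the NPTF exploits (all parameters are illustrative):

        import numpy as np

        rng = np.random.default_rng(4)
        n_pix = 100_000

        # Unresolved-PS sky: the number of sources per pixel is Poisson-distributed, and each
        # source contributes photons drawn from a power-law flux (source-count) distribution.
        mean_srcs_per_pix = 0.05
        n_srcs = rng.poisson(mean_srcs_per_pix, n_pix)
        counts_ps = np.zeros(n_pix, dtype=int)
        for i in np.nonzero(n_srcs)[0]:
            # expected counts per source drawn from a Pareto tail (illustrative parameters)
            s = 1.0 * (1 - rng.uniform(size=n_srcs[i])) ** (-1 / 1.5)
            counts_ps[i] = rng.poisson(s.sum())

        # Purely diffuse (Poissonian) sky with the same mean counts per pixel.
        counts_poiss = rng.poisson(counts_ps.mean(), n_pix)

        for name, c in [("point sources", counts_ps), ("diffuse", counts_poiss)]:
            print(f"{name}: mean {c.mean():.3f}, variance {c.var():.3f}, "
                  f"P(>=5 counts) {np.mean(c >= 5):.4f}")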

  18. Evaluation of pharmaceuticals and personal care products with emphasis on anthelmintics in human sanitary waste, sewage, hospital wastewater, livestock wastewater and receiving water.

    PubMed

    Sim, Won-Jin; Kim, Hee-Young; Choi, Sung-Deuk; Kwon, Jung-Hwan; Oh, Jeong-Eun

    2013-03-15

    We investigated 33 pharmaceuticals and personal care products (PPCPs), with emphasis on anthelmintics and their metabolites, in human sanitary waste treatment plants (HTPs), sewage treatment plants (STPs), hospital wastewater treatment plants (HWTPs), livestock wastewater treatment plants (LWTPs), river water and seawater. The PPCPs showed characteristic occurrence patterns according to the wastewater source. The LWTPs and HTPs showed higher levels (up to 3000 times higher in influents) of anthelmintics than the other wastewater treatment plants, indicating that livestock wastewater and human sanitary waste are among the principal sources of anthelmintics. Among the anthelmintics, fenbendazole and its metabolites were relatively high in the LWTPs, while human anthelmintics such as albendazole and flubendazole were most dominant in the HTPs, STPs and HWTPs. The occurrence pattern of fenbendazole's metabolites in water differed from that reported in pharmacokinetic studies, suggesting transformation mechanisms other than metabolism in animal bodies, by processes as yet unknown. The river water and seawater are generally affected by the point sources, but the distribution patterns in some receiving waters differ slightly from those of the effluents, indicating the influence of non-point sources. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Polarization Effects Aboard the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Levin, Jason; Young, Martin; Dubovitsky, Serge; Dorsky, Leonard

    2006-01-01

    For precision displacement measurements, laser metrology is currently one of the most accurate techniques. Often, the measurement is located some distance away from the laser source, and as a result, stringent requirements are placed on the laser delivery system with respect to the state of polarization. Such is the case with the fiber distribution assembly (FDA) that is slated to fly aboard the Space Interferometry Mission (SIM) next decade. This system utilizes a concatenated array of couplers, polarizers and lengthy runs of polarization-maintaining (PM) fiber to distribute linearly polarized light from a single laser to fourteen different optical metrology measurement points throughout the spacecraft. Optical power fluctuations at the point of measurement can be traced back to the polarization extinction ratio (PER) of the concatenated components, in conjunction with the rate of change in the phase difference of the light along the slow and fast axes of the PM fiber.

  20. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    NASA Astrophysics Data System (ADS)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on the extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.
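
    A sketch of the EOF-based initial selection only (the mesh adaptive direct search refinement is not reproduced; the ensemble is a synthetic stand-in for simulated tsunami waveforms):

        import numpy as np

        rng = np.random.default_rng(5)
        n_scenarios, n_grid = 40, 500

        # Synthetic "tsunami" ensemble: each scenario is a smooth random field over the
        # candidate observation points (stand-in for simulated waveforms or peak amplitudes).
        x = np.linspace(0, 1, n_grid)
        basis = np.vstack([np.sin((j + 1) * np.pi * x) for j in range(6)])
        ensemble = rng.normal(size=(n_scenarios, 6)) @ basis        # scenarios x grid points

        # EOF decomposition via SVD of the anomaly matrix.
        anom = ensemble - ensemble.mean(axis=0)
        _, svals, vt = np.linalg.svd(anom, full_matrices=False)
        n_modes = 3
        picks = set()
        for mode in vt[:n_modes]:
            picks.add(int(np.argmax(mode)))     # extremum (maximum) of the spatial mode
            picks.add(int(np.argmin(mode)))     # extremum (minimum) of the spatial mode

        explained = (svals[:n_modes] ** 2).sum() / (svals ** 2).sum()
        print(f"{n_modes} EOF modes explain {explained:.1%} of ensemble variance")
        print("candidate observation points (grid indices):", sorted(picks))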

  1. Detector Position Estimation for PET Scanners.

    PubMed

    Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul

    2012-06-11

    Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof of concept methodology for the determination of the six degrees of freedom for detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.

  2. Gibbon travel paths are goal oriented.

    PubMed

    Asensio, Norberto; Brockelman, Warren Y; Malaivijitnond, Suchinda; Reichard, Ulrich H

    2011-05-01

    Remembering locations of food resources is critical for animal survival. Gibbons are territorial primates which regularly travel through small and stable home ranges in search of preferred, limited and patchily distributed resources (primarily ripe fruit). They are predicted to profit from an ability to memorize the spatial characteristics of their home range and may increase their foraging efficiency by using a 'cognitive map' either with Euclidean or with topological properties. We collected ranging and feeding data from 11 gibbon groups (Hylobates lar) to test their navigation skills and to better understand gibbons' 'spatial intelligence'. We calculated the locations at which significant travel direction changes occurred using the change-point direction test and found that these locations primarily coincided with preferred fruit sources. Within the limits of biologically realistic visibility distances observed, gibbon travel paths were more efficient in detecting known preferred food sources than a heuristic travel model based on straight travel paths in random directions. Because consecutive travel change-points were far from the gibbons' sight, planned movement between preferred food sources was the most parsimonious explanation for the observed travel patterns. Gibbon travel appears to connect preferred food sources as expected under the assumption of a good mental representation of the most relevant sources in a large-scale space.

  3. A New Simplified Source Model to Explain Strong Ground Motions from a Mega-Thrust Earthquake - Application to the 2011 Tohoku Earthquake (Mw9.0) -

    NASA Astrophysics Data System (ADS)

    Nozu, A.

    2013-12-01

    A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up the contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely the longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of the subevent can be taken into account because the corner frequency of the subevent, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. The results were then compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes. Furthermore, the errors associated with the former were much smaller than those of the latter for Fourier spectra. These findings indicate the usefulness of the pseudo point-source model. Figure: comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composition of two horizontal components, smoothed with a Parzen window with a bandwidth of 0.05 Hz.
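
    A minimal sketch of the spectral building blocks (omega-square subevent spectra summed with placeholder path and site terms; all parameter values are illustrative and not those of the Tohoku model):

        import numpy as np

        def omega_square_spectrum(f, m0, fc):
            # Source amplitude spectrum of one subevent: flat below the corner
            # frequency fc and falling as f^-2 above it.
            return m0 / (1.0 + (f / fc) ** 2)

        f = np.logspace(-1, 1, 200)                    # 0.1-10 Hz

        # Two illustrative subevents (seismic moment in N*m, corner frequency in Hz).
        subevents = [{"m0": 1e20, "fc": 0.15}, {"m0": 3e19, "fc": 0.3}]

        def path_effect(f, r_km, q0=100.0, vs=3.5):
            # Placeholder geometrical spreading and anelastic attenuation (assumed form).
            return np.exp(-np.pi * f * r_km / (q0 * f ** 0.7 * vs)) / r_km

        site_amp = np.ones_like(f)                     # unit site amplification for the sketch

        # Fourier amplitude at the site: sum of subevent contributions (phases omitted,
        # which is why the full method borrows phase characteristics from a smaller event).
        total = sum(omega_square_spectrum(f, s["m0"], s["fc"]) * path_effect(f, 120.0)
                    for s in subevents) * site_amp
        print("amplitude ratio 1 Hz / 0.2 Hz:",
              float(np.interp(1.0, f, total) / np.interp(0.2, f, total)))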

  4. Location identification for indoor instantaneous point contaminant source by probability-based inverse Computational Fluid Dynamics modeling.

    PubMed

    Liu, X; Zhai, Z

    2008-02-01

    Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of the indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-based inverse modeling method that is able to identify the source location for an instantaneous point source placed in an enclosed environment with known source release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating a good capability of the method for finding indoor pollutant sources. The research lays a solid ground for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source locations with limited sensor outputs. This will ensure an effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.
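
    A sketch of the probability-based idea with a stand-in forward model (in the paper the forward predictions come from CFD; the grid, sensor positions and noise level here are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(6)

        # Candidate source locations on a coarse grid and three fixed sensor positions.
        candidates = np.array([[x, y] for x in np.linspace(0, 4, 9) for y in np.linspace(0, 3, 7)])
        sensors = np.array([[0.5, 0.5], [3.5, 1.0], [2.0, 2.5]])

        def forward(src):
            # Stand-in forward model (in practice: one CFD run per candidate): the
            # concentration decays with distance from the source at a fixed time after release.
            d = np.linalg.norm(sensors - src, axis=1)
            return np.exp(-d ** 2)

        true_src = np.array([2.5, 1.5])
        obs = forward(true_src) + rng.normal(0, 0.02, len(sensors))   # noisy sensor readings

        # Gaussian measurement likelihood -> posterior probability of each candidate location.
        sigma = 0.02
        log_like = np.array([-0.5 * np.sum((obs - forward(c)) ** 2) / sigma ** 2 for c in candidates])
        post = np.exp(log_like - log_like.max())
        post /= post.sum()
        best = candidates[np.argmax(post)]
        print("most probable source location:", best, f"(posterior {post.max():.2f})")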

  5. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for the pointing error analysis of Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through this model. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10-5° when the error amplitude is 0.1°. Detailed analyses of the errors indicate that the different error sources affect the pointing accuracy to varying degrees, and that the major error source is the incident beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analysis of multiple errors shows that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that our analysis can help uncover the error distribution and aid in the measurement calibration of Risley-prism systems.

  6. Near Earth Inner-Source and Interstellar Pickup Ions Observed with the Hot Plasma Composition Analyzer of the Magnetospheric Multiscale Mission Mms-Hpca

    NASA Astrophysics Data System (ADS)

    Gomez, R. G.; Fuselier, S. A.; Mukherjee, J.; Gonzalez, C. A.

    2017-12-01

    Pickup ions found near the Earth are generally picked up in the rest frame of the solar wind and propagate radially outward from their point of origin. While propagating, they simultaneously gyrate about the magnetic field. Pickup ions come in two general populations: interstellar and inner-source ions. Interstellar ions originate in the interstellar medium, enter the solar system in a neutral charge state, are gravitationally focused on the side of the Sun opposite their arrival direction and are ionized when they travel near the Sun. Inner-source ions originate at a location within the solar system, between the Sun and the observation point. Both pickup ion populations share similarities in composition and charge states, so measuring their dynamics, using their velocity distribution functions f(v), is essential for distinguishing them and for determining their spatial and temporal origins. Presented here are the results of studies conducted with the four Hot Plasma Composition Analyzers of the Magnetospheric Multiscale Mission (MMS-HPCA). These instruments measure the full-sky (4π steradian) distribution functions of near-Earth plasmas at a 10 second cadence over an energy-to-charge range of 0.001-40 keV/e. The instruments are also capable of parsing this combined energy-solid-angle phase space with 22.5° resolution in polar angle and 11.25° in azimuthal angle, allowing for a clear measurement of the pitch angle scattering of the ions.

  7. Eddy covariance methane flux measurements over a grazed pasture: effect of cows as moving point sources

    NASA Astrophysics Data System (ADS)

    Felber, R.; Münger, A.; Neftel, A.; Ammann, C.

    2015-06-01

    Methane (CH4) from ruminants contributes one-third of global agricultural greenhouse gas emissions. Eddy covariance (EC) technique has been extensively used at various flux sites to investigate carbon dioxide exchange of ecosystems. Since the development of fast CH4 analyzers, the instrumentation at many flux sites has been amended for these gases. However, the application of EC over pastures is challenging due to the spatially and temporally uneven distribution of CH4 point sources induced by the grazing animals. We applied EC measurements during one grazing season over a pasture with 20 dairy cows (mean milk yield: 22.7 kg d-1) managed in a rotational grazing system. Individual cow positions were recorded by GPS trackers to attribute fluxes to animal emissions using a footprint model. Methane fluxes with cows in the footprint were up to 2 orders of magnitude higher than ecosystem fluxes without cows. Mean cow emissions of 423 ± 24 g CH4 head-1 d-1 (best estimate from this study) correspond well to animal respiration chamber measurements reported in the literature. However, a systematic effect of the distance between source and EC tower on cow emissions was found, which is attributed to the analytical footprint model used. We show that the EC method allows one to determine CH4 emissions of cows on a pasture if the data evaluation is adjusted for this purpose and if some cow distribution information is available.
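
    A sketch of the attribution step: with footprint weights at the GPS cow positions, the per-cow source strength follows from the measured flux after subtracting the background ecosystem flux (the weights and fluxes below are illustrative placeholders, not the study's footprint model):

        import numpy as np

        def cow_emission(flux_measured, flux_background, footprint_weights):
            # flux_measured, flux_background: CH4 flux in the EC footprint [ug m-2 s-1].
            # footprint_weights: footprint density phi_i [m-2] at each cow position, so that
            # flux_measured = flux_background + E * sum(phi_i), with E the per-cow source
            # strength [ug s-1]; solve for E.
            phi_sum = np.sum(footprint_weights)
            return (flux_measured - flux_background) / phi_sum

        # Toy half-hour: 5 cows inside the footprint with assumed footprint densities.
        phi = np.array([2e-5, 1e-5, 8e-6, 5e-6, 3e-6])      # [m-2], illustrative
        flux_meas, flux_bg = 0.25, 0.02                      # [ug CH4 m-2 s-1], illustrative
        e_per_cow = cow_emission(flux_meas, flux_bg, phi)    # [ug CH4 s-1 per cow]
        print(f"per-cow emission ~ {e_per_cow * 86400 / 1e6:.0f} g CH4 head-1 d-1")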

  8. Eddy covariance methane flux measurements over a grazed pasture: effect of cows as moving point sources

    NASA Astrophysics Data System (ADS)

    Felber, R.; Münger, A.; Neftel, A.; Ammann, C.

    2015-02-01

    Methane (CH4) from ruminants contributes one third of global agricultural greenhouse gas emissions. The eddy covariance (EC) technique has been extensively used at various flux sites to investigate the carbon dioxide exchange of ecosystems. Since the development of fast CH4 analysers, the instrumentation at many flux sites has been amended for these gases. However, the application of EC over pastures is challenging due to the spatially and temporally uneven distribution of CH4 point sources induced by the grazing animals. We applied EC measurements during one grazing season over a pasture with 20 dairy cows (mean milk yield: 22.7 kg d-1) managed in a rotational grazing system. Individual cow positions were recorded by GPS trackers to attribute fluxes to animal emissions using a footprint model. Methane fluxes with cows in the footprint were up to two orders of magnitude higher than ecosystem fluxes without cows. Mean cow emissions of 423 ± 24 g CH4 head-1 d-1 (best guess of this study) correspond well to animal respiration chamber measurements reported in the literature. However, a systematic effect of the distance between source and EC tower on cow emissions was found, which is attributed to the analytical footprint model used. We show that the EC method allows one to determine the CH4 emissions of grazing cows if the data evaluation is adjusted for this purpose and if some cow distribution information is available.

  9. The Norma arm region Chandra survey catalog: X-ray populations in the spiral arms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fornasini, Francesca M.; Tomsick, John A.; Bodaghee, Arash

    2014-12-01

    We present a catalog of 1415 X-ray sources identified in the Norma Arm Region Chandra Survey (NARCS), which covers a 2° × 0.8° region in the direction of the Norma spiral arm to a depth of ≈20 ks. Of these sources, 1130 are point-like sources detected with ≥3σ confidence in at least one of three energy bands (0.5-10, 0.5-2, and 2-10 keV), five have extended emission, and the remainder are detected at low significance. Since most sources have too few counts to permit individual classification, they are divided into five spectral groups defined by their quantile properties. We analyze stacked spectra of X-ray sources within each group, in conjunction with their fluxes, variability, and infrared counterparts, to identify the dominant populations in our survey. We find that ∼50% of our sources are foreground sources located within 1-2 kpc, which is consistent with expectations from previous surveys. Approximately 20% of sources are likely located in the proximity of the Scutum-Crux and near Norma arms, while 30% are more distant, in the proximity of the far Norma arm or beyond. We argue that a mixture of magnetic and nonmagnetic cataclysmic variables dominates the Scutum-Crux and near Norma arms, while intermediate polars and high-mass stars (isolated or in binaries) dominate the far Norma arm. We also present the cumulative number count distribution for sources in our survey that are detected in the hard energy band. A population of very hard sources in the vicinity of the far Norma arm and active galactic nuclei dominates the hard X-ray emission down to f_X ≈ 10-14 erg cm-2 s-1, but the distribution curve flattens at fainter fluxes. We find good agreement between the observed distribution and predictions based on other surveys.

  10. Challenging the distributed temperature sensing technique for estimating groundwater discharge to streams through controlled artificial point source experiment

    NASA Astrophysics Data System (ADS)

    Lauer, F.; Frede, H.-G.; Breuer, L.

    2012-04-01

    Spatially confined groundwater discharge can contribute significantly to stream discharge. Distributed fibre-optic temperature sensing (DTS) of stream water has been successfully used to localize and quantify groundwater discharge from this type of "point source" (PS) in small first-order streams. During periods when stream and groundwater temperatures differ, a PS appears as an abrupt step in the longitudinal stream water temperature distribution. Based on stream temperature observations up- and downstream of a point source and an estimated or measured groundwater temperature, the proportion of groundwater inflow to stream discharge can be quantified using simple mixing models. However, so far this method has not been quantitatively verified, nor has a detailed uncertainty analysis of the method been conducted. The relative accuracy of the method is expected to decrease nonlinearly with decreasing proportions of lateral inflow. Furthermore, it depends on the temperature difference (ΔT) between groundwater and surface water and on the accuracy of the temperature measurement itself. The latter can be affected by different sources of error; for example, it has been shown that the direct impact of solar radiation on fibre-optic cables can lead to errors in temperature measurements in small streams due to low water depth. Considerable uncertainty may also be related to the determination of the groundwater temperature, whether through direct measurements or derived from the DTS signal. In order to directly validate the method and assess its uncertainty, we performed a set of artificial point source experiments with controlled lateral inflow rates into a natural stream. The experiments were carried out at the Vollnkirchener Bach, a small headwater stream in Hessen, Germany, in November and December 2011 during a low-flow period. A DTS system was installed along a 1.2 km sub-reach of the stream. Stream discharge was measured using a gauging flume installed directly upstream of the artificial PS. Lateral inflow was simulated using a pumping system connected to a 2 m3 water tank. Pumping rates were controlled using a magnetic-inductive flowmeter and kept constant for a period of 30 minutes to 1.5 hours depending on the simulated inflow rate. Different temperatures of lateral inflow were adjusted by heating the water in the tank (for summer experiments, cooling by ice cubes could be realized). With this setup, different proportions of lateral inflow to stream flow, ranging from 2 to 20%, could be simulated for different ΔT's (2-7 °C) between stream water and inflowing water. Results indicate that the estimation of groundwater discharge through DTS works properly, but that the method is very sensitive to the determination of the PS groundwater temperature. The span of adjusted ΔT and inflow rates of the artificial system is currently being used to perform a thorough uncertainty analysis of the DTS method and to derive thresholds for detection limits.
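
    The mixing-model step itself is a short calculation; a sketch with placeholder temperatures (not measured values) is:

        def groundwater_fraction(t_up, t_down, t_gw):
            # Steady-state heat/mass balance across the point source:
            #   Q_down * T_down = Q_up * T_up + Q_gw * T_gw,  with  Q_down = Q_up + Q_gw
            # => Q_gw / Q_down = (T_down - T_up) / (T_gw - T_up)
            return (t_down - t_up) / (t_gw - t_up)

        # Placeholder temperatures [deg C] for a winter situation (groundwater warmer than stream).
        t_up, t_down, t_gw = 4.0, 4.6, 10.0
        f = groundwater_fraction(t_up, t_down, t_gw)
        print(f"lateral inflow fraction ~ {f:.1%} of downstream discharge")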

  11. Deep wideband single pointings and mosaics in radio interferometry: how accurately do we reconstruct intensities and spectral indices of faint sources?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.

  12. Adaptive selective relaying in cooperative free-space optical systems over atmospheric turbulence and misalignment fading channels.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2014-06-30

    In this paper, a novel adaptive cooperative protocol with multiple relays using detect-and-forward (DF) over atmospheric turbulence channels with pointing errors is proposed. The adaptive DF cooperative protocol here analyzed is based on the selection of the optical path, source-destination or different source-relay links, with a greater value of fading gain or irradiance, maintaining a high diversity order. Closed-form asymptotic bit error-rate (BER) expressions are obtained for a cooperative free-space optical (FSO) communication system with Nr relays, when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions, following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. A greater robustness for different link distances and pointing errors is corroborated by the obtained results if compared with similar cooperative schemes or equivalent multiple-input multiple-output (MIMO) systems. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results.

  13. Cylindrical angular spectrum using Fourier coefficients of point light source and its application to fast hologram calculation.

    PubMed

    Oh, Seungtaik; Jeong, Il Kwon

    2015-11-16

    We will introduce a new simple analytic formula of the Fourier coefficient of the 3D field distribution of a point light source to generate a cylindrical angular spectrum which captures the object wave in 360° in the 3D Fourier space. Conceptually, the cylindrical angular spectrum can be understood as a cylindrical version of the omnidirectional spectral approach of Sando et al. Our Fourier coefficient formula is based on an intuitive observation that a point light radiates uniformly in all directions. Our formula is defined over all frequency vectors lying on the entire sphere in the 3D Fourier space and is more natural and computationally more efficient for all around recording of the object wave than that of the previous omnidirectional spectral method. A generalized frequency-based occlusion culling method for an arbitrary complex object is also proposed to enhance the 3D quality of a hologram. As a practical application of the cylindrical angular spectrum, an interactive hologram example is presented together with implementation details.

  14. A Model for Selection of Eyespots on Butterfly Wings.

    PubMed

    Sekimura, Toshio; Venkataraman, Chandrasekhar; Madzvamuse, Anotida

    2015-01-01

    The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot while other wing cells do not. We illustrate that the key to understanding focus point selection may lie in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model, posed in the interior of each wing cell, that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and to moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. In order to complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating the focus point distributions observed in nature. We therefore conclude that changes in the proximal boundary conditions are sufficient to explain the empirically observed distribution of eyespot focus points on the entire wing surface. The model predicts, subject to experimental verification, that the source strength of the activator at the proximal boundary should be lower in wing cells in which focus points form than in those that lack focus points. The model suggests that the number and locations of eyespot foci on the wing disc could be largely controlled by two kinds of gradients along two different directions: the first is the gradient in spatially varying parameters, such as the reaction rate, along the anterior-posterior direction on the proximal boundary of the wing cells, and the second is the gradient in the source values of the activator along the veins in the proximal-distal direction of the wing cell.
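
    As a deliberately simplified caricature of the boundary-condition argument (a single morphogen with linear decay rather than the authors' two-stage activator-inhibitor system; the threshold and parameters are illustrative), the steady interior profile, and hence whether an assumed focus-formation criterion is met, is set entirely by the Dirichlet value at the proximal edge:

        import numpy as np

        def steady_profile(c_boundary, length=1.0, n=101, d_coef=0.2, decay=1.0):
            # Steady state of dc/dt = D*c_xx - k*c on [0, L], with a fixed (Dirichlet)
            # morphogen level c_boundary at the proximal edge (x = 0) and a no-flux
            # distal edge: c(x) = c_boundary * cosh((L - x)/lam) / cosh(L/lam), lam = sqrt(D/k).
            x = np.linspace(0.0, length, n)
            lam = np.sqrt(d_coef / decay)
            return x, c_boundary * np.cosh((length - x) / lam) / np.cosh(length / lam)

        # Illustrative criterion, consistent in spirit with the paper's prediction that a
        # lower proximal source strength favors focus formation.
        threshold = 0.05
        for c_b in (0.2, 0.8):                 # two otherwise identical "wing cells"
            x, c = steady_profile(c_b)
            interior_min = c.min()
            forms = interior_min < threshold
            print(f"boundary level {c_b}: interior minimum {interior_min:.3f} -> focus forms: {forms}")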

  15. Particle Dynamics at and near the Electron and Ion Diffusion Regions as a Function of Guide Field

    NASA Astrophysics Data System (ADS)

    Giles, Barbara; Burch, James; Phan, Tai; Webster, James; Avanov, Levon; Torbert, Roy; Chen, Li-Jen; Chandler, Michael; Dorelli, John; Ergun, Robert; Fuselier, Stephen; Gershman, Daniel; Lavraud, Benoit; Moore, Thomas; Paterson, William; Pollock, Craig; Russell, Christopher; Saito, Yoshifumi; Strangeway, Robert; Wang, Shan

    2017-04-01

    At the dayside magnetopause, magnetic reconnection often occurs in thin sheets of plasma carrying electrical currents and rotating magnetic fields. Charged particles interact strongly with the magnetic field and simultaneously their motions modify the fields. Researchers are able to simulate the macroscopic interactions between the two plasma domains on both sides of the magnetopause and, for precise results, include individual particle motions to better describe the microscopic scales. Here, observed ion and electron distributions are compared for asymmetric reconnection events with weak-, moderate-, and strong-guide fields. Several of the structures noted have been demonstrated in simulations and others have not been predicted or explained to date. We report on these observations and their persistence. In particular, we highlight counterstreaming low-energy ion distributions that are seen to persist regardless of increasing guide field. Distributions of this type were first published by Burch and Phan [GRL, 2016] for an 8 Dec 2015 event and by Wang et al. [GRL, 2016] for a 16 Oct 2015 event. Wang et al. showed that the distributions were produced by the reflection of magnetosheath ions by the normal electric field at the magnetopause. This report presents further results on the relationship between the counterstreaming ions and the electron distributions, which show the ions traversing the magnetosheath, the X-line, and in one case the electron stagnation point. We suggest that the counterstreaming ions become the source of D-shaped distributions at points where the field line opening is indicated by the electron distributions. In addition, we suggest that they become the source of ion crescent distributions that result from acceleration of ions by the reconnection electric field. Burch, J. L., and T. D. Phan (2016), Magnetic reconnection at the dayside magnetopause: Advances with MMS, Geophys. Res. Lett., 43, 8327-8338, doi:10.1002/2016GL069787. Wang, S., et al. (2016), Two-scale ion meandering caused by the polarization electric field during asymmetric reconnection, Geophys. Res. Lett., 43, 7831-7839, doi:10.1002/2016GL069842.

  16. Representations and uses of light distribution functions

    NASA Astrophysics Data System (ADS)

    Lalonde, Paul Albert

    1998-11-01

    At their lowest level, all rendering algorithms depend on models of local illumination to define the interplay of light with the surfaces being rendered. These models depend both on the representations of light scattering at a surface due to reflection and, to an equal extent, on the representation of light sources and light fields. Emission and reflection have in common that they describe how light leaves a surface as a function of direction. Reflection also depends on an incident light direction, and emission can depend on the position on the light source. We call the functions representing emission and reflection light distribution functions (LDFs). There are some difficulties in using measured light distribution functions. The data sets are very large: the size of the data grows with the fourth power of the sampling resolution. For example, a bidirectional reflectance distribution function (BRDF) sampled at five degrees angular resolution, which is arguably insufficient to capture highlights and other high-frequency effects in the reflection, can easily require one and a half million samples. Once acquired, these data require some form of interpolation to be usable. Any compression method used must be efficient, both in space and in the time required to evaluate the function at a point or over a range of points. This dissertation examines a wavelet representation of light distribution functions that addresses these issues. A data structure is presented that allows efficient reconstruction of LDFs for a given set of parameters, making the wavelet representation feasible for rendering tasks. Texture mapping methods that take advantage of our LDF representations are examined, as well as techniques for filtering LDFs and methods for using wavelet-compressed bidirectional reflectance distribution functions (BRDFs) and light sources with Monte Carlo path tracing algorithms. The wavelet representation effectively compresses BRDF and emission data while inducing only a small error in the reconstructed signal. The representation can be used to evaluate efficiently some integrals that appear in shading computations, which allows fast, accurate computation of local shading. The representation can be used to represent light fields and is used to reconstruct views of environments interactively from a precomputed set of views. The representation of the BRDF also allows the efficient generation of reflected directions for Monte Carlo ray tracing applications. The method can be integrated into many different global illumination algorithms, including ray tracers and wavelet radiosity systems.
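
    To make the compression idea concrete, the sketch below applies a 2D wavelet decomposition to one slice of a sampled BRDF (e.g. the outgoing-direction grid for a fixed incident direction), keeps only the largest coefficients, and reconstructs. It uses the PyWavelets package rather than the dissertation's own data structures, and the wavelet, retention fraction, and function name are illustrative assumptions.

```python
import numpy as np
import pywt

def compress_brdf_slice(brdf, wavelet="haar", keep=0.05):
    """Sketch of wavelet compression of a 2D slice of a sampled BRDF.

    brdf : 2D array of reflectance samples for one incident direction.
    keep : fraction of wavelet coefficients retained (by magnitude).
    Returns the reconstruction from the sparsified coefficients.
    """
    coeffs = pywt.wavedec2(brdf, wavelet)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0                     # discard small coefficients
    sparse = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(sparse, wavelet)
```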

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta

    Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the far-IR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC. There are far fewer far-IR bright YSOs in the SMC than in the LMC due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high (“probable”) and moderate (“possible”) likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution, including extreme and post-asymptotic giant branch stars, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint to obtain accurate far-IR photometry and are unclassified.

  18. Quantum key distribution in a multi-user network at gigahertz clock rates

    NASA Astrophysics Data System (ADS)

    Fernandez, Veronica; Gordon, Karen J.; Collins, Robert J.; Townsend, Paul D.; Cova, Sergio D.; Rech, Ivan; Buller, Gerald S.

    2005-07-01

    In recent years quantum information research has led to the discovery of a number of remarkable new paradigms for information processing and communication. These developments include quantum cryptography schemes that offer unconditionally secure information transport guaranteed by quantum-mechanical laws. Such potentially disruptive security technologies could be of high strategic and economic value in the future. Two major issues confronting researchers in this field are the transmission range (typically <100 km) and the key exchange rate, which can be as low as a few bits per second at long optical fiber distances. This paper describes further research of an approach to significantly enhance the key exchange rate in an optical fiber system at distances in the range of 1-20 km. We will present results on a number of application scenarios, including point-to-point links and multi-user networks. Quantum key distribution systems have been developed which use standard telecommunications optical fiber and which are capable of operating at clock rates of up to 2 GHz. They implement a polarization-encoded version of the B92 protocol and employ vertical-cavity surface-emitting lasers with emission wavelengths of 850 nm as weak coherent light sources, as well as silicon single-photon avalanche diodes as the single-photon detectors. The point-to-point quantum key distribution system exhibited a quantum bit error rate of 1.4% and an estimated net bit rate greater than 100,000 bits per second for a 4.2 km transmission range.

  19. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model reported previously, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLDs). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a singular methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  20. On the power output of some idealized source configurations with one or more characteristic dimensions

    NASA Technical Reports Server (NTRS)

    Levine, H.

    1982-01-01

    The calculation of power output from a (finite) linear array of equidistant point sources is investigated with allowance for a relative phase shift and particular focus on the circumstances of small/large individual source separation. A key role is played by the estimates found for a two-parameter definite integral involving the Fejér kernel function of order N, where N denotes a (positive) integer; these results also permit a quantitative accounting of the energy partition between the principal and secondary lobes of the array pattern. Continuously distributed sources along a finite line segment or an open-ended circular cylindrical shell are considered, and estimates for the relatively lower output in the latter configuration are made explicit when the shell radius is small compared to the wavelength. A systematic reduction of diverse integrals which characterize the energy output from specific line and strip sources is investigated.

  1. Ion energy distribution near a plasma meniscus with beam extraction for multi element focused ion beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, Jose V.; Paul, Samit; Bhattacharjee, Sudeep

    2010-05-15

    An earlier study of the axial ion energy distribution in the extraction region (plasma meniscus) of a compact microwave plasma ion source showed that the axial ion energy spread near the meniscus is small (≈5 eV) and comparable to that of a liquid metal ion source, making it a promising candidate for focused ion beam (FIB) applications [J. V. Mathew and S. Bhattacharjee, J. Appl. Phys. 105, 96101 (2009)]. In the present work we have investigated the radial ion energy distribution (IED) under the influence of beam extraction. Initially, a single Einzel lens system has been used for beam extraction with potentials up to -6 kV for obtaining parallel beams. In situ measurements of the IED with extraction voltages up to -5 kV indicate that beam extraction has a weak influence on the energy spread (±0.5 eV), which is of significance from the point of view of FIB applications. It is found that by reducing the geometrical acceptance angle at the ion energy analyzer probe, a close to unidirectional distribution can be obtained with a spread that is smaller by at least 1 eV.

  2. A diabatic circulation two-dimensional model with photochemistry - Simulations of ozone and long-lived tracers with surface sources

    NASA Technical Reports Server (NTRS)

    Stordal, F.; Isaksen, I. S. A.; Horntveth, K.

    1985-01-01

    Numerous studies have been concerned with the possibility of a reduction of the stratospheric ozone layer. Such a reduction could lead to an enhanced penetration of ultraviolet (UV) radiation to the ground and, as a result, to damage to several biological processes. It is pointed out that the distributions of many trace gases, such as ozone, are governed in part by transport processes. The present investigation presents a two-dimensional photochemistry-transport model using the residual circulation. The global distribution of both ozone and components with ground sources computed in this model is in good agreement with observations even though slow diffusion is adopted. The agreement is particularly good in the Northern Hemisphere. The results provide additional support for the idea that tracer transport in the stratosphere is mainly of an advective nature.

  3. Water security-National and global issues

    USGS Publications Warehouse

    Tindall, James A.; Campbell, Andrew A.

    2010-01-01

    Potable or clean freshwater availability is crucial to life and economic, environmental, and social systems. The amount of freshwater is finite and makes up approximately 2.5 percent of all water on the Earth. Freshwater supplies are small and randomly distributed, so water resources can become points of conflict. Freshwater availability depends upon precipitation patterns, changing climate, and whether the source of consumed water comes directly from desalination, precipitation, or surface and (or) groundwater. At local to national levels, difficulties in securing potable water sources increase with growing populations and economies. Available water improves living standards and drives urbanization, which increases average water consumption per capita. Commonly, disruptions in sustainable supplies and distribution of potable water and conflicts over water resources become major security issues for Government officials. Disruptions are often influenced by land use, human population, use patterns, technological advances, environmental impacts, management processes and decisions, transnational boundaries, and so forth.

  4. Performance of Four-Leg VSC based DSTATCOM using Single Phase P-Q Theory

    NASA Astrophysics Data System (ADS)

    Jampana, Bangarraju; Veramalla, Rajagopal; Askani, Jayalaxmi

    2017-02-01

    This paper presents single-phase P-Q theory for a four-leg VSC based distributed static compensator (DSTATCOM) in the distribution system. The proposed DSTATCOM maintains unity power factor at the source, provides zero voltage regulation, eliminates current harmonics, and performs load balancing and neutral current compensation. The advantage of using a four-leg VSC based DSTATCOM is the elimination of the isolated/non-isolated transformer connection at the point of common coupling (PCC) for neutral current compensation. Eliminating the transformer connection at the PCC with the proposed topology reduces the cost of the DSTATCOM. The single-phase P-Q theory control algorithm is used to extract the fundamental components of the active and reactive currents for generation of the reference source currents, based on the indirect current control method. The proposed DSTATCOM is modelled and the results are validated with various consumer loads under unity power factor and zero voltage regulation modes in the MATLAB R2013a environment using the SimPowerSystems toolbox.
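
    As an illustration of the control idea, the sketch below extracts a sinusoidal reference source current from sampled PCC voltage and load current using a single-phase p-q formulation. It builds the quadrature (beta) signals with a Hilbert transform and low-pass filters the instantaneous power, which is one common way to realize the scheme; the paper may use a different quadrature-generation or filtering method, and all names and parameter values here are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def reference_source_current(v_pcc, i_load, fs=10_000):
    """Sketch of single-phase p-q theory reference-current extraction.

    v_pcc : PCC voltage samples; i_load : load current samples; fs : sample rate.
    Returns a sinusoidal reference source current in phase with the voltage,
    carrying only the average (fundamental active) power of the load.
    """
    # Build the orthogonal (alpha-beta) pair via a 90-degree phase shift.
    v_a, v_b = np.asarray(v_pcc, float), np.imag(hilbert(v_pcc))
    i_a, i_b = np.asarray(i_load, float), np.imag(hilbert(i_load))
    p = v_a * i_a + v_b * i_b              # instantaneous active power term
    # Low-pass filter to keep only the DC (fundamental active) component.
    b, a = butter(2, 10 / (fs / 2))        # 10 Hz cutoff (illustrative)
    p_dc = filtfilt(b, a, p)
    v_sq = filtfilt(b, a, v_a**2 + v_b**2)
    return p_dc * v_a / v_sq               # reference source current
```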

  5. High resolution energy-angle correlation measurement of hard x rays from laser-Thomson backscattering.

    PubMed

    Jochmann, A; Irman, A; Bussmann, M; Couperus, J P; Cowan, T E; Debus, A D; Kuntzsch, M; Ledingham, K W D; Lehnert, U; Sauerbrey, R; Schlenvoigt, H P; Seipt, D; Stöhlker, Th; Thorn, D B; Trotsenko, S; Wagner, A; Schramm, U

    2013-09-13

    Thomson backscattering of intense laser pulses from relativistic electrons not only allows for the generation of bright x-ray pulses but also for the investigation of the complex particle dynamics at the interaction point. For this purpose a complete spectral characterization of a Thomson source powered by a compact linear electron accelerator is performed with unprecedented angular and energy resolution. A rigorous statistical analysis comparing experimental data to 3D simulations enables, e.g., the extraction of the angular distribution of electrons with 1.5% accuracy and, in total, provides predictive capability for the future high brightness hard x-ray source PHOENIX (photon electron collider for narrow bandwidth intense x rays) and potential gamma-ray sources.

  6. Methane bubbling from northern lakes: present and future contributions to the global methane budget.

    PubMed

    Walter, Katey M; Smith, Laurence C; Chapin, F Stuart

    2007-07-15

    Large uncertainties in the budget of atmospheric methane (CH4) limit the accuracy of climate change projections. Here we describe and quantify an important source of CH4 -- point-source ebullition (bubbling) from northern lakes -- that has not been incorporated in previous regional or global methane budgets. Employing a method recently introduced to measure ebullition more accurately by taking into account its spatial patchiness in lakes, we estimate point-source ebullition for 16 lakes in Alaska and Siberia that represent several common northern lake types: glacial, alluvial floodplain, peatland and thermokarst (thaw) lakes. Extrapolation of measured fluxes from these 16 sites to all lakes north of 45 degrees N using circumpolar databases of lake and permafrost distributions suggests that northern lakes are a globally significant source of atmospheric CH4, emitting approximately 24.2 ± 10.5 Tg CH4 yr-1. Thermokarst lakes have particularly high emissions because they release CH4 produced from organic matter previously sequestered in permafrost. A carbon mass balance calculation of CH4 release from thermokarst lakes on the Siberian yedoma ice complex suggests that these lakes alone would emit as much as approximately 49,000 Tg CH4 if this ice complex were to thaw completely. Using a space-for-time substitution based on the current lake distributions in permafrost-dominated and permafrost-free terrains, we estimate that lake emissions would be reduced by approximately 12% in a more probable transitional permafrost scenario and by approximately 53% in a 'permafrost-free' Northern Hemisphere. Long-term decline in CH4 ebullition from lakes due to lake area loss and permafrost thaw would occur only after the large release of CH4 associated with thermokarst lake development in the zone of continuous permafrost.

  7. Leptospirosis risk around a potential source of infection

    NASA Astrophysics Data System (ADS)

    Loaiza-Echeverry, Erica; Hincapié-Palacio, Doracelly; Ochoa Acosta, Jesús; Ospina Giraldo, Juan

    2015-05-01

    Leptospirosis is a bacterial zoonosis with worldwide distribution and a multiform clinical spectrum in humans and animals. The etiology of the disease is the pathogenic species of Leptospira, which cause diverse manifestations, from mild to serious, such as Weil disease and the pulmonary hemorrhagic syndrome, with lethality of 10%-50%. This is an emerging problem of urban health due to the growth of marginal neighborhoods without basic sanitary conditions and an increased number of rodents. The presence of rodents and the probability of having contact with their urine determine the likelihood for humans to become infected. In this paper, we simulate the spatial distribution of the risk of human leptospirosis infection according to the proximity to rodent burrows considered as potential sources of infection. A Bessel function K0 of the distance r from the potential point source, with scale parameter α in meters, was used. Simulation inputs were published data on the leptospirosis incidence rate (range of 5 to 79 per 10,000) and distances of 100 to 5000 meters from the source of infection. We obtained an adequate adjustment between the function and the simulated data. The risk of infection increases with proximity to the potential source. This estimation can become a guide to propose effective measures of control and prevention.
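
    A minimal sketch of the distance-risk kernel described above, assuming SciPy's modified Bessel function of the second kind K0; the scale value and the clipping of very small distances are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.special import k0

def relative_infection_risk(r, alpha=500.0):
    """Relative leptospirosis infection risk at distance r (metres) from a
    rodent burrow, using a K0 Bessel kernel with scale parameter alpha.
    alpha = 500 m is a placeholder value."""
    r = np.asarray(r, dtype=float)
    return k0(np.maximum(r, 1.0) / alpha)   # clip r to avoid the singularity at 0

# Example: risk falls off with distance from the potential point source.
distances = np.array([100, 500, 1000, 5000])
print(relative_infection_risk(distances))
```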

  8. Organic pollutants in the coastal environment off San Diego, California. 1: Source identification and assessment by compositional indices of polycyclic aromatic hydrocarbons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, E.Y.; Vista, C.L.

    1997-02-01

    Samples collected in January and June 1994 from the Point Loma Wastewater Treatment Plant (PLWTP) effluent, Tijuana River runoff, and the microlayer, sediment trap, and surface sediment at several locations adjacent to the PLWTP outfall, the mouth of the Tijuana River, and San Diego Bay were analyzed in an attempt to identify and assess the sources of hydrocarbon inputs into the coastal marine environment off San Diego. Several compositional indices of polycyclic aromatic hydrocarbons (PAHs), for example, alkyl homologue distributions, parent compound distributions, and other individual PAH ratios, were used to identify the sources of PAHs. Partially due to the decline of PAH emission from the PLWTP outfall, PAHs found in the sea surface microlayer, sediments, and water column particulates near the PLWTP outfall were predominantly derived from nonpoint sources. The sea microlayer near the mouth of the Tijuana River appeared to accumulate enhanced amounts of PAHs, total organic carbon, and total nitrogen, probably discharged from the river, although they were in extremely low abundance in the sediments at the same location. Surprisingly, PAHs detected in the microlayer and sediments in San Diego Bay were mainly derived from combustion sources rather than oil spills, despite the heavy shipping activities in the area.

  9. WISEGAL. WISE for the Galactic Plane

    NASA Astrophysics Data System (ADS)

    Noriega-Crespo, Alberto

    There is a genuine community effort to study, on a global scale, the properties of the Milky Way, such as its structure, star formation, and interstellar medium, and to use this knowledge to create accurate templates for understanding the properties of extragalactic systems. A testament to this effort are the multi-wavelength surveys of the Galactic Plane that have recently been carried out or are underway from both the ground (e.g. IPHAS, ATLASGAL, JCMT Galactic Plane Survey) and space (GLIMPSE, MIPSGAL, HiGAL). Adding to this wealth of data is the recent release of approximately 57 percent of the whole sky by the Wide-field Infrared Survey Explorer (WISE) team in the form of high angular resolution and sensitive mid-IR (3.4, 4.6, 12 and 22 micron) images and point source catalogs, encompassing nearly three quarters of the Galactic Plane, including the less studied regions of the Outer Galaxy. The WISE Atlas Images are spectacular, but to take full advantage of them, they need to be transformed from their default Data Number (DN) units into absolutely calibrated surface brightness units. Furthermore, to mitigate the contamination effect of the point sources on the extended/diffuse emission, we will remove them and create residual images. This processing will enable a wide range of science projects using the Atlas Images, where measuring the spectral energy distribution of the extended emission is crucial. In this project we propose to transform the W3 (12 micron) and W4 (22 micron) images of the Galactic Plane, in particular of the Outer Galaxy where WISE provides a unique data set, into background-calibrated, point-source subtracted images using IRIS (DIRBE-calibrated IRAS data). This transformation will allow us to carry out research projects on massive star formation, the properties of dust in the diffuse ISM, the three-dimensional distribution of the dust emission in the Galaxy, and the mid/far-infrared properties of supernova remnants, among others, and to perform a detailed comparison between the characteristics (e.g. star formation rate, dust properties) of the Inner and Outer Galaxy. The background-calibrated, point-source subtracted images will be released to the astronomical community to be fully exploited and used in many other science projects beyond those proposed here.

  10. Point to point multispectral light projection applied to cultural heritage

    NASA Astrophysics Data System (ADS)

    Vázquez, D.; Alvarez, A.; Canabal, H.; Garcia, A.; Mayorga, S.; Muro, C.; Galan, T.

    2017-09-01

    The use of new light sources based on LED technology should allow the development of systems that combine conservation and exhibition requirements and make these works of art available to future generations according to sustainability principles. The goal of this work is to develop lighting systems and sources with an optimized spectral distribution for each specific point of the art piece. This optimization process implies maximizing the color fidelity of the reproduction while minimizing the photochemical damage. Perceived color under these sources will be similar (metameric) to the technical requirements given by the restoration team in charge of the conservation and exhibition of the works of art. Depending on the fragility of the exposed art objects (i.e. the spectral responsivity of the material), the irradiance must be kept under a critical level. Therefore, it is necessary to develop a mathematical model that simulates with enough accuracy both the visual effect of the illumination and the photochemical impact of the radiation. The mathematical model is based on a merit function that optimizes the individual intensities of the LED light sources, taking into account the damage function of the material and the color space coordinates. Moreover, the algorithm uses weights for damage and color fidelity in order to adapt the model to a specific museum application. In this work we show a sample of this technology applied to a picture by Sorolla (1863-1923), an important Spanish painter, titled "Woman Walking at the Beach".
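
    The sketch below shows one way such a merit function could be set up: per-LED drive levels are found by minimizing a weighted sum of a spectral colour-matching error and a photochemically weighted exposure term. The quadratic colour term, the weights, and all array and function names are illustrative assumptions rather than the model described in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_led_weights(led_spectra, target_spd, damage_weights,
                         w_color=1.0, w_damage=0.1):
    """Toy merit-function optimizer for a multi-LED museum light source.

    led_spectra   : (n_led, n_wl) spectral power of each LED channel
    target_spd    : (n_wl,) spectrum giving the desired (metameric) colour
    damage_weights: (n_wl,) relative photochemical damage weighting
    Returns per-LED drive levels in [0, 1].
    """
    n_led = led_spectra.shape[0]

    def merit(x):
        spd = x @ led_spectra                      # combined output spectrum
        color_err = np.sum((spd - target_spd) ** 2)
        damage = np.sum(spd * damage_weights)      # weighted photochemical dose
        return w_color * color_err + w_damage * damage

    res = minimize(merit, np.full(n_led, 0.5), bounds=[(0.0, 1.0)] * n_led)
    return res.x
```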

  11. SU-F-T-336: A Quick Auto-Planning (QAP) Method for Patient Intensity Modulated Radiotherapy (IMRT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, J; Zhang, Z; Wang, J

    2016-06-15

    Purpose: The aim of this study is to develop a quick auto-planning system that permits fast patient IMRT planning with conformal dose to the target without manual field alignment and time-consuming dose distribution optimization. Methods: The planning target volumes (PTVs) of the source and target patients were projected to the iso-center plane in certain beam's-eye-view directions to derive the 2D projected shapes. Assuming the target interior was isotropic, for each beam direction a boundary analysis in polar coordinates was performed to map the source shape boundary to the target shape boundary and derive the source-to-target shape mapping function. The derived shape mapping function was used to morph the source beam aperture to the target beam aperture over all segments in each beam direction. The target beam weights were re-calculated to deliver the same dose to the reference point (iso-center) as the source beam did in the source plan. The approach was tested on two rectum patients (one source patient and one target patient). Results: The IMRT planning time by QAP was 5 seconds on a laptop computer. The dose volume histograms and the dose distribution showed that the target patient had similar PTV dose coverage and OAR dose sparing to the source patient. Conclusion: The QAP system can instantly and automatically complete IMRT planning without dose optimization.
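
    A minimal geometric sketch of the boundary-mapping step described in Methods: each aperture point keeps its polar angle about the source-shape centroid and has its radius rescaled by the target-to-source boundary ratio at that angle. The function names, angular sampling, and centroid choice are assumptions for illustration, not details from the abstract.

```python
import numpy as np

def radius_profile(boundary_xy, angles):
    """Boundary radius as a function of polar angle about the shape centroid."""
    c = boundary_xy.mean(axis=0)
    d = boundary_xy - c
    theta = np.arctan2(d[:, 1], d[:, 0])
    r = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(theta)
    return np.interp(angles, theta[order], r[order], period=2 * np.pi), c

def morph_aperture(points_xy, source_boundary, target_boundary, n_angles=360):
    """Map source-beam aperture points onto the target shape by rescaling
    each point's radius by the target/source boundary ratio at its angle."""
    angles = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    r_src, c_src = radius_profile(source_boundary, angles)
    r_tgt, c_tgt = radius_profile(target_boundary, angles)
    d = points_xy - c_src
    theta = np.arctan2(d[:, 1], d[:, 0])
    r = np.hypot(d[:, 0], d[:, 1])
    scale = np.interp(theta, angles, r_tgt / r_src, period=2 * np.pi)
    offsets = (r * scale)[:, None] * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return c_tgt + offsets
```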

  12. On the properties of dust and gas in the environs of V838 Monocerotis

    NASA Astrophysics Data System (ADS)

    Exter, K. M.; Cox, N. L. J.; Swinyard, B. M.; Matsuura, M.; Mayer, A.; De Beck, E.; Decin, L.

    2016-12-01

    Aims: We aim to probe the close and distant circumstellar environments of the stellar outburst object V838 Mon. Methods: Herschel far-infrared imaging and spectroscopy were taken at several epochs to probe the central point source and the extended environment of V838 Mon. PACS and SPIRE maps were used to obtain photometry of the dust immediately around V838 Mon and in the surrounding infrared-bright region. These maps were fitted in 1D and 2D to measure the temperature, mass, and β of the two dust sources. PACS and SPIRE spectra were used to detect emission lines from the extended atmosphere of the star, which were then modelled to study the physical conditions in the emitting material. HIFI spectra were taken to measure the kinematics of the extended atmosphere but unfortunately yielded no detections. Results: Fitting of the far-infrared imaging of V838 Mon reveals 0.5-0.6 M⊙ of ≈19 K dust in the environs (≈2.7 pc) surrounding V838 Mon. The surface-integrated infrared flux (signifying the thermal light echo) and the derived dust properties do not vary significantly between the different epochs. We also measured the photometry of the point source. As the peak of the SED (spectral energy distribution) lies outside the Herschel spectral range, it is only by incorporating data from other observatories and previous epochs that we can usefully fit the SED; in doing so we explicitly assume no evolution of the point source between the epochs. We find warm dust with a temperature of ≈300 K distributed over a radius of 150-200 AU. We fit the far-infrared CO lines arising from the point source, i.e., from an extended environment around V838 Mon. Assuming a spherical-shell model for this gas, we find that the CO appears to arise from two temperature zones: a cold zone (Tkin ≈ 18 K) that could be associated with the ISM, or possibly with a cold layer in the outermost part of the shell, and a warm (Tkin ≈ 400 K) zone that is associated with the extended environment of V838 Mon within a region of radius ≈210 AU. The SiO lines arise from a warm/hot zone. We did not fit the lines of H2O as they are far more dependent on the model assumed. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  13. Theoretical Pressure Distribution, Apparent Mass, and Moment of Inertia of a Disk Pendulum Oscillating at Low Frequency. M.S. Thesis - George Washington Univ., Washington, D. C.

    NASA Technical Reports Server (NTRS)

    Dunning, R. S.

    1973-01-01

    Equations are developed which give the pressure profile, the forces and torques on a disk pendulum by means of point source wave theory from acoustics. The pressure, force and torque equations for an unbaffled disk are developed. These equations are then used to calculate the apparent mass and apparent inertia for the pendulum.

  14. Gamma-Ray Astronomy Across 6 Decades of Energy: Synergy between Fermi, IACTs, and HAWC

    NASA Technical Reports Server (NTRS)

    Hui, C. Michelle

    2017-01-01

    Keywords: gamma-ray observatories, gamma-ray astrophysics, GeV-TeV sky survey, Galaxy, Galactic Plane, source distribution. The gamma-ray sky is currently well monitored with good survey coverage. Many instruments across different wavebands and messengers (X-rays, gamma rays, neutrinos, gravitational waves) are available for simultaneous observations, and both wide-field and pointing instruments are in development and coming online in the next decade (e.g., LIGO).

  15. A Survey of nearby, nearly face-on spiral galaxies

    NASA Astrophysics Data System (ADS)

    Garmire, Gordon

    2014-09-01

    This is a continuation of a survey of nearby, nearly face-on spiral galaxies. The main purpose is to search for evidence of collisions with small galaxies that show up in X-rays through the generation of hot shocked gas from the collision. Secondary objectives include study of the spatial distribution of point sources in the galaxy and the detection of evidence for a central massive black hole. These are alternate targets.

  16. Velocity: Speed with Direction. The Professional Career of Gen Jerome F. O’Malley

    DTIC Science & Technology

    2007-09-01


  17. Modeling of Ultrasonic and Terahertz Radiations in Defective Tiles for Condition Monitoring of Thermal Protection Systems

    DTIC Science & Technology

    2013-04-01


  18. Increasingly minimal bias routing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataineh, Abdulla; Court, Thomas; Roweth, Duncan

    2017-02-21

    A system and algorithm configured to generate diversity at the traffic source so that packets are uniformly distributed over all of the available paths, while increasing the likelihood of taking a minimal path with each hop the packet takes. This is achieved by configuring routing biases so as to prefer non-minimal paths at the injection point but increasingly prefer minimal paths as the packet proceeds, referred to herein as Increasing Minimal Bias (IMB).
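
    A toy sketch of a hop-dependent port-selection rule in this spirit: the probability of choosing a minimal-path port grows with the hop count, so traffic spreads over non-minimal paths near the injection point and converges onto minimal paths later. The linear bias schedule, parameter names, and tie-breaking are illustrative assumptions, not the patented algorithm.

```python
import random

def choose_port(minimal_ports, non_minimal_ports, hop, max_hops=6):
    """Sketch of an Increasing Minimal Bias (IMB) port-selection rule.

    At injection (hop 0) non-minimal ports are preferred to create path
    diversity; the bias toward minimal ports grows with each hop taken.
    """
    p_minimal = min(1.0, hop / max_hops)          # bias grows with hop count
    if minimal_ports and (not non_minimal_ports or random.random() < p_minimal):
        return random.choice(minimal_ports)       # take a minimal path
    return random.choice(non_minimal_ports)       # diversity: non-minimal path
```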

  19. Monitoring bacterial contamination of piped water supply in rural coastal Bangladesh.

    PubMed

    Ahsan, Md Sabbir; Akber, Md Ali; Islam, Md Atikul; Kabir, Md Pervez; Hoque, Md Ikramul

    2017-10-31

    Safe drinking water is scarce in southwest coastal Bangladesh because of the unavailability of fresh water. Given the high salinity of both groundwater and surface water in this area, harvested rainwater and rain-fed pond water became the main sources of drinking water. Both government and non-government organizations have recently introduced piped water supply in the rural coastal areas to ensure safe drinking water. We assessed the bacteriological quality of water at different points along the piped water distribution system (i.e., the source, treatment plant, household taps, street hydrants, and household storage containers) of Mongla municipality under Mongla Upazila in Bagerhat district. Water samples were collected at 2-month intervals from May 2014 to March 2015. Median E. coli and total coliform counts at the source, treatment plant, household taps, street hydrants, and household storage containers were, respectively, 225, 4, 7, 7, and 15 cfu/100 ml and 42,000, 545, 5000, 6150, and 18,800 cfu/100 ml. Concentrations of both indicator bacteria were reduced after treatment, although they did not satisfy the WHO drinking water standards. However, re-contamination in the distribution system and household storage containers indicates improper maintenance of the distribution system and a lack of personal hygiene.

  20. Water budget analysis and management for Bangkok Metropolis, Thailand.

    PubMed

    Singkran, Nuanchan

    2017-09-01

    The water budget of the Bangkok Metropolis system was analyzed using a material flow analysis model. Total imported flows into the system were 80,080 million m³ per year (Mm³ y⁻¹), including inflows from the Chao Phraya and Mae Klong rivers and rainwater. Total exported flows out of the system were 78,528 Mm³ y⁻¹, including outflow into the lower Chao Phraya River and tap water (TW) distributed to suburbs. Total rates of stock exchange (1,552 Mm³ y⁻¹) were found in the processes of water recycling, TW distribution, domestic use, swine farming, aquaculture, and paddy fields. Only 21% of the total amount of wastewater (1,255 Mm³ y⁻¹) was collected, with an insufficient treatment capacity of about 415 Mm³ y⁻¹. Domestic and business (industrial and commercial sectors) areas were major point sources, whereas paddy fields were a major non-point source of wastewater. To manage Bangkok's water budget, critical measures have to be considered. Wastewater treatment capacity and the efficiency of wastewater collection should be improved. On-site wastewater treatment plants for residential areas should be installed. Urban planning and land use zoning are suggested to control land use activities. Green technology should be supported to reduce wastewater from farming.

  1. Time-dependent source model of the Lusi mud volcano

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi, at 0.5-1.5 km and 4-5.5 km depth, as well as another shallow zone 7 km to the west of Lusi underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.

  2. Characterization of sources and loadings of fecal pollutants using microbial source tracking assays in urban and rural areas of the Grand River Watershed, Southwestern Ontario.

    PubMed

    Lee, Dae-Young; Lee, Hung; Trevors, Jack T; Weir, Susan C; Thomas, Janis L; Habash, Marc

    2014-04-15

    Sources of fecal water pollution were assessed in the Grand River and two of its tributaries (Ontario, Canada) using total and host-specific (human and bovine) Bacteroidales genetic markers in conjunction with reference information, such as land use and weather. In-stream levels of the markers and culturable Escherichia coli were also monitored during multiple rain events to gain information on fecal loadings to catchment from diffuse sources. Elevated human-specific marker levels were accurately identified in river water impacted by a municipal wastewater treatment plant (WWTP) effluent and at a downstream site in the Grand River. In contrast, the bovine-specific marker showed high levels of cattle fecal pollution in two tributaries, both of which are characterized as intensely farmed areas. The bovine-specific Bacteroidales marker increased with rainfall in the agricultural tributaries, indicating enhanced loading of cattle-derived fecal pollutants to river from non-point sources following rain events. However, rain-triggered fecal loading was not substantiated in urban settings, indicating continuous inputs of human-originated fecal pollutants from point sources, such as WWTP effluent. This study demonstrated that the Bacteroidales source tracking assays, in combination with land use information and hydrological data, may provide additional insight into the spatial and temporal distribution of source-specific fecal contamination in streams impacted by varying land uses. Using the approach described in this study may help to characterize impacted water sources and to design targeted land use management plans in other watersheds in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.

  4. Resin Flow Behavior Simulation of Grooved Foam Sandwich Composites with the Vacuum Assisted Resin Infusion (VARI) Molding Process

    PubMed Central

    Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo

    2012-01-01

    The resin flow behavior in the vacuum assisted resin infusion (VARI) molding process of foam sandwich composites was studied by both visualization flow experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time for grooved foam sandwich composites in the VARI process, and that the mold filling time decreases linearly as the DM/preform ratio increases. The pattern of the resin sources has a significant influence on the filling time: a center source fills faster than an edge source, a point source takes longer than a linear source, and short edge/center patterns need a longer time to fill the mold than long edge/center sources.

  5. Investigating the effects of point source and nonpoint source pollution on the water quality of the East River (Dongjiang) in South China

    USGS Publications Warehouse

    Wu, Yiping; Chen, Ji

    2013-01-01

    Understanding the physical processes of point source (PS) and nonpoint source (NPS) pollution is critical to evaluate river water quality and identify major pollutant sources in a watershed. In this study, we used the physically-based hydrological/water quality model, Soil and Water Assessment Tool, to investigate the influence of PS and NPS pollution on the water quality of the East River (Dongjiang in Chinese) in southern China. Our results indicate that NPS pollution was the dominant contribution (>94%) to nutrient loads except for mineral phosphorus (50%). A comprehensive Water Quality Index (WQI) computed using eight key water quality variables demonstrates that water quality is better upstream than downstream despite the higher level of ammonium nitrogen found in upstream waters. Also, the temporal (seasonal) and spatial distributions of nutrient loads clearly indicate the critical time period (from late dry season to early wet season) and pollution source areas within the basin (middle and downstream agricultural lands), which resource managers can use to accomplish substantial reduction of NPS pollutant loadings. Overall, this study helps our understanding of the relationship between human activities and pollutant loads and further contributes to decision support for local watershed managers to protect water quality in this region. In particular, the methods presented such as integrating WQI with watershed modeling and identifying the critical time period and pollutions source areas can be valuable for other researchers worldwide.

  6. Constraining the redshift distribution of ultrahigh-energy-cosmic-ray sources by isotropic gamma-ray background

    NASA Astrophysics Data System (ADS)

    Liu, Ruo-Yu; Taylor, Andrew; Wang, Xiang-Yu; Aharonian, Felix

    2017-01-01

    By interacting with the cosmic background photons during their propagation through intergalactic space, ultrahigh energy cosmic rays (UHECRs) produce energetic electron/positron pairs and photons which initiate electromagnetic cascades, contributing to the isotropic gamma-ray background (IGRB). The generated gamma-ray flux level depends strongly on the redshift evolution of the UHECR sources. Recently, the Fermi-LAT collaboration reported that 86 (+16/-14)% of the total extragalactic gamma-ray flux comes from extragalactic point sources, including unresolved ones. This leaves limited room for the diffuse gamma rays generated via UHECR propagation, and consequently constrains the source distribution in the Universe. Normalizing the total cosmic ray energy budget with the observed UHECR flux in the energy band of (1-4)×10^18 eV, we calculate the diffuse gamma-ray flux generated through UHECR propagation. We find that in order not to overshoot the new IGRB limit, these sub-ankle UHECRs should be produced mainly by nearby sources, with a possible non-negligible contribution from our Galaxy. The distance for the majority of UHECR sources can be further constrained if a given fraction of the observed IGRB at 820 GeV originates from UHECRs. We note that our result should be conservative, since there may be various other contributions to the IGRB that are not included here.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distribution as a function of flux density, and the spatial distribution of sources (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions and shows that, for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
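
    As a concrete illustration of the first generalization, the sketch below draws source flux densities from a two-segment (broken) power-law differential source count, continuous at the break. The indices, break flux, and limits are free parameters chosen by the user; this is only an illustration of the kind of source-count model the paper generalizes, not its actual parametrization.

```python
import numpy as np

def _pl_sample(u, a, b, alpha):
    """Inverse-CDF sample from dN/dS proportional to S**(-alpha) on [a, b]."""
    p = 1.0 - alpha
    return (a**p + u * (b**p - a**p)) ** (1.0 / p)

def _pl_weight(a, b, alpha):
    """Integral of S**(-alpha) over [a, b]."""
    p = 1.0 - alpha
    return (b**p - a**p) / p

def sample_broken_power_law(n, s_min, s_break, s_max, alpha1, alpha2, seed=None):
    """Draw n flux densities from a broken power-law dN/dS, continuous at s_break."""
    rng = np.random.default_rng(seed)
    w1 = _pl_weight(s_min, s_break, alpha1)
    w2 = s_break ** (alpha2 - alpha1) * _pl_weight(s_break, s_max, alpha2)
    low = rng.random(n) < w1 / (w1 + w2)          # which segment each source falls in
    flux = np.empty(n)
    flux[low] = _pl_sample(rng.random(low.sum()), s_min, s_break, alpha1)
    flux[~low] = _pl_sample(rng.random((~low).sum()), s_break, s_max, alpha2)
    return flux
```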

  8. Economics of electricity

    NASA Astrophysics Data System (ADS)

    Erdmann, G.

    2015-08-01

    The following text is an introduction to the economic theory of electricity supply and demand. The basic approach of economics has to reflect the physical peculiarities of electric power, which is based on the directed movement of electrons from the minus pole to the plus pole of a voltage source. The regular grid supply of electricity is characterized by a largely constant frequency and voltage. Thus, from a physical point of view electricity is a homogeneous product. From an economic point of view, however, electricity is not homogeneous. Wholesale electricity prices show significant fluctuations over time and between regions, because the product is not storable (in relevant quantities) and there may be bottlenecks in the transmission and distribution grids. The associated non-homogeneity is the starting point of the economic analysis of electricity markets.

  9. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  10. The proton and helium anomalies in the light of the Myriad model

    NASA Astrophysics Data System (ADS)

    Salati, Pierre; Génolini, Yoann; Serpico, Pasquale; Taillet, Richard

    2017-03-01

    A hardening of the proton and helium fluxes is observed above a few hundred GeV/nuc. The distribution of local sources of primary cosmic rays has been suggested as a potential solution to this puzzling behavior. Some authors even claim that a single source is responsible for the observed anomalies. But how probable are these explanations? To answer that question, our current description of cosmic ray Galactic propagation needs to be replaced by the Myriad model. In the former approach, sources of protons and helium nuclei are treated as a jelly continuously spread over space and time. A more accurate description is provided by the Myriad model, where sources are considered as point-like events. This leads to a probabilistic derivation of the fluxes of primary species and opens the possibility that larger-than-average values may be observed at the Earth. For a long time, though, a major obstacle has been the infinite variance associated with the probability distribution function which the fluxes follow. Several suggestions have been made to cure this problem, but none is entirely satisfactory. We go a step further here and solve the infinite variance problem of the Myriad model by making use of the generalized central limit theorem. We find that primary fluxes are distributed according to a stable law with a heavy tail, well known to financial analysts. The probability that the proton and helium anomalies are sourced by local SNRs can then be calculated. The p-values associated with the CREAM measurements turn out to be small, unless somewhat unrealistic propagation parameters are assumed.
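
    To make the stable-law idea concrete, the sketch below draws flux realizations from a heavy-tailed alpha-stable distribution and estimates the p-value of a hypothetical measured excess. The stability index, skewness, location, scale, and the observed value are all placeholder assumptions, not the values derived in the paper.

```python
import numpy as np
from scipy.stats import levy_stable

# Illustrative stable-law parameters for the summed flux of many point-like
# sources (generalized central limit theorem): index alpha < 2 gives a heavy tail.
alpha, beta = 1.5, 1.0            # stability index, maximal skewness
loc, scale = 1.0, 0.05            # mean flux level and dispersion (arbitrary units)

flux_samples = levy_stable.rvs(alpha, beta, loc=loc, scale=scale,
                               size=100_000, random_state=0)

observed_excess = 1.25            # hypothetical measured flux excess
p_value = np.mean(flux_samples >= observed_excess)
print(f"p-value of a flux >= {observed_excess}: {p_value:.4f}")
```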

  11. Factors affecting continued use of ceramic water purifiers distributed to tsunami-affected communities in Sri Lanka.

    PubMed

    Casanova, Lisa M; Walters, Adam; Naghawatte, Ajith; Sobsey, Mark D

    2012-11-01

    There is little information about continued use of point-of-use technologies after disaster relief efforts. After the 2004 tsunami, the Red Cross distributed ceramic water filters in Sri Lanka. This study determined factors associated with filter disuse and evaluated the quality of household drinking water. A cross-sectional survey of water sources and treatment, filter use, and household characteristics was administered by in-person oral interview, and household water quality was tested. Multivariable logistic regression was used to model the probability of filter non-use. At the time of the survey, 24% of households (107/452) did not use filters; the most common reason given was breakage (42%). The most common household water sources were taps and wells. Wells were used by 45% of filter users and 28% of non-users. Of households with taps, 75% had source water Escherichia coli in the lowest World Health Organisation risk category (<1/100 ml), compared with only 30% of households using wells. Tap households were approximately four times more likely to discontinue filter use than well households. After 2 years, 24% of households were non-users. The main factors were breakage and household water source; households with taps were more likely to stop use than households with wells. Tap water users also had higher-quality source water, suggesting that disuse is not necessarily negative and that monitoring of water quality can aid decision-making about continued use. To promote continued use, disaster recovery filter distribution efforts must be joined with capacity building for long-term water monitoring, supply chains, and local production. © 2012 Blackwell Publishing Ltd.

  12. Statistical approaches for the determination of cut points in anti-drug antibody bioassays.

    PubMed

    Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A

    2015-03-01

    Cut points in immunogenicity assays are used to classify future specimens as anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates, taking sources of future variability into account, such as runs, days, analysts, gender, drug spiking, and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may already contain ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers, and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
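
    For orientation, the sketch below computes a simple parametric screening cut point as an upper prediction bound on drug-naive signals, optionally on the log scale. It deliberately ignores the plate/run random effects, mixture modeling, and outlier handling the paper discusses, and is written in Python rather than the authors' R package mixADA; the false-positive rate and function name are assumptions.

```python
import numpy as np
from scipy import stats

def screening_cut_point(signals, log_transform=False, fp_rate=0.05):
    """Simple cut-point sketch for an ADA screening assay.

    Returns the upper (1 - fp_rate) prediction bound of drug-naive signals,
    computed on the log scale if log_transform is True.
    """
    x = np.log(signals) if log_transform else np.asarray(signals, dtype=float)
    n = x.size
    t = stats.t.ppf(1 - fp_rate, df=n - 1)
    upper = x.mean() + t * x.std(ddof=1) * np.sqrt(1 + 1 / n)
    return float(np.exp(upper)) if log_transform else float(upper)
```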

  13. Cardea: Dynamic Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an inter-operable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.

  14. Infrared imaging spectroscopy of the Galactic center - Distribution and motions of the ionized gas

    NASA Technical Reports Server (NTRS)

    Herbst, T. M.; Beckwith, S. V. W.; Forrest, W. J.; Pipher, J. L.

    1993-01-01

    High spatial- and spectral-resolution IR images of the Galactic center in the Br-gamma recombination line of hydrogen were taken. A coherent filament of gas extending from north of IRS 1, curving around the IRS 16/Sgr A complex, and continuing to the southwest is seen. Nine stellar sources have associated Br-gamma emission. The total Br-gamma line flux in the filament is approximately 3 × 10^-15 W/m^2. The distribution and kinematics of the northern arm suggest orbital motion; the observations are accordingly fit with elliptical orbits in the field of a central point mass.

  15. Speech-Message Extraction from Interference Introduced by External Distributed Sources

    NASA Astrophysics Data System (ADS)

    Kanakov, V. A.; Mironov, N. A.

    2017-08-01

    This study addresses the extraction of a speech signal originating from a given spatial point and the calculation of the intelligibility of the extracted voice message. The problem is solved by reducing the influence of interfering speech-message sources on the extracted signal. The method is based on introducing time delays, which depend on the spatial coordinates, into the recording channels. Audio recordings of the voices of eight different people were used as test objects. It is shown that increasing the number of microphones improves the intelligibility of the speech message extracted from the interference.
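
    A minimal delay-and-sum sketch of the time-delay idea described above: each channel is advanced by the propagation delay from a chosen focus point so that speech originating there adds coherently while interference does not. The microphone geometry, sample rate and signals are invented for illustration.

        # Hedged sketch of delay-and-sum extraction of a signal from a focus point.
        import numpy as np

        fs = 16000          # sample rate, Hz
        c = 343.0           # speed of sound, m/s
        mics = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0], [0.6, 0.0]])  # mic positions, m
        focus = np.array([1.5, 1.0])                                       # point to extract, m

        rng = np.random.default_rng(7)
        t = np.arange(0, 1.0, 1 / fs)
        delays = np.linalg.norm(mics - focus, axis=1) / c   # propagation delay per channel, s
        # Hypothetical recordings: a "speech" tone from the focus point plus noise.
        channels = [np.sin(2 * np.pi * 220 * (t - d)) + 0.5 * rng.standard_normal(t.size)
                    for d in delays]

        # Delay-and-sum: advance each channel by its delay (integer samples here)
        # and average, reinforcing the signal that originated at the focus point.
        aligned = [np.roll(ch, -int(round(d * fs))) for ch, d in zip(channels, delays)]
        beamformed = np.mean(aligned, axis=0)
        print("output RMS:", np.sqrt(np.mean(beamformed ** 2)))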

  16. Acoustic field in unsteady moving media

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Maestrello, L.; Ting, L.

    1995-01-01

    In the interaction of an acoustic field with a moving airframe the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution q(t,x), with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at point P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains V_i of D with the union of the V_i equal to D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection. The multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speeds.

  17. Stochastic sensitivity analysis of nitrogen pollution to climate change in a river basin with complex pollution sources.

    PubMed

    Yang, Xiaoying; Tan, Lit; He, Ruimin; Fu, Guangtao; Ye, Jinyin; Liu, Qun; Wang, Guoqing

    2017-12-01

    It is increasingly recognized that climate change could impose both direct and indirect impacts on the quality of the water environment. Previous studies have mostly concentrated on evaluating the impacts of climate change on non-point source pollution in agricultural watersheds. Few studies have assessed the impacts of climate change on the water quality of river basins with complex point and non-point pollution sources. In view of this gap, this paper aims to establish a framework for stochastic assessment of the sensitivity of water quality to future climate change in a river basin with complex pollution sources. A sub-daily soil and water assessment tool (SWAT) model was developed to simulate the discharge, transport, and transformation of nitrogen from multiple point and non-point pollution sources in the upper Huai River basin of China. A weather generator was used to produce 50 years of synthetic daily weather data for all 25 combinations of precipitation change (-10, 0, 10, 20, and 30%) and temperature change (increases of 0, 1, 2, 3, and 4 °C) scenarios. The generated daily rainfall series was disaggregated to the hourly scale and then used to drive the sub-daily SWAT model to simulate the nitrogen cycle under the different climate change scenarios. Our results in the study region indicate that (1) both total nitrogen (TN) loads and concentrations are insensitive to temperature change; (2) TN loads are highly sensitive to precipitation change, while TN concentrations are moderately sensitive; (3) the impacts of climate change on TN concentrations are more spatiotemporally variable than their impacts on TN loads; and (4) the wide distributions of TN loads and TN concentrations under individual climate change scenarios illustrate the important role of climatic variability in affecting water quality conditions. In summary, the large variability in SWAT simulation results within and between climate change scenarios highlights the uncertainty of the impacts of climate change and the need to incorporate extreme conditions in managing the water environment and developing climate change adaptation and mitigation strategies.
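
    The 25-scenario design described above can be sketched as a simple delta-change grid applied to a daily weather series before driving the watershed model; the observed series below is synthetic and the perturbation is a plain scaling/shift, not the study's weather generator and disaggregation scheme.

        # Hedged sketch: build the 5 x 5 grid of precipitation and temperature changes.
        import itertools
        import numpy as np

        rng = np.random.default_rng(2)
        days = 365
        obs_precip = rng.gamma(shape=0.6, scale=8.0, size=days)   # mm/day, hypothetical
        obs_temp = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)

        precip_changes = [-10, 0, 10, 20, 30]      # percent
        temp_changes = [0, 1, 2, 3, 4]             # degrees C

        scenarios = {}
        for dp, dt in itertools.product(precip_changes, temp_changes):
            scenarios[(dp, dt)] = {
                "precip": obs_precip * (1 + dp / 100.0),   # scale daily rainfall
                "temp": obs_temp + dt,                     # shift daily temperature
            }
        print(len(scenarios), "scenarios generated")       # 25, as in the study design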

  18. Landscape and flow metrics affecting the distribution of a federally-threatened fish: Improving management, model fit, and model transferability

    USGS Publications Warehouse

    Brewer, Shannon K.; Worthington, Thomas A.; Zhang, Tianjioa; Logue, Daniel R.; Mittelstet, Aaron R.

    2016-01-01

    Truncated distributions of pelagophilic fishes have been observed across the Great Plains of North America, with water use and landscape fragmentation implicated as contributing factors. Developing conservation strategies for these species is hindered by the existence of multiple competing flow regime hypotheses related to species persistence. Our primary study objective was to compare the predicted distributions of one pelagophil, the Arkansas River Shiner Notropis girardi, constructed using different flow regime metrics. Further, we investigated different approaches for improving temporal transferability of the species distribution model (SDM). We compared four hypotheses: mean annual flow (a baseline), the 75th percentile of daily flow, the number of zero-flow days, and the number of days above 55th percentile flows, to examine the relative importance of flows during the spawning period. Building on an earlier SDM, we added covariates that quantified wells in each catchment, point source discharges, and non-native species presence to a structured variable framework. We assessed the effects on model transferability and fit by reducing multicollinearity using Spearman’s rank correlations, variance inflation factors, and principal component analysis, as well as altering the regularization coefficient (β) within MaxEnt. The 75th percentile of daily flow was the most important flow metric related to structuring the species distribution. The number of wells and point source discharges were also highly ranked. At the default level of β, model transferability was improved using all methods to reduce collinearity; however, at higher levels of β, the correlation method performed best. Using β = 5 provided the best model transferability, while retaining the majority of variables that contributed 95% to the model. This study provides a workflow for improving model transferability and also presents water-management options that may be considered to improve the conservation status of pelagophils.
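
    A minimal sketch of the collinearity screening mentioned above (Spearman rank correlations followed by variance inflation factors). The covariate names, data, and the 0.7 correlation threshold are assumptions for illustration, not the study's landscape and flow metrics.

        # Hedged sketch: drop one of each highly rank-correlated covariate pair,
        # then check variance inflation factors (VIF) on the survivors.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from scipy.stats import spearmanr
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(3)
        n = 200
        df = pd.DataFrame({
            "q75_flow": rng.lognormal(2, 0.5, n),
            "zero_flow_days": rng.poisson(20, n).astype(float),
            "wells_per_catchment": rng.poisson(5, n).astype(float),
            "point_discharges": rng.poisson(2, n).astype(float),
        })
        df["mean_annual_flow"] = df["q75_flow"] * rng.normal(1.0, 0.05, n)  # deliberately collinear

        # 1) Spearman screen: keep the first member of any highly correlated pair.
        rho_mat, _ = spearmanr(df.values)
        rho = pd.DataFrame(rho_mat, index=df.columns, columns=df.columns)
        keep = []
        for col in df.columns:
            if all(abs(rho.loc[col, k]) < 0.7 for k in keep):
                keep.append(col)

        # 2) VIF check on the retained covariates (constant added for the regression).
        X = sm.add_constant(df[keep].values)
        vifs = {c: variance_inflation_factor(X, i + 1) for i, c in enumerate(keep)}
        print("retained:", keep)
        print("VIFs:", vifs)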

  19. Relationship between the Prediction Accuracy of Tsunami Inundation and Relative Distribution of Tsunami Source and Observation Arrays: A Case Study in Tokyo Bay

    NASA Astrophysics Data System (ADS)

    Takagawa, T.

    2017-12-01

    A rapid and precise tsunami forecast based on offshore monitoring is getting attention as a way to reduce human losses due to devastating tsunami inundation. We developed a forecast method based on the combination of hierarchical Bayesian inversion with a pre-computed database and rapid post-computation of tsunami inundation. The method was applied to Tokyo Bay to evaluate the efficiency of observation arrays against three tsunamigenic earthquakes: a scenario earthquake at the Nankai trough and the historic Genroku (1703) and Enpo (1677) earthquakes. In general, a dense observation array near the tsunami source has an advantage in both the accuracy and the rapidness of tsunami forecasts. To examine the effect of observation time length, we used four types of data with lengths of 5, 10, 20 and 45 minutes after earthquake occurrence. Prediction accuracy of tsunami inundation was evaluated using the simulated tsunami inundation areas around Tokyo Bay for the target earthquakes. The shortest time length for accurate prediction varied among the target earthquakes; here, accurate prediction means that the simulated values fall within the 95% credible intervals of the prediction. In the Enpo case, a 5-minute observation is enough for accurate prediction for Tokyo Bay, but 10 minutes and 45 minutes are needed in the Nankai trough and Genroku cases, respectively. The shortest time length for accurate prediction shows a strong relationship with the relative distance between the tsunami source and the observation arrays. In the Enpo case, offshore tsunami observation points are densely distributed even in the source region, so accurate prediction can be achieved within 5 minutes; such rapid prediction is useful for early warnings. Even in the worst case of Genroku, where fewer observation points are available near the source, accurate prediction can be obtained within 45 minutes. This information can be useful for outlining the hazard in an early stage of the response.

  20. Organic Compounds in Running Gutter Brook Water Used for Public Supply near Hatfield, Massachusetts, 2003-05

    USGS Publications Warehouse

    Brown, Craig J.; Trombley, Thomas J.

    2009-01-01

    The 258 organic compounds studied in this U.S. Geological Survey (USGS) assessment generally are man-made, including pesticides, solvents, gasoline hydrocarbons, personal-care and domestic-use products, and pavement- and combustion-derived compounds. Of these 258 compounds, 26 (about 10 percent) were detected at least once among the 31 samples collected approximately monthly during 2003-05 at the intake of a flowthrough reservoir on Running Gutter Brook in Massachusetts, one of several community water systems on tributaries of the Connecticut River. About 81 percent of the watershed is forested, 14 percent is agricultural land, and 5 percent is urban land. In most source-water samples collected at Running Gutter Brook, fewer compounds were detected and their concentrations were low (less than 0.1 micrograms per liter) when compared with compounds detected at other stream sites across the country that drain watersheds with a larger percentage of agricultural and urban areas. The relatively few compounds detected at low concentrations reflect the largely undeveloped land use at Running Gutter Brook. Despite the absence of wastewater discharge points on the stream, however, the compounds that were detected could indicate different sources and uses (point sources, precipitation, domestic, and agricultural) and different pathways to drinking-water supplies (overland runoff, groundwater discharge, leaking of treated water from distribution lines, and formation during treatment). Six of the 10 compounds detected most commonly (in at least 20 percent of the samples) in source water also were detected commonly in finished water (after treatment but prior to distribution). Concentrations in source and finished water generally were below 0.1 micrograms per liter and always less than human-health benchmarks, which are available for about one-half of the compounds detected. On the basis of this screening-level assessment, adverse effects to human health are expected to be negligible (subject to the limitations of available human-health benchmarks).

  1. An assessment of SEVIRI imagery at different temporal resolutions and the effect on accurate dust emission mapping

    NASA Astrophysics Data System (ADS)

    Hennen, Mark; White, Kevin; Shahgedanova, Maria

    2017-04-01

    This paper compares Dust RGB products derived from Spinning Enhanced Visible and Infrared Imager (SEVIRI) data at 15-minute, 30-minute and hourly temporal resolutions. From January 2006 to December 2006, dust emission point sources were observed at each temporal resolution across the entire Middle East region (38.50N; 30.00E - 10.00N; 65.50E). Previous work has demonstrated that 15-minute resolution SEVIRI data can be used to map dust sources across the Sahara by tracking dust storms back through sequential images to the point of first emission (Schepanski et al., 2007; 2009; 2012). These observations have improved upon lower-resolution maps based on daily retrievals of aerosol optical depth (AOD), whose maxima can be biased by prevalent transport routes that do not necessarily coincide with the sources of emission. Based on the thermal contrast of atmospheric dust with the surface, brightness temperature differences (BTDs) in the thermal infrared (TIR) wavelengths (8.7, 10.8 and 12.0 µm) highlight dust in the scene irrespective of solar illumination, giving both increased accuracy of dust source areas and a greater understanding of diurnal emission behaviour. However, the highest temporal resolution available (15-minute repeat capture) produces 96 images per day, resulting in significantly higher data storage demands than 30-minute or hourly data. To aid future research planning, this paper investigates what effect lowering the temporal resolution has on the number and spatial distribution of the observed dust sources. The results show a reduction in the number of dust emission events observed with each step decrease in temporal resolution, by 17% for 30-minute resolution and 50% for hourly resolution. These differences change seasonally, with the highest reduction observed in summer (34% and 64%, respectively). Each resolution shows a similar spatial distribution, with the biggest difference seen near the coastlines, where near-shore convective cloud patterns obscure atmospheric dust soon after emission, restricting the opportunity for it to be observed at hourly resolution.

  2. Concentration distribution and assessment of several heavy metals in sediments of west-four Pearl River Estuary

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Cao, Zhimin; Lan, Dongzhao; Zheng, Zhichang; Li, Guihai

    2008-09-01

    Grain size parameters, trace metals (Co, Cu, Ni, Pb, Cr, Zn, Ba, Zr and Sr) and total organic matter (TOM) of 38 surficial sediments and a sediment core from the west-four Pearl River Estuary region were analyzed. The spatial distribution and transport processes of the chemical elements in the surficial sediments were the main focus. Multivariate statistics were used to analyze the interrelationships of the metal elements, TOM and the grain size parameters. The results demonstrate that terrigenous sediments carried by the rivers are the main sources of the trace metal elements and TOM, and that the lithology of the parent material is the dominant factor controlling the trace metal composition of the surficial sediment. In addition, the hydrodynamic conditions and landform are the dominant factors controlling the large-scale distribution, while anthropogenic input in the coastal area alters the regional distribution of the heavy metal elements Co, Cu, Ni, Pb, Cr and Zn. Enrichment factor (EF) analysis was used to differentiate between anthropogenic and naturally occurring metal sources and to assess the anthropogenic influence; the deeper-layer contents of heavy metals were taken as background values and Zr was chosen as the reference element for Co, Cu, Ni, Pb, Cr and Zn. The results indicate prevalent enrichment of Co, Cu, Ni, Pb and Cr, with the contamination of Pb being most pronounced; furthermore, the peculiarly high EF values of Zn and Pb at some sites probably suggest point-source input.
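
    A minimal sketch of the enrichment-factor calculation described above, EF = (M/Zr)_sample / (M/Zr)_background, using the deep-layer contents as background and Zr as the reference element. All concentrations and the enrichment threshold are invented for illustration, not the Pearl River Estuary data.

        # Hedged sketch: enrichment factors relative to Zr and a background layer.
        metals_sample = {"Co": 18.0, "Cu": 45.0, "Ni": 32.0, "Pb": 60.0, "Cr": 85.0, "Zn": 140.0}
        zr_sample = 210.0        # reference element in the surficial sediment, mg/kg

        metals_background = {"Co": 15.0, "Cu": 25.0, "Ni": 30.0, "Pb": 22.0, "Cr": 70.0, "Zn": 90.0}
        zr_background = 230.0    # reference element in the deep (background) layer, mg/kg

        for metal, conc in metals_sample.items():
            ef = (conc / zr_sample) / (metals_background[metal] / zr_background)
            flag = "enriched" if ef > 1.5 else "near background"   # illustrative threshold
            print(f"{metal}: EF = {ef:.2f} ({flag})")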

  3. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamson, Justus; Newton, Joseph; Yang Yun

    2012-07-15

    Purpose: To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T and O) applicator using Monte Carlo calculation and 3D dosimetry. Methods: For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. Monte Carlo code MCNP5 was used to simulate 1.5 × 10^9 photon histories from a 137Cs source placed in the bucket to achieve statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for 137Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE® dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5-8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. Optical CT scan time lasted 15 min during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)^3 isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of transmission rate through the bucket, which was applied to a clinical CT-based T and O implant plan. Results: The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°-4.7°. A systematic difference in bucket angle of 1°, 5°, and 10° caused a 1% ± 0.1%, 1.7% ± 0.4%, and 2.6% ± 0.7% increase in rectal dose, respectively, with smaller effect on dose to Point A, bladder, sigmoid, and bowel. For 3D dosimetry, 90.6% of voxels had a 3D γ-index (criteria = 0.1 cm, 3% local signal) below 1.0 when comparing measured and expected dose about the unshielded source. Dose transmission through the gold shielding at a radial distance of 1 cm was 85.9% ± 0.2%, 83.4% ± 0.7%, and 82.5% ± 2.2% for Monte Carlo, and measurement for left and right buckets, respectively. Dose transmission was lowest at oblique angles from the bucket, with a minimum of 56.7% ± 0.8%, 65.6% ± 1.7%, and 57.5% ± 1.6%, respectively. For a clinical T and O plan, attenuation from the buckets leads to a decrease in average Point A dose of ∼3.2% and a decrease in D2cc to bladder, rectum, bowel, and sigmoid of 5%, 18%, 6%, and 12%, respectively. Conclusions: Differences between dummy and afterloading bucket positions in the ovoids are minor compared to effects from asymmetric ovoid shielding, for which rectal dose is most affected. 3D dosimetry can fulfill a novel role in verifying Monte Carlo calculations of complex dose distributions as are common about brachytherapy sources and applicators.

  4. Commissioning a CT-compatible LDR tandem and ovoid applicator using Monte Carlo calculation and 3D dosimetry.

    PubMed

    Adamson, Justus; Newton, Joseph; Yang, Yun; Steffey, Beverly; Cai, Jing; Adamovics, John; Oldham, Mark; Chino, Junzo; Craciunescu, Oana

    2012-07-01

    To determine the geometric and dose attenuation characteristics of a new commercially available CT-compatible LDR tandem and ovoid (T&O) applicator using Monte Carlo calculation and 3D dosimetry. For geometric characterization, we quantified physical dimensions and investigated a systematic difference found to exist between nominal ovoid angle and the angle at which the afterloading buckets fall within the ovoid. For dosimetric characterization, we determined source attenuation through asymmetric gold shielding in the buckets using Monte Carlo simulations and 3D dosimetry. Monte Carlo code MCNP5 was used to simulate 1.5 × 10^9 photon histories from a 137Cs source placed in the bucket to achieve statistical uncertainty of 1% at a 6 cm distance. For 3D dosimetry, the distribution about an unshielded source was first measured to evaluate the system for 137Cs, after which the distribution was measured about sources placed in each bucket. Cylindrical PRESAGE® dosimeters (9.5 cm diameter, 9.2 cm height) with a central channel bored for source placement were supplied by Heuris Inc. The dosimeters were scanned with the Duke Large field of view Optical CT-Scanner before and after delivering a nominal dose at 1 cm of 5-8 Gy. During irradiation the dosimeter was placed in a water phantom to provide backscatter. Optical CT scan time lasted 15 min during which 720 projections were acquired at 0.5° increments, and a 3D distribution was reconstructed with a (0.05 cm)^3 isotropic voxel size. The distributions about the buckets were used to calculate a 3D distribution of transmission rate through the bucket, which was applied to a clinical CT-based T&O implant plan. The systematic difference in bucket angle relative to the nominal ovoid angle (105°) was 3.1°-4.7°. A systematic difference in bucket angle of 1°, 5°, and 10° caused a 1% ± 0.1%, 1.7% ± 0.4%, and 2.6% ± 0.7% increase in rectal dose, respectively, with smaller effect on dose to Point A, bladder, sigmoid, and bowel. For 3D dosimetry, 90.6% of voxels had a 3D γ-index (criteria = 0.1 cm, 3% local signal) below 1.0 when comparing measured and expected dose about the unshielded source. Dose transmission through the gold shielding at a radial distance of 1 cm was 85.9% ± 0.2%, 83.4% ± 0.7%, and 82.5% ± 2.2% for Monte Carlo, and measurement for left and right buckets, respectively. Dose transmission was lowest at oblique angles from the bucket, with a minimum of 56.7% ± 0.8%, 65.6% ± 1.7%, and 57.5% ± 1.6%, respectively. For a clinical T&O plan, attenuation from the buckets leads to a decrease in average Point A dose of ∼3.2% and a decrease in D2cc to bladder, rectum, bowel, and sigmoid of 5%, 18%, 6%, and 12%, respectively. Differences between dummy and afterloading bucket positions in the ovoids are minor compared to effects from asymmetric ovoid shielding, for which rectal dose is most affected. 3D dosimetry can fulfill a novel role in verifying Monte Carlo calculations of complex dose distributions as are common about brachytherapy sources and applicators.
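
    A minimal sketch of the gamma-index metric used above to compare measured and expected dose (0.1 cm distance-to-agreement, 3% local dose difference), written as a brute-force 1D calculation on synthetic profiles rather than the study's full 3D optical-CT grids.

        # Hedged sketch: 1D gamma index with local-dose normalization.
        import numpy as np

        dx = 0.05                       # voxel size, cm (matches the (0.05 cm)^3 grid)
        x = np.arange(0, 6, dx)         # radial positions, cm
        expected = 100.0 / (x + 0.5) ** 2             # reference dose profile (arbitrary units)
        measured = expected * (1 + 0.02 * np.sin(x))  # synthetic "measurement" with small deviations

        dta, dd = 0.1, 0.03             # distance-to-agreement (cm) and local dose-difference criteria

        def gamma_1d(ref_x, ref_d, eval_x, eval_d, dta, dd):
            """Gamma value at each reference point (local-dose normalization)."""
            gammas = np.empty_like(ref_d)
            for i, (xr, dr) in enumerate(zip(ref_x, ref_d)):
                dist2 = ((eval_x - xr) / dta) ** 2
                dose2 = ((eval_d - dr) / (dd * dr)) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))
            return gammas

        g = gamma_1d(x, expected, x, measured, dta, dd)
        print(f"pass rate (gamma < 1): {np.mean(g < 1.0) * 100:.1f}%")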

  5. Source partitioning of anthropogenic groundwater nitrogen in a mixed-use landscape, Tutuila, American Samoa

    NASA Astrophysics Data System (ADS)

    Shuler, Christopher K.; El-Kadi, Aly I.; Dulai, Henrietta; Glenn, Craig R.; Fackrell, Joseph

    2017-12-01

    This study presents a modeling framework for quantifying human impacts and for partitioning the sources of contamination related to water quality in the mixed-use landscape of a small tropical volcanic island. On Tutuila, the main island of American Samoa, production wells in the most populated region (the Tafuna-Leone Plain) produce most of the island's drinking water. However, much of this water has been deemed unsafe to drink since 2009. Tutuila has three predominant anthropogenic non-point-groundwater-pollution sources of concern: on-site disposal systems (OSDS), agricultural chemicals, and pig manure. These sources are broadly distributed throughout the landscape and are located near many drinking-water wells. Water quality analyses show a link between elevated levels of total dissolved groundwater nitrogen (TN) and areas with high non-point-source pollution density, suggesting that TN can be used as a tracer of groundwater contamination from these sources. The modeling framework used in this study integrates land-use information, hydrological data, and water quality analyses with nitrogen loading and transport models. The approach utilizes a numerical groundwater flow model, a nitrogen-loading model, and a multi-species contaminant transport model. Nitrogen from each source is modeled as an independent component in order to trace the impact from individual land-use activities. Model results are calibrated and validated with dissolved groundwater TN concentrations and inorganic δ15N values, respectively. Results indicate that OSDS contribute significantly more TN to Tutuila's aquifers than other sources, and thus should be prioritized in future water-quality management efforts.

  6. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model, introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay over the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The positions of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions) or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against data observed for the very shallow, unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing insight into possible refinements of the GMPEs' functional forms.
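
    A minimal sketch of the composite-subsource idea described above: subsources whose number-size statistics follow a power law (here N(>R) ∝ R^-2, a common fractal choice) are scattered over a rectangular fault. The fault dimensions, number of hierarchy levels and scaling constants are assumptions for illustration, not the published model's parameters.

        # Hedged sketch: fractal (power-law) number-size distribution of subsources.
        import numpy as np

        rng = np.random.default_rng(4)
        fault_length, fault_width = 30.0, 15.0   # km, hypothetical
        n_levels = 5                             # hierarchy of subsource sizes

        subsources = []
        for level in range(1, n_levels + 1):
            radius = fault_width / 2 ** level    # subsource radius halves each level
            count = 4 ** level                   # so N(>R) grows as R^-2 (fractal dimension 2)
            xs = rng.uniform(0, fault_length, count)
            ys = rng.uniform(0, fault_width, count)
            subsources += [(x, y, radius) for x, y in zip(xs, ys)]

        # Each subsource would contribute slip proportional to its radius; summing the
        # overlapping subsources yields a heterogeneous, k^-2-like slip distribution.
        print(f"{len(subsources)} subsources, radii from "
              f"{min(s[2] for s in subsources):.2f} to {max(s[2] for s in subsources):.2f} km")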

  7. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve using maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within a Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT samplings with a normal distribution and the simulated probability density curve based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  8. Laboratory Experiments Investigating Glacier Submarine Melt Rates and Circulation in an East Greenland Fjord

    NASA Astrophysics Data System (ADS)

    Cenedese, C.

    2014-12-01

    Idealized laboratory experiments investigate the glacier-ocean boundary dynamics near a vertical 'glacier' (i.e. no floating ice tongue) in a two-layer stratified fluid, similar to Sermilik Fjord where Helheim Glacier terminates. In summer, the discharge of surface runoff at the base of the glacier (subglacial discharge) intensifies the circulation near the glacier and increases the melt rate with respect to that in winter. In the laboratory, the effect of subglacial discharge is simulated by introducing fresh water at melting temperatures from either point or line sources at the base of an ice block representing the glacier. The circulation pattern observed both with and without subglacial discharge resembles those observed in previous studies. The buoyant plume of cold meltwater and subglacial discharge water entrains ambient water and rises vertically until it finds either the interface between the two layers or the free surface. The results suggest that the meltwater deposits within the interior of the water column and not entirely at the free surface, as confirmed by field observations. The submarine melt rate increases with the subglacial discharge rate. Furthermore, the same subglacial discharge causes greater submarine melting if it exits from a point source rather than from a line source. When the subglacial discharge exits from two point sources, two buoyant plumes are formed which rise vertically and interact. The results suggest that the distance between the two subglacial discharges influences the entrainment in the plumes and consequently the amount of submarine melting and the final location of the meltwater within the water column. Hence, the distribution and number of sources of subglacial discharge may play an important role in glacial melt rates and fjord stratification and circulation. Support was given by NSF project OCE-113008.

  9. Determination of geostatistically representative sampling locations in Porsuk Dam Reservoir (Turkey)

    NASA Astrophysics Data System (ADS)

    Aksoy, A.; Yenilmez, F.; Duzgun, S.

    2013-12-01

    Several factors such as wind action, bathymetry and shape of a lake/reservoir, inflows, outflows, and point and diffuse pollution sources result in spatial and temporal variations in the water quality of lakes and reservoirs. The guides by the United Nations Environment Programme and the World Health Organization for designing and implementing water quality monitoring programs suggest that even a single monitoring station near the center or at the deepest part of a lake will be sufficient to observe long-term trends if there is good horizontal mixing. In stratified water bodies, several samples can be required. According to the guide on sampling and analysis under the Turkish Water Pollution Control Regulation, a minimum of five sampling locations should be employed to characterize the water quality in a reservoir or a lake. The European Union Water Framework Directive (2000/60/EC) requires the selection of a sufficient number of monitoring sites to assess the magnitude and impact of point and diffuse sources and hydromorphological pressures when designing a monitoring program. Although existing regulations and guidelines include frameworks for determining sampling locations in surface waters, most of them do not specify a procedure for establishing monitoring aims with representative sampling locations in lakes and reservoirs. In this study, geostatistical tools are used to determine representative sampling locations in the Porsuk Dam Reservoir (PDR). Kernel density estimation and kriging were used in combination to select the representative sampling locations. Dissolved oxygen and specific conductivity were measured at 81 points, sixteen of which were used for validation. In selecting the representative sampling locations, care was taken to preserve the spatial structure of the measured parameter distributions, and a procedure was proposed for that purpose. Results indicated that the spatial structure was lost below 30 sampling points, a result of varying water quality in the reservoir due to inflows, point and diffuse inputs, and reservoir hydromorphology. Moreover, hot spots were determined based on kriging and standard error maps. Locations of the minimum number of sampling points that represent the actual spatial structure of the DO distribution in the Porsuk Dam Reservoir were determined.

  10. Coronae on stars

    NASA Technical Reports Server (NTRS)

    Haisch, B. M.

    1986-01-01

    Three lines of evidence are noted to point to a flare heating source for stellar coronae: a strong correlation between time-averaged flare energy release and coronal X-ray luminosity, the high temperature flare-like component of the spectral signature of coronal X-ray emission, and the observed short time scale variability that indicates continuous flare activity. It is presently suggested that flares may represent only the extreme high energy tail of a continuous distribution of coronal energy release events.

  11. Multi-spacecraft Observations of the Rotation and Nonradial Motion of a CME Flux Rope Causing an Intense Geomagnetic Storm

    NASA Astrophysics Data System (ADS)

    Liu, Yi A.; Liu, Ying D.; Hu, Huidong; Wang, Rui; Zhao, Xiaowei

    2018-02-01

    We present an investigation of the rotation and nonradial motion of a coronal mass ejection (CME) from AR 12468 on 2015 December 16 using observations from SDO, SOHO, STEREO A, and Wind. The EUV and HMI observations of the source region show that the associated magnetic flux rope (MFR) axis pointed to the east before the eruption. We use a nonlinear force-free field (NLFFF) extrapolation to determine the configuration of the coronal magnetic field and calculate the magnetic energy density distributions at different heights. The distribution of the magnetic energy density shows a strong gradient toward the northeast. The propagation direction of the CME from a Graduated Cylindrical Shell (GCS) modeling deviates from the radial direction of the source region by about 45° in longitude and about 30° in latitude, which is consistent with the gradient of the magnetic energy distribution around the AR. The MFR axis determined by the GCS modeling points southward, having rotated counterclockwise by about 95° compared with the orientation of the MFR in the low corona. The MFR reconstructed by a Grad-Shafranov (GS) method at 1 au has almost the same orientation as the MFR from the GCS modeling, which indicates that the MFR rotation occurred in the low corona. It is the rotation of the MFR that caused the intense geomagnetic storm, with a minimum Dst of -155 nT. These results suggest that the coronal magnetic field surrounding the MFR plays a crucial role in the MFR rotation and propagation direction.

  12. The environmental neurotoxin β-N-methylamino-l-alanine (l-BMAA) is deposited into birds' eggs.

    PubMed

    Andersson, Marie; Karlsson, Oskar; Brandt, Ingvar

    2018-01-01

    The neurotoxic amino acid β-N-methylamino-L-alanine (BMAA) has been implicated in the etiology of neurodegenerative disorders. BMAA is also a known developmental neurotoxin, and research indicates that the sources of human and wildlife exposure may be more diverse than previously anticipated. The aim of the present study was therefore to examine whether BMAA can be transferred into birds' eggs. Egg-laying quail were dosed with 14C-labeled BMAA. The distribution of radioactivity in the birds and their laid eggs was then examined at different time points by autoradiography and phosphoimaging analysis. To evaluate the metabolic stability of the BMAA molecule, the distributions of 14C-methyl- and 14C-carboxyl-labeled BMAA were compared. The results revealed a pronounced incorporation of radioactivity in the eggs, predominantly in the yolk but also in the albumen. Imaging analysis showed that the concentration of radioactivity in the liver decreased about seven-fold between the 24 h and 72 h time points, while the concentration in egg yolk remained largely unchanged. At 72 h the egg yolk contained about five times the concentration of radioactivity in the liver. Both BMAA preparations gave rise to similar distribution patterns in the bird tissues and in the eggs, indicating metabolic stability of the labeled groups. The demonstrated deposition into eggs warrants studies of BMAA's effects on bird development. Moreover, birds' eggs may be a source of human BMAA exposure, provided that the laying birds are exposed to BMAA via their diet. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Magneto-acousto-electrical tomography: a potential method for imaging current density and electrical impedance.

    PubMed

    Haider, S; Hrbek, A; Xu, Y

    2008-06-01

    This report primarily outlines our investigation of using magneto-acousto-electrical tomography (MAET) to image the lead field current density in volume conductors. A lead field current density distribution is obtained when a current/voltage source is applied to a sample via a pair of electrodes. This is the first time a high-spatial-resolution image of current density has been presented using MAET. We also compare an experimental image of current density in a sample with its corresponding numerical simulation. To image the lead field current density, rather than applying a current/voltage source directly to the sample, we place the sample in a static magnetic field and focus an ultrasonic pulse on the sample to simulate a point-like current dipole source at the focal point. Then, using electrodes, we measure the voltage/current signal which, based on the reciprocity theorem, is proportional to a component of the lead field current density. In the theory section, we derive the equation relating the measured voltage to the lead field current density and the displacement velocity caused by the ultrasound. The experimental data include the MAET signal and an image of the lead field current density for a thin sample. In addition, we discuss potential improvements for MAET, especially to overcome the limitation created by the observation that no signal was detected from the interior of a region having uniform conductivity. As an auxiliary result, we offer a mathematical formula whereby the lead field current density may be utilized to reconstruct the distribution of the electrical impedance in a piecewise smooth object.

  14. High-field neutral beam injection for improving the Q of a gas dynamic trap-based fusion neutron source

    NASA Astrophysics Data System (ADS)

    Zeng, Qiusun; Chen, Dehong; Wang, Minghuang

    2017-12-01

    In order to improve the fusion energy gain (Q) of a gas dynamic trap (GDT)-based fusion neutron source, a method is proposed in which the neutral beam is injected obliquely at a higher magnetic field position rather than at the mid-plane of the GDT. This method is beneficial for confining a higher density of fast ions at the turning point in the zone with a higher magnetic field, as well as for obtaining a higher mirror ratio by reducing the mid-plane field rather than increasing the mirror field. In this situation, collisional scattering loss of the denser fast ions will occur and change the confinement time, power balance and particle balance. Using an updated calculation model with high-field neutral beam injection for a GDT-based fusion neutron source conceptual design, we obtained four optimal design schemes in which Q was improved two- to three-fold compared with a conventional design scheme, while respecting the limits needed to avoid plasma instabilities, especially the fire-hose instability. The distribution of fast ions could be optimized by building a proper magnetic field configuration with enough space for neutron shielding and by multi-beam neutral particle injection at different axial points.

  15. [A landscape ecological approach for urban non-point source pollution control].

    PubMed

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has appeared with the accelerating development of urbanization. The particularity of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution and make it more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local remediation practices to control pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it is rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting existing landscape structures and/or adding new landscape elements to form a new landscape pattern, combining landscape planning and management by incorporating BMPs into the planning to improve urban landscape heterogeneity and to control urban non-point source pollution.

  16. X-ray Counterparts of Infrared Faint Radio Sources

    NASA Astrophysics Data System (ADS)

    Schartel, Norbert

    2011-10-01

    Infrared Faint Radio Sources (IFRS) are radio sources with extremely faint or even absent infrared emission in deep Spitzer Surveys. Models of their spectral energy distributions, the ratios of radio to infrared flux densities and their steep radio spectra strongly suggest that IFRS are AGN at high redshifts (2

  17. SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, S; Kaye, W; Jaworski, J

    2015-06-15

    Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30 keV and 3 MeV with energy resolution of <1% FWHM at 662 keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300 keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300 keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140 keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a "shadow" or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122 keV) point sources, image reconstruction is performed in real time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position-sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for various applications worldwide, including proton therapy imaging R&D.
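
    A minimal sketch of the coded-aperture reconstruction idea described above: the detector count distribution (the mask "shadow") is circularly cross-correlated with the mask pattern, and the correlation peak gives the source direction. The mask, count rates and source offset are invented for illustration.

        # Hedged sketch: coded-aperture backprojection via circular cross-correlation.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 11
        mask = rng.integers(0, 2, size=(n, n)).astype(float)   # open(1)/closed(0) elements

        # A point source offset by (2, -3) mask elements casts a shifted copy of the
        # mask onto the detector; add Poisson counting noise and a flat background.
        shift = (2, -3)
        shadow = np.roll(mask, shift, axis=(0, 1))
        counts = rng.poisson(50 * shadow + 5).astype(float)

        # Cross-correlation (FFT form); the peak lag recovers the source offset,
        # i.e. its direction within the field of view.
        corr = np.real(np.fft.ifft2(np.fft.fft2(counts) * np.conj(np.fft.fft2(mask))))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        offset = tuple(int(p) if p <= n // 2 else int(p) - n for p in peak)
        print("reconstructed source offset:", offset)   # expected (2, -3)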

  18. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of MRI magnet

    PubMed Central

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-01-01

    Purpose: Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm for performing a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. Methods: In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object's conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. Results: The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. Conclusions: The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction. PMID:24506649

  19. SU-E-T-149: Brachytherapy Patient Specific Quality Assurance for a HDR Vaginal Cylinder Case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbiere, J; Napoli, J; Ndlovu, A

    2015-06-15

    Purpose: Commonly, Ir-192 HDR treatment planning system commissioning is based only on a single absolute measurement of source activity supplemented by tabulated parameters for multiple factors, without independent verification that the planned distribution corresponds to the actual delivered dose. The purpose of this work is to present a methodology using Gafchromic film with a statistically valid calibration curve that can be used to validate clinical HDR vaginal cylinder cases by comparing the calculated plan dose distribution in a plane with the corresponding measured planar dose. Methods: A vaginal cylinder plan was created with the Oncentra treatment planning system. The 3D dose matrix was exported to a Varian Eclipse workstation for convenient extraction of a 2D coronal dose plane corresponding to the film position. The plan was delivered with a sheet of Gafchromic EBT3 film positioned 1 mm from the catheter using an Ir-192 Nucletron HDR source. The film was then digitized with an Epson 10000 XL color scanner. Film analysis was performed with the MatLab imaging toolbox. A density-to-dose calibration curve was created using the TG43 formalism for a single dwell position exposure at over 100 points for statistical accuracy. The plan and measured film dose planes were registered using a known dwell position relative to four film marks. The plan delivered 500 cGy to points 2 cm from the sources. Results: The distance to agreement of the 500 cGy isodose between the plan and the film measurement was 0.5 mm laterally but as much as 1.5 mm superiorly and inferiorly. The difference between the computed plan dose and the film measurement was calculated per pixel. The greatest errors, up to 50 cGy, occur near the apex. Conclusion: The methodology presented will be useful for implementing more comprehensive quality assurance to verify patient-specific dose distributions.

  20. nSTAT: Open-Source Neural Spike Train Analysis Toolbox for Matlab

    PubMed Central

    Cajigas, I.; Malik, W.Q.; Brown, E.N.

    2012-01-01

    Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point process-generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limits wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT, an open source neural spike train analysis toolbox for Matlab®. By adopting an Object-Oriented Programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and allows for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
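
    A minimal point-process GLM sketch of the kind of analysis nSTAT supports, written here in Python with a Poisson GLM rather than the toolbox's Matlab classes; the spike train, stimulus covariate and history term are simulated for illustration.

        # Hedged sketch: binned spike counts regressed on a stimulus covariate and
        # a one-bin spike-history term with a Poisson GLM.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n_bins, dt = 2000, 0.001                                     # 1 ms bins
        stimulus = np.sin(2 * np.pi * 2 * np.arange(n_bins) * dt)    # 2 Hz covariate

        # Simulate spikes from a conditional intensity lambda(t) = exp(b0 + b1 * stim).
        true_rate = np.exp(np.log(20) + 1.0 * stimulus) * dt         # expected counts per bin
        spikes = rng.poisson(true_rate)

        # Design matrix: constant, stimulus, and one-bin spike history.
        history = np.concatenate([[0], spikes[:-1]])
        X = sm.add_constant(np.column_stack([stimulus, history]))
        glm = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
        print(glm.params)   # recovered coefficients on the log-rate scale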

  1. Estimation of Phosphorus Emissions in the Upper Iguazu Basin (brazil) Using GIS and the More Model

    NASA Astrophysics Data System (ADS)

    Acosta Porras, E. A.; Kishi, R. T.; Fuchs, S.; Hilgert, S.

    2016-06-01

    Pollution emissions into the drainage basin have a direct impact on surface water quality. These emissions result from human activities and become pollution loads when they reach water bodies, as point or diffuse sources. Their pollution potential depends on the characteristics and quantity of the transported materials. The estimation of pollution loads can assist decision-making in basin management. Knowledge about the potential pollution sources allows for a prioritization of pollution control policies to achieve the desired water quality and consequently helps avoid problems such as eutrophication of water bodies. The research described in this study focuses on phosphorus emissions into river basins. The study area is the upper Iguazu basin, which lies in the northeast region of the State of Paraná, Brazil; it covers about 2,965 km2, and around 4 million inhabitants live concentrated on just 16% of its area. The MoRE (Modeling of Regionalized Emissions) model was used to estimate phosphorus emissions. MoRE uses empirical approaches to model processes in analytical units, can use spatially distributed parameters, and covers emissions from both point and non-point sources. In order to model the processes, the basin was divided into 152 analytical units with an average size of 20 km2. Available data were organized in a GIS environment, including layers of precipitation, a digital terrain model from a 1:10,000-scale map, and soils and land cover derived from remote sensing imagery, as well as point pollution discharges and statistical socio-economic data. The model shows that one of the main pollution sources in the upper Iguazu basin is domestic sewage, which enters the river as a point source (effluents of treatment stations) and/or as diffuse pollution caused by failures of sanitary sewer systems or clandestine sewer discharges, accounting for about 56% of the emissions. The second significant share of emissions comes from direct runoff and groundwater, responsible for 32% of total emissions. Finally, agricultural erosion and industrial pathways represent 12% of emissions. This study shows that MoRE is capable of producing valid emission calculations from a relatively reduced input data basis.

  2. A numerical experiment on light pollution from distant sources

    NASA Astrophysics Data System (ADS)

    Kocifaj, M.

    2011-08-01

    Realistically predicting the light pollution of the night-time sky over any location or measuring point on the ground is quite a difficult computational task. Light pollution of the local atmosphere is caused by stray light, light loss or reflection from artificially illuminated ground objects or surfaces such as streets, advertisement boards or building interiors. Thus it depends on the size, shape, spatial distribution, radiative pattern and spectral characteristics of many neighbouring light sources. The actual state of the atmospheric environment and the orography of the surrounding terrain are also relevant. All of these factors together influence the spectral sky radiance/luminance in a complex manner. Knowledge of the directional behaviour of light pollution is especially important for the correct interpretation of astronomical observations. From a mathematical point of view, the light noise or veil luminance of a specific sky element is given by a superposition of scattered light beams. Theoretical models that simulate light pollution typically take into account all ground-based light sources, thus imposing heavy demands on CPU and memory. As shown in this paper, the contribution of distant sources to light pollution can be essential under specific conditions of low turbidity and/or Garstang-like radiative patterns. To evaluate the convergence of the theoretical model, numerical experiments are performed for different light sources, spectral bands and atmospheric conditions. It is shown that in the worst case the integration limit is approximately 100 km, but it can be significantly shortened for light sources with cosine-like radiative patterns.

  3. High-energy neutrinos from FR0 radio galaxies?

    NASA Astrophysics Data System (ADS)

    Tavecchio, F.; Righi, C.; Capetti, A.; Grandi, P.; Ghisellini, G.

    2018-04-01

    The sources responsible for the emission of the high-energy (≳100 TeV) neutrinos detected by IceCube are still unknown. Among the possible candidates, active galactic nuclei with relativistic jets are often examined, since the outflowing plasma seems to offer the ideal environment to accelerate the required parent high-energy cosmic rays. The non-detection of single point sources or, almost equivalently, the absence in the IceCube events of multiplets originating from the same sky position constrains the cosmic density and the neutrino output of these sources, pointing to a numerous population of faint sources. Here we explore the possibility that FR0 radio galaxies, the population of compact sources recently identified in large radio and optical surveys and representing the bulk of the radio-loud AGN population, are suitable candidates for neutrino emission. Modelling the spectral energy distribution of an FR0 radio galaxy recently associated with a γ-ray source detected by the Large Area Telescope onboard Fermi, we derive the physical parameters of its jet, in particular the power carried by it. We consider the possible mechanisms of neutrino production, concluding that pγ reactions in the jet between protons and ambient radiation are too inefficient to sustain the required output. We propose an alternative scenario, in which protons, accelerated in the jet, escape from it and diffuse in the host galaxy, producing neutrinos as a result of pp scattering with the interstellar gas, in strict analogy with the processes taking place in star-forming galaxies.

  4. Measurements of scalar released from point sources in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.

    2017-04-01

    Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of the long tails of the probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982 J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.
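
    For orientation, the reflected Gaussian model mentioned above treats the wall as a mirror by adding an image source below the surface; a minimal sketch of the resulting mean-concentration profile, with an assumed source height and plume spread (not values from Fackrell and Robins), is:

      import numpy as np

      def reflected_gaussian(z, s_z, sigma, c0=1.0):
          """Mean concentration for an elevated source at height s_z with plume
          spread sigma, using an image source mirrored below the wall (z = 0)."""
          direct = np.exp(-(z - s_z) ** 2 / (2.0 * sigma ** 2))
          image = np.exp(-(z + s_z) ** 2 / (2.0 * sigma ** 2))
          return c0 * (direct + image)

      z = np.linspace(0.0, 1.0, 11)              # wall-normal distance in units of delta
      profile = reflected_gaussian(z, s_z=0.2, sigma=0.1)
      for zi, ci in zip(z, profile):
          print(f"z/delta = {zi:.1f}   C/Cmax = {ci / profile.max():.3f}")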

  5. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site, if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), with f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, where Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location information is thus a set of points at the station locations. However, for practical purposes, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the methodology of NS/FS classification to characterize 2-dimensional fault geometries and apply them to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the 2007 Noto-hanto earthquake, the 2008 Iwate-Miyagi earthquake, and the 2008 Wenchuan earthquake. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes with magnitude about 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
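
    The near-source probability quoted above is a simple logistic function of the peak vertical acceleration Za and peak horizontal velocity Hv; a direct sketch of the calculation is given below (the amplitude units are those defined in Yamada et al., 2007, which the abstract does not restate, and the example station values are hypothetical):

      import math

      def near_source_probability(Za, Hv):
          """Probability that a station lies in the near-source region
          (fault rupture distance < 10 km), using the discriminant quoted above.
          Za: peak vertical acceleration, Hv: peak horizontal velocity
          (units as in Yamada et al., 2007)."""
          f = 6.046 * math.log10(Za) + 7.885 * math.log10(Hv) - 27.091
          return 1.0 / (1.0 + math.exp(-f))

      # Hypothetical peak amplitudes for two stations
      for Za, Hv in [(500.0, 40.0), (20.0, 1.0)]:
          p = near_source_probability(Za, Hv)
          print(f"Za = {Za}, Hv = {Hv}  ->  P(near-source) = {p:.2f}")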

  6. Dynamic strain distribution of FRP plate under blast loading

    NASA Astrophysics Data System (ADS)

    Saburi, T.; Yoshida, M.; Kubota, S.

    2017-02-01

    The dynamic strain distribution of a fiber-reinforced plastic (FRP) plate under blast loading was investigated using a Digital Image Correlation (DIC) image analysis method. The test FRP plates were mounted parallel to each other on a steel frame. 50 g of composition C4 explosive was used as the blast loading source and placed at the center of the FRP plates. The dynamic behavior of the FRP plate under blast loading was observed by two high-speed video cameras. The two high-speed video image sequences were used to analyze the three-dimensional strain distribution of the FRP plate by means of the DIC method. A point strain profile extracted from the analyzed strain distribution data was compared with a strain profile observed directly with a strain gauge, and it was shown that the strain profile obtained under blast loading by the DIC method is quantitatively accurate.

  7. A Complete Public Archive for the Einstein Imaging Proportional Counter

    NASA Technical Reports Server (NTRS)

    Helfand, David J.

    1996-01-01

    Consistent with our proposal to the Astrophysics Data Program in 1992, we have completed the design, construction, documentation, and distribution of a flexible and complete archive of the data collected by the Einstein Imaging Proportional Counter. Along with the software and data delivered to the High Energy Astrophysics Science Archive Research Center at Goddard Space Flight Center, we have compiled and, where appropriate, published catalogs of point sources, soft sources, hard sources, extended sources, and transient flares detected in the database, along with extensive analyses of the instrument's backgrounds and other anomalies. We include in this document a brief summary of the archive's functionality, a description of the scientific catalogs and other results, a bibliography of publications supported in whole or in part under this contract, and a list of personnel whose pre- and post-doctoral education included participation in this project.

  8. Aureole radiance field about a source in a scattering-absorbing medium.

    PubMed

    Zachor, A S

    1978-06-15

    A technique is described for computing the aureole radiance field about a point source in a medium that absorbs and scatters according to an arbitrary phase function. When applied to an isotropic source in a homogeneous medium, the method uses a double-integral transform which is evaluated recursively to obtain the aureole radiances contributed by successive scattering orders, as in the Neumann solution of the radiative transfer equation. The normalized total radiance field distribution and the variation of flux with field of view and range are given for three wavelengths in the UV and one in the visible, for a sea-level model atmosphere assumed to scatter according to a composite of the Rayleigh and modified Henyey-Greenstein phase functions. These results have application to the detection and measurement of uncollimated UV and visible sources at short ranges in the lower atmosphere.
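
    The successive-scattering-order construction referred to above corresponds to the Neumann series solution of the radiative transfer equation; schematically (the notation here is assumed, not taken from the paper),

      L(\mathbf{r},\hat{\Omega}) = \sum_{n=1}^{\infty} L_n(\mathbf{r},\hat{\Omega}),
      \qquad
      L_{n+1}(\mathbf{r},\hat{\Omega}) = \int_{0}^{\infty} \sigma_s(\mathbf{r}-s\hat{\Omega})\,
      e^{-\tau(s)} \left[ \frac{1}{4\pi} \int_{4\pi} p(\hat{\Omega}\cdot\hat{\Omega}')\,
      L_n(\mathbf{r}-s\hat{\Omega},\hat{\Omega}')\, \mathrm{d}\Omega' \right] \mathrm{d}s,

    where L_1 is the singly scattered radiance produced directly by the point source, sigma_s the scattering coefficient, p the phase function, and tau(s) the extinction optical depth along the path; each scattering order is obtained recursively from the previous one, which is the structure exploited by the double-integral transform.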

  9. 4 years of PM10 pollution in Poland - observations and modelling

    NASA Astrophysics Data System (ADS)

    Durka, Pawel; Struzewska, Joanna; Kaminski, Jacek W.

    2017-04-01

    Poor air quality is a health issue in Poland, especially during winter. In the central and northern parts of the country, the primary source is low-level domestic emissions. In larger cities and agglomerations, traffic emissions are also an issue. Quantification of the contribution of transboundary pollution sources is still an open issue. Analyses of 60 episodes with high PM10 concentrations for the period 2013-2016 were carried out under a contract from the Chief Inspectorate of Environmental Protection in Poland. Analyses of synoptic conditions and calculations of back trajectories were undertaken. The tropospheric chemistry model GEM-AQ was run at 10 km resolution to calculate contributions from surface, line and point sources. We will present trajectories for different types of episodes, and maps with contributions from specific emission sources and transboundary pollution. The mean distribution of PM10 concentrations during episodes will also be shown.

  10. OMCat: Catalogue of Serendipitous Sources Detected with the XMM-Newton Optical Monitor

    NASA Technical Reports Server (NTRS)

    Kuntz, K. D.; Harrus, Ilana; McGlynn, Thomas A.; Mushotsky, Richard F.; Snowden, Steven L.

    2007-01-01

    The Optical Monitor Catalogue of serendipitous sources (OMCat) contains entries for every source detected in the publicly available XMM-Newton Optical Monitor (OM) images taken in either the imaging or "fast" modes. Since the OM records data simultaneously with the X-ray telescopes on XMM-Newton, it typically produces images in one or more near-UV/optical bands for every pointing of the observatory. As of the beginning of 2006, the public archive covered roughly 0.5% of the sky in 2950 fields. The OMCat is not dominated by sources previously undetected at other wavelengths; the bulk of objects have optical counterparts. However, the OMCat can be used to extend optical or X-ray spectral energy distributions of known objects into the ultraviolet, to study at higher angular resolution objects detected with GALEX, or to find high-Galactic-latitude objects of interest for UV spectroscopy.

  11. Numerical Simulation of Dispersion from Urban Greenhouse Gas Sources

    NASA Astrophysics Data System (ADS)

    Nottrott, Anders; Tan, Sze; He, Yonggang; Winkler, Renato

    2017-04-01

    Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban-scale emissions estimates known as the Grey-Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale-separation gap between source-level dynamics, local measurements, and urban-scale emissions inventories. CFD has the capability to represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is open-source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low-cost computer-aided drawing and GIS tools, OpenFOAM generates a detailed, 3D representation of urban wind fields. OpenFOAM was applied to model scalar emissions from various components of the natural gas distribution system, to study the impact of urban meteorology on mobile greenhouse gas measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in vertical dispersion of plumes, due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments. The Boussinesq approximation was applied to investigate the effects of canopy-layer temperature gradients and convection on sensor footprints.

  12. Probing the gamma-ray emission from HESS J1834-087 using H.E.S.S. and Fermi LAT observations

    NASA Astrophysics Data System (ADS)

    H. E. S. S. Collaboration; Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; Akhperjanian, A. G.; Angüner, E.; Anton, G.; Backes, M.; Balenderan, S.; Balzer, A.; Barnacka, A.; Becherini, Y.; Becker Tjus, J.; Bernlöhr, K.; Birsin, E.; Bissaldi, E.; Biteau, J.; Böttcher, M.; Boisson, C.; Bolmont, J.; Bordas, P.; Brucker, J.; Brun, F.; Brun, P.; Bulik, T.; Carrigan, S.; Casanova, S.; Chadwick, P. M.; Chalme-Calvet, R.; Chaves, R. C. G.; Cheesebrough, A.; Chrétien, M.; Colafrancesco, S.; Cologna, G.; Conrad, J.; Couturier, C.; Cui, Y.; Dalton, M.; Daniel, M. K.; Davids, I. D.; Degrange, B.; Deil, C.; deWilt, P.; Dickinson, H. J.; Djannati-Ataï, A.; Domainko, W.; O'C. Drury, L.; Dubus, G.; Dutson, K.; Dyks, J.; Dyrda, M.; Edwards, T.; Egberts, K.; Eger, P.; Espigat, P.; Farnier, C.; Fegan, S.; Feinstein, F.; Fernandes, M. V.; Fernandez, D.; Fiasson, A.; Fontaine, G.; Förster, A.; Füßling, M.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Giavitto, G.; Giebels, B.; Glicenstein, J. F.; Grondin, M.-H.; Grudzińska, M.; Häffner, S.; Hahn, J.; Harris, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hervet, O.; Hillert, A.; Hinton, J. A.; Hofmann, W.; Hofverberg, P.; Holler, M.; Horns, D.; Jacholkowska, A.; Jahn, C.; Jamrozy, M.; Janiak, M.; Jankowsky, F.; Jung, I.; Kastendieck, M. A.; Katarzyński, K.; Katz, U.; Kaufmann, S.; Khélifi, B.; Kieffer, M.; Klepser, S.; Klochkov, D.; Kluźniak, W.; Kneiske, T.; Kolitzus, D.; Komin, Nu.; Kosack, K.; Krakau, S.; Krayzel, F.; Krüger, P. P.; Laffon, H.; Lamanna, G.; Lefaucheur, J.; Lemière, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Lohse, T.; Lopatin, A.; Lu, C.-C.; Marandon, V.; Marcowith, A.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; McComb, T. J. L.; Méhault, J.; Meintjes, P. J.; Menzler, U.; Meyer, M.; Moderski, R.; Mohamed, M.; Moulin, E.; Murach, T.; Naumann, C. L.; de Naurois, M.; Niemiec, J.; Nolan, S. J.; Oakes, L.; Odaka, H.; Ohm, S.; de Oña Wilhelmi, E.; Opitz, B.; Ostrowski, M.; Oya, I.; Panter, M.; Parsons, R. D.; Paz Arribas, M.; Pekeur, N. W.; Pelletier, G.; Perez, J.; Petrucci, P.-O.; Peyaud, B.; Pita, S.; Poon, H.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Raue, M.; Reichardt, I.; Reimer, A.; Reimer, O.; Renaud, M.; de los Reyes, R.; Rieger, F.; Rob, L.; Romoli, C.; Rosier-Lees, S.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Sanchez, D. A.; Santangelo, A.; Schlickeiser, R.; Schüssler, F.; Schulz, A.; Schwanke, U.; Schwarzburg, S.; Schwemmer, S.; Sol, H.; Spengler, G.; Spies, F.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Stinzing, F.; Stycz, K.; Sushch, I.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tluczykont, M.; Trichard, C.; Valerius, K.; van Eldik, C.; van Soelen, B.; Vasileiadis, G.; Venter, C.; Viana, A.; Vincent, P.; Völk, H. J.; Volpe, F.; Vorster, M.; Vuillaume, T.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; Ward, M.; Weidinger, M.; Weitzel, Q.; White, R.; Wierzcholska, A.; Willmann, P.; Wörnlein, A.; Wouters, D.; Yang, R.; Zabalza, V.; Zacharias, M.; Zdziarski, A. A.; Zech, A.; Zechlin, H.-S.

    2015-02-01

    Aims: Previous observations with the High Energy Stereoscopic System (H.E.S.S.) have revealed an extended very-high-energy (VHE; E > 100 GeV) γ-ray source, HESS J1834-087, coincident with the supernova remnant (SNR) W41. The origin of the γ-ray emission was investigated in more detail with the H.E.S.S. array and the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope. Methods: The γ-ray data provided by 61 h of observations with H.E.S.S. and four years with the Fermi LAT were analyzed, covering over five decades in energy from 1.8 GeV up to 30 TeV. The morphology and spectrum of the TeV and GeV sources were studied and multiwavelength data were used to investigate the origin of the γ-ray emission toward W41. Results: The TeV source can be modeled with a sum of two components: one point-like and one significantly extended (σTeV = 0.17° ± 0.01°), both centered on SNR W41 and exhibiting spectra described by a power law with index ΓTeV ≃ 2.6. The GeV source detected with Fermi LAT is extended (σGeV = 0.15° ± 0.03°) and morphologically matches the VHE emission. Its spectrum can be described by a power-law model with an index ΓGeV = 2.15 ± 0.12 and smoothly joins the spectrum of the whole TeV source. A break appears in the γ-ray spectra around 100 GeV. No pulsations were found in the GeV range. Conclusions: Two main scenarios are proposed to explain the observed emission: a pulsar wind nebula (PWN) or the interaction of SNR W41 with an associated molecular cloud. X-ray observations suggest the presence of a point-like source (a pulsar candidate) near the center of the remnant and nonthermal X-ray diffuse emission that could arise from the possibly associated PWN. The PWN scenario is supported by the compatible positions of the TeV and GeV sources with the putative pulsar. However, the spectral energy distribution from radio to γ-rays is reproduced by a one-zone leptonic model only if an excess of low-energy electrons is injected following a Maxwellian distribution by a pulsar with a high spin-down power (>10^37 erg s^-1). This additional low-energy component is not needed if we consider that the point-like TeV source is unrelated to the extended GeV and TeV sources. The interacting SNR scenario is supported by the spatial coincidence between the γ-ray sources, the detection of OH (1720 MHz) maser lines, and the hadronic modeling.

  13. Probing the gamma-ray emission from HESS J1834–087 using H.E.S.S. and FermiLAT observations

    DOE PAGES

    Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; ...

    2015-01-20

    Aims. Previous observations with the High Energy Stereoscopic System (H.E.S.S.) have revealed an extended very-high-energy (VHE; E > 100 GeV) γ-ray source, HESS J1834-087, coincident with the supernova remnant (SNR) W41. The origin of the γ-ray emission was investigated in more detail with the H.E.S.S. array and the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope. Methods. For this research, the γ-ray data provided by 61 h of observations with H.E.S.S. and four years with the Fermi LAT were analyzed, covering over five decades in energy from 1.8 GeV up to 30 TeV. The morphology and spectrum of the TeV and GeV sources were studied and multiwavelength data were used to investigate the origin of the γ-ray emission toward W41. Results. The TeV source can be modeled with a sum of two components: one point-like and one significantly extended (σTeV = 0.17° ± 0.01°), both centered on SNR W41 and exhibiting spectra described by a power law with index ΓTeV ≃ 2.6. The GeV source detected with Fermi LAT is extended (σGeV = 0.15° ± 0.03°) and morphologically matches the VHE emission. Its spectrum can be described by a power-law model with an index ΓGeV = 2.15 ± 0.12 and smoothly joins the spectrum of the whole TeV source. A break appears in the γ-ray spectra around 100 GeV. No pulsations were found in the GeV range. Conclusions. Two main scenarios are proposed to explain the observed emission: a pulsar wind nebula (PWN) or the interaction of SNR W41 with an associated molecular cloud. X-ray observations suggest the presence of a point-like source (a pulsar candidate) near the center of the remnant and nonthermal X-ray diffuse emission that could arise from the possibly associated PWN. The PWN scenario is supported by the compatible positions of the TeV and GeV sources with the putative pulsar. However, the spectral energy distribution from radio to γ-rays is reproduced by a one-zone leptonic model only if an excess of low-energy electrons is injected following a Maxwellian distribution by a pulsar with a high spin-down power (>10^37 erg s^-1). This additional low-energy component is not needed if we consider that the point-like TeV source is unrelated to the extended GeV and TeV sources. Finally, the interacting SNR scenario is supported by the spatial coincidence between the γ-ray sources, the detection of OH (1720 MHz) maser lines, and the hadronic modeling.

  14. Examining the Fermi-LAT third source catalog in search of dark matter subhalos

    DOE PAGES

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    2015-12-17

    Dark matter annihilations taking place in nearby subhalos could appear as gamma-ray sources without detectable counterparts at other wavelengths. In this study, we consider the collection of unassociated gamma-ray sources reported by the Fermi Collaboration in an effort to identify the most promising dark matter subhalo candidates. While we identify 24 bright, high-latitude, non-variable sources with spectra that are consistent with being generated by the annihilations of ~20–70 GeV dark matter particles (assuming annihilations to bb̄), it is not possible at this time to distinguish these sources from radio-faint gamma-ray pulsars. Deeper multi-wavelength observations will be essential to clarify the nature of these sources. It is notable that we do not find any such sources that are well fit by dark matter particles heavier than ~100 GeV. We also study the angular distribution of the gamma-rays from this set of subhalo candidates, and find that the source 3FGL J2212.5+0703 prefers a spatially extended profile (of width ~0.15°) over that of a point source, with a significance of 4.2σ (3.6σ after trials factor). Although not yet definitive, this bright and high-latitude gamma-ray source is well fit as a nearby subhalo of mχ ≃ 20–50 GeV dark matter particles (annihilating to bb̄) and merits further multi-wavelength investigation. As a result, based on the subhalo distribution predicted by numerical simulations, we derive constraints on the dark matter annihilation cross section that are competitive with those resulting from gamma-ray observations of dwarf spheroidal galaxies, the Galactic Center, and the extragalactic gamma-ray background.

  15. Indicator microbes correlate with pathogenic bacteria, yeasts and helminthes in sand at a subtropical recreational beach site.

    PubMed

    Shah, A H; Abdelzaher, A M; Phillips, M; Hernandez, R; Solo-Gabriele, H M; Kish, J; Scorzetti, G; Fell, J W; Diaz, M R; Scott, T M; Lukasik, J; Harwood, V J; McQuaig, S; Sinigalliano, C D; Gidley, M L; Wanless, D; Ager, A; Lui, J; Stewart, J R; Plano, L R W; Fleming, L E

    2011-06-01

    Research into the relationship between pathogens, faecal indicator microbes and environmental factors in beach sand has been limited, yet it is vital to understanding the microbial relationship between sand and the water column and to improving criteria for better human health protection at beaches. The objectives of this study were to evaluate the presence and distribution of pathogens in various zones of beach sand (subtidal, intertidal and supratidal) and to assess their relationship with environmental parameters and indicator microbes at a non-point source subtropical marine beach. In this exploratory study in subtropical Miami (Florida, USA), beach sand samples were collected and analysed over the course of 6 days for several pathogens, microbial source tracking markers and indicator microbes. An inverse correlation between moisture content and most indicator microbes was found. Significant associations were identified between some indicator microbes and pathogens (such as nematode larvae and yeasts in the genus Candida), which are from classes of microbes that are rarely evaluated in the context of recreational beach use. Results indicate that indicator microbes may predict the presence of some of the pathogens, in particular helminthes, yeasts and the bacterial pathogen Staphylococcus aureus including methicillin-resistant forms. Indicator microbes may thus be useful for monitoring beach sand and water quality at non-point source beaches. The presence of both indicator microbes and pathogens in beach sand provides one possible explanation for human health effects reported at non-point source beaches. © 2011 The Authors. Journal of Applied Microbiology © 2011 The Society for Applied Microbiology.

  16. Modeling of Grain Size Distribution of Tsunami Sand Deposits in V-shaped Valley of Numanohama During the 2011 Tohoku Tsunami

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Satake, K.; Goto, T.; Takahashi, T.

    2016-12-01

    Estimating tsunami amplitude from tsunami sand deposits has been a challenge. The grain size distribution of a tsunami sand deposit may correlate with the tsunami inundation process, and further with its source characteristics. In order to test this hypothesis, we need a tsunami sediment transport model that can accurately estimate the grain size distribution of tsunami deposits. Here, we build and validate a tsunami sediment transport model that can simulate grain size distribution. Our numerical model has three layers: a suspended-load layer, an active bed layer, and a parent bed layer. The two bed layers contain information about the grain size distribution. The numerical model can handle a wide range of grain sizes, from 0.063 mm (4 ϕ) to 5.657 mm (-2.5 ϕ). We apply the numerical model to simulate the sedimentation process during the 2011 Tohoku earthquake in Numanohama, Iwate prefecture, Japan. The grain size distributions at 15 sample points along a 900 m transect from the beach are used to validate the tsunami sediment transport model. The tsunami deposits are dominated by coarse sand with a diameter of 0.5 - 1 mm and their thickness is up to 25 cm. Our tsunami model reproduces well the observed tsunami run-ups, which ranged from 16 to 34 m along the steep valley in Numanohama. The shapes of the simulated grain size distributions at many sample points located within 300 m of the shoreline are similar to the observations. The differences between the observed and simulated peaks of the grain size distributions are less than 1 ϕ. Our results also show that the simulated sand thickness distribution along the transect is consistent with the observation.
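
    The grain sizes quoted above are on the Krumbein phi scale, ϕ = -log2(d / 1 mm); a small conversion check using the values from the abstract:

      import math

      def to_phi(d_mm):
          """Krumbein phi scale: phi = -log2(d / 1 mm)."""
          return -math.log2(d_mm)

      for d in (0.063, 0.5, 1.0, 5.657):
          print(f"d = {d} mm  ->  phi = {to_phi(d):+.2f}")   # 0.063 mm -> +3.99, 5.657 mm -> -2.50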

  17. Vision in the deep sea.

    PubMed

    Warrant, Eric J; Locket, N Adam

    2004-08-01

    The deep sea is the largest habitat on earth. Its three great faunal environments--the twilight mesopelagic zone, the dark bathypelagic zone and the vast flat expanses of the benthic habitat--are home to a rich fauna of vertebrates and invertebrates. In the mesopelagic zone (150-1000 m), the down-welling daylight creates an extended scene that becomes increasingly dimmer and bluer with depth. The available daylight also originates increasingly from vertically above, and bioluminescent point-source flashes, well contrasted against the dim background daylight, become increasingly visible. In the bathypelagic zone below 1000 m no daylight remains, and the scene becomes entirely dominated by point-like bioluminescence. This changing nature of visual scenes with depth--from extended source to point source--has had a profound effect on the designs of deep-sea eyes, both optically and neurally, a fact that until recently was not fully appreciated. Recent measurements of the sensitivity and spatial resolution of deep-sea eyes--particularly from the camera eyes of fishes and cephalopods and the compound eyes of crustaceans--reveal that ocular designs are well matched to the nature of the visual scene at any given depth. This match between eye design and visual scene is the subject of this review. The greatest variation in eye design is found in the mesopelagic zone, where dim down-welling daylight and bio-luminescent point sources may be visible simultaneously. Some mesopelagic eyes rely on spatial and temporal summation to increase sensitivity to a dim extended scene, while others sacrifice this sensitivity to localise pinpoints of bright bioluminescence. Yet other eyes have retinal regions separately specialised for each type of light. In the bathypelagic zone, eyes generally get smaller and therefore less sensitive to point sources with increasing depth. In fishes, this insensitivity, combined with surprisingly high spatial resolution, is very well adapted to the detection and localisation of point-source bioluminescence at ecologically meaningful distances. At all depths, the eyes of animals active on and over the nutrient-rich sea floor are generally larger than the eyes of pelagic species. In fishes, the retinal ganglion cells are also frequently arranged in a horizontal visual streak, an adaptation for viewing the wide flat horizon of the sea floor, and all animals living there. These and many other aspects of light and vision in the deep sea are reviewed in support of the following conclusion: it is not only the intensity of light at different depths, but also its distribution in space, which has been a major force in the evolution of deep-sea vision.

  18. A Method Based on Wavelet Transforms for Source Detection in Photon-counting Detector Images. II. Application to ROSAT PSPC Images

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1997-07-01

    We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector "ribs," strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
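
    As a rough illustration of wavelet-style point-source detection (a generic sketch, not the algorithm of the companion paper), the snippet below correlates a simulated photon-counting image with a zero-sum Mexican-hat kernel and flags pixels whose wavelet coefficient exceeds a significance threshold estimated from the fluctuation level of the map itself; the image, kernel scale, and threshold are all invented for the example:

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)

      # Simulated counts image: flat Poisson background plus one faint compact source
      img = rng.poisson(0.5, size=(128, 128)).astype(float)
      img[63:66, 63:66] += 3.0                     # ~27 extra source photons in a 3x3 patch

      def mexican_hat(scale, half_size=15):
          """Zero-sum 2D Mexican-hat (Laplacian-of-Gaussian style) kernel."""
          y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
          r2 = (x**2 + y**2) / (2.0 * scale**2)
          k = (1.0 - r2) * np.exp(-r2)
          return k - k.mean()

      w = ndimage.convolve(img, mexican_hat(scale=2.0), mode="reflect")

      threshold = 5.0 * w.std()                    # crude significance threshold
      peaks = np.argwhere(w > threshold)
      print(f"candidate source pixels above threshold: {len(peaks)}")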

  19. Changes in the zero-point energy of the protons as the source of the binding energy of water to A-phase DNA.

    PubMed

    Reiter, G F; Senesi, R; Mayers, J

    2010-10-01

    The measured changes in the zero-point kinetic energy of the protons are entirely responsible for the binding energy of water molecules to A phase DNA at the concentration of 6  water molecules/base pair. The changes in kinetic energy can be expected to be a significant contribution to the energy balance in intracellular biological processes and the properties of nano-confined water. The shape of the momentum distribution in the dehydrated A phase is consistent with coherent delocalization of some of the protons in a double well potential, with a separation of the wells of 0.2 Å.

  20. ON THE CONNECTION OF THE APPARENT PROPER MOTION AND THE VLBI STRUCTURE OF COMPACT RADIO SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moor, A.; Frey, S.; Lambert, S. B.

    2011-06-15

    Many of the compact extragalactic radio sources that are used as fiducial points to define the celestial reference frame are known to have proper motions detectable with long-term geodetic/astrometric very long baseline interferometry (VLBI) measurements. These changes can be as high as several hundred microarcseconds per year for certain objects. When imaged with VLBI at milliarcsecond (mas) angular resolution, these sources (radio-loud active galactic nuclei) typically show structures dominated by a compact, often unresolved 'core' and a one-sided 'jet'. The positional instability of compact radio sources is believed to be connected with changes in their brightness distribution structure. For the first time, we test this assumption in a statistical sense on a large sample rather than on only individual objects. We investigate a sample of 62 radio sources for which reliable long-term time series of astrometric positions as well as detailed 8 GHz VLBI brightness distribution models are available. We compare the characteristic direction of their extended jet structure and the direction of their apparent proper motion. We present our data and analysis method, and conclude that there is indeed a correlation between the two characteristic directions. However, there are cases where the ~1-10 mas scale VLBI jet directions are significantly misaligned with respect to the apparent proper motion direction.

  1. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Scott E., E-mail: sedavids@utmb.edu

    Purpose: A dose calculation tool that combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, reported previously, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was developed specifically to provide a singular methodology for independently assessing treatment plan dose distributions from the clinical institutions participating in National Cancer Institute trials.

  2. Evaluation of the levels of alcohol sulfates and ethoxysulfates in marine sediments near wastewater discharge points along the coast of Tenerife Island.

    PubMed

    Fernández-Ramos, C; Ballesteros, O; Zafra-Gómez, A; Camino-Sánchez, F J; Blanc, R; Navalón, A; Pérez-Trujillo, J P; Vílchez, J L

    2014-02-15

    Alcohol sulfates (AS) and alcohol ethoxysulfates (AES) are High Production Volume, 'down-the-drain' chemicals used globally in detergent and personal care products, resulting in low levels ultimately released to the environment via wastewater treatment plant effluents. They have a strong affinity for sorption to sediments. Almost 50% of the surface area of Tenerife Island is environmentally protected. Therefore, determination of the concentration levels of AS/AES in marine sediments near wastewater discharge points along the coast of the island is of interest. These data were obtained after pressurized liquid extraction and liquid chromatography-tandem mass spectrometry analysis. Short chains of AES, and especially of AS, dominated the homologue distribution. Principal Component Analysis was used. The results showed that the sources of AS and AES were the same and that both compounds exhibit similar behavior. Three different patterns in the distribution of homologues and ethoxymers were found. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Evaluation of the image quality of telescopes using the star test

    NASA Astrophysics Data System (ADS)

    Vazquez y Monteil, Sergio; Salazar Romero, Marcos A.; Gale, David M.

    2004-10-01

    The Point Spread Function (PSF) or star test is one of the main criteria to be considered in assessing the quality of the image formed by a telescope. In a real system the distribution of irradiance in the image of a point source is given by the PSF, a function which is highly sensitive to aberrations. The PSF of a telescope may be determined by measuring the intensity distribution in the image of a star. Alternatively, if we already know the aberrations present in the optical system, then we may use diffraction theory to calculate the function. In this paper we propose a method for determining the wavefront aberrations from the PSF, using Genetic Algorithms to perform an optimization process starting from the PSF instead of the more traditional method of adjusting an aberration polynomial. We show that this method of phase recovery is immune to noise-induced errors arising during image acquisition and registration. Some practical results are shown.
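
    A compressed sketch of the idea is given below: two aberration coefficients (defocus and a coma-like term, chosen only for illustration) are recovered from a noiseless synthetic PSF by a rudimentary genetic algorithm; the grid size, basis terms, and GA settings are assumptions of the example, not the authors' implementation:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 64
      y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
      r2 = x**2 + y**2
      pupil = (r2 <= 1.0).astype(float)

      # Two simple aberration basis terms (defocus and an x-coma-like term)
      basis = np.array([2.0 * r2 - 1.0, x * (3.0 * r2 - 2.0)])

      def psf(coeffs):
          phase = np.tensordot(coeffs, basis, axes=1)
          field = pupil * np.exp(1j * phase)
          return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

      true_coeffs = np.array([0.8, -0.5])
      target = psf(true_coeffs)                       # stands in for the measured star image

      def fitness(coeffs):
          return -np.sum((psf(coeffs) - target)**2)

      # Rudimentary GA: tournament selection, blend crossover, Gaussian mutation, elitism
      pop = rng.uniform(-2.0, 2.0, size=(40, 2))
      for generation in range(60):
          scores = np.array([fitness(c) for c in pop])
          new_pop = [pop[np.argmax(scores)]]
          while len(new_pop) < len(pop):
              i, j = rng.integers(0, len(pop), 2)
              a = pop[i] if scores[i] > scores[j] else pop[j]
              i, j = rng.integers(0, len(pop), 2)
              b = pop[i] if scores[i] > scores[j] else pop[j]
              w = rng.uniform(size=2)
              new_pop.append(w * a + (1.0 - w) * b + rng.normal(0.0, 0.05, 2))
          pop = np.array(new_pop)

      best = pop[np.argmax([fitness(c) for c in pop])]
      print("recovered coefficients:", np.round(best, 2), " true:", true_coeffs)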

  4. Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology

    NASA Astrophysics Data System (ADS)

    Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu

    2013-08-01

    From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are attracting attention in distribution systems. Additionally, all-electric apartment houses and residences such as the DC smart house have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, supply-demand balancing of the power system becomes problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for optimal operation of a smart grid to minimize the interconnection-point power flow fluctuations. To achieve the proposed optimal operation, we use distributed controllable loads such as batteries and heat pumps. By minimizing the interconnection-point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, heat pump, battery, solar collector, and load. In order to verify the effectiveness of the proposed system, MATLAB is used for the simulations.
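
    The interconnection-point flattening can be illustrated with a toy battery dispatch: choose a charge/discharge profile that minimizes the variance of the grid exchange (load minus PV plus battery charging) subject to simple power and energy limits. The hourly profiles, limits, and the generic optimizer below are invented for the example and are not the controller of the paper:

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical hourly profiles over one day (kW); values are illustrative only
      load = np.array([2, 2, 2, 2, 3, 4, 5, 5, 4, 3, 3, 3,
                       3, 3, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2], dtype=float)
      pv   = np.array([0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 6,
                       6, 5, 4, 3, 2, 1, 0, 0, 0, 0, 0, 0], dtype=float)

      p_max, capacity = 3.0, 10.0      # battery power limit (kW) and energy capacity (kWh)

      def grid_flow(batt):
          # positive batt = charging; exchange at the interconnection point
          return load - pv + batt

      def objective(batt):
          g = grid_flow(batt)
          return np.sum((g - g.mean())**2)          # penalize power-flow fluctuations

      cons = [
          {"type": "ineq", "fun": lambda b: np.cumsum(b)},             # state of charge >= 0
          {"type": "ineq", "fun": lambda b: capacity - np.cumsum(b)},  # state of charge <= capacity
      ]
      res = minimize(objective, np.zeros(24), bounds=[(-p_max, p_max)] * 24,
                     constraints=cons, method="SLSQP")

      print("std of grid flow without battery:", round(grid_flow(np.zeros(24)).std(), 2), "kW")
      print("std of grid flow with battery:   ", round(grid_flow(res.x).std(), 2), "kW")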

  5. Uncertainty of exploitation estimates made from tag returns

    USGS Publications Warehouse

    Miranda, L.E.; Brock, R.E.; Dorr, B.S.

    2002-01-01

    Over 6,000 crappies Pomoxis spp. were tagged in five water bodies to estimate exploitation rates by anglers. Exploitation rates were computed as the percentage of tags returned after adjustment for three sources of uncertainty: postrelease mortality due to the tagging process, tag loss, and the reporting rate of tagged fish. Confidence intervals around exploitation rates were estimated by resampling from the probability distributions of tagging mortality, tag loss, and reporting rate. Estimates of exploitation rates ranged from 17% to 54% among the five study systems. Uncertainty around estimates of tagging mortality, tag loss, and reporting resulted in 90% confidence intervals around the median exploitation rate as narrow as 15 percentage points and as broad as 46 percentage points. The greatest source of estimation error was uncertainty about tag reporting. Because the large investments required by tagging and reward operations produce imprecise estimates of the exploitation rate, it may be worth considering other approaches to estimating it or simply circumventing the exploitation question altogether.
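
    A minimal sketch of the adjust-and-resample idea follows; the tag counts and the beta distributions standing in for the uncertainty in tagging mortality, tag loss, and reporting rate are illustrative placeholders, not the study's data:

      import numpy as np

      rng = np.random.default_rng(42)
      tagged, returned = 1200, 260            # hypothetical tag counts for one water body

      def exploitation_samples(n=10000):
          # Uncertain adjustment factors drawn from illustrative distributions
          tagging_mortality = rng.beta(2, 18, n)    # ~10% post-release mortality
          tag_loss          = rng.beta(2, 38, n)    # ~5% tag shedding
          reporting_rate    = rng.beta(12, 8, n)    # ~60% of recaptured tags reported
          tags_at_large = tagged * (1 - tagging_mortality) * (1 - tag_loss)
          return returned / (tags_at_large * reporting_rate)

      u = exploitation_samples()
      lo, med, hi = np.percentile(u, [5, 50, 95])
      print(f"exploitation rate: median {med:.1%}, 90% interval {lo:.1%} - {hi:.1%}")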

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Getman, Konstantin V.; Broos, Patrick S.; Feigelson, Eric D.

    The Star Formation in Nearby Clouds (SFiNCs) project is aimed at providing a detailed study of the young stellar populations and of star cluster formation in 22 nearby star-forming regions (SFRs) for comparison with our earlier MYStIX survey of richer, more distant clusters. As a foundation for the SFiNCs science studies, homogeneous analyses of the archival Chandra X-ray and Spitzer mid-infrared SFiNCs data are described here, and the resulting catalogs of over 15,300 X-ray and over 1,630,000 mid-infrared point sources are presented. On the basis of their X-ray/infrared properties and spatial distributions, nearly 8500 point sources have been identified as probable young stellar members of the SFiNCs regions. Compared to the existing X-ray/mid-infrared publications, the SFiNCs member list increases the census of YSO members by 6%–200% for individual SFRs and by 40% for the merged sample of all 22 SFiNCs SFRs.

  7. Aerosol concentration and size distribution measured below, in, and above cloud from the DOE G-1 during VOCALS-REx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleinman L. I.; Daum, P. H.; Lee, Y.-N.

    2012-01-04

    During the VOCALS Regional Experiment, the DOE G-1 aircraft was used to sample a varying aerosol environment pertinent to the properties of stratocumulus clouds over a longitude band extending 800 km west from the Chilean coast at Arica. Trace gas and aerosol measurements are presented as a function of longitude, altitude, and dew point in this study. Spatial distributions are consistent with an upper-atmospheric source for O3 and South American coastal sources for marine boundary layer (MBL) CO and aerosol, most of which is acidic sulfate. Pollutant layers in the free troposphere (FT) can be a result of emissions to the north in Peru or long-range transport from the west. At a given altitude in the FT (up to 3 km), the dew point varies by 40 C, with dry air descending from the upper atmosphere and moist air having a boundary layer (BL) contribution. Ascent of BL air to a cold high altitude results in the condensation and precipitation removal of all but a few percent of BL water, along with aerosol that served as CCN. Thus, aerosol volume decreases with dew point in the FT. Aerosol size spectra have a bimodal structure in the MBL and an intermediate-diameter unimodal distribution in the FT. Comparing cloud droplet number concentration (CDNC) and pre-cloud aerosol (Dp > 100 nm) gives a linear relation up to a number concentration of ~150 cm^-3, followed by a less than proportional increase in CDNC at higher aerosol number concentration. A number balance between below-cloud aerosol and cloud droplets indicates that ~25% of aerosol with Dp > 100 nm are interstitial (not activated). A direct comparison of pre-cloud and in-cloud aerosol yields a higher estimate. Artifacts in the measurement of interstitial aerosol due to droplet shatter and evaporation are discussed. Within each of 102 constant-altitude cloud transects, CDNC and interstitial aerosol were anti-correlated. An examination of one cloud as a case study shows that the interstitial aerosol appears to have a background, upon which is superimposed a high-frequency signal that contains the anti-correlation. The anti-correlation is a possible source of information on particle activation or evaporation.

  8. Aerosol concentration and size distribution measured below, in, and above cloud from the DOE G-1 during VOCALS-REx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleinman, L.I.; Daum, P. H.; Lee, Y.-N.

    2011-06-21

    During the VOCALS Regional Experiment, the DOE G-1 aircraft was used to sample a varying aerosol environment pertinent to the properties of stratocumulus clouds over a longitude band extending 800 km west from the Chilean coast at Arica. Trace gas and aerosol measurements are presented as a function of longitude, altitude, and dew point in this study. Spatial distributions are consistent with an upper-atmospheric source for O3 and South American coastal sources for marine boundary layer (MBL) CO and aerosol, most of which is acidic sulfate, in agreement with the dominant pollution source being SO2 from Cu smelters and power plants. Pollutant layers in the free troposphere (FT) can be a result of emissions to the north in Peru or long-range transport from the west. At a given altitude in the FT (up to 3 km), the dew point varies by 40 C, with dry air descending from the upper atmosphere and moist air having a boundary layer (BL) contribution. Ascent of BL air to a cold high altitude results in the condensation and precipitation removal of all but a few percent of BL water, along with aerosol that served as CCN. Thus, aerosol volume decreases with dew point in the FT. Aerosol size spectra have a bimodal structure in the MBL and an intermediate-diameter unimodal distribution in the FT. Comparing cloud droplet number concentration (CDNC) and pre-cloud aerosol (Dp > 100 nm) gives a linear relation up to a number concentration of ~150 cm^-3, followed by a less than proportional increase in CDNC at higher aerosol number concentration. A number balance between below-cloud aerosol and cloud droplets indicates that ~25% of aerosol in the PCASP size range are interstitial (not activated). One hundred and two constant-altitude cloud transects were identified and used to determine properties of the interstitial aerosol. One transect is examined in detail as a case study. Approximately 25 to 50% of aerosol with Dp > 110 nm were not activated, the difference between the two approaches possibly representing shattered cloud droplets or an unknown artifact. CDNC and interstitial aerosol were anti-correlated in all cloud transects, consistent with the occurrence of dry in-cloud areas due to entrainment or circulation mixing.

  9. A stress ecology framework for comprehensive risk assessment of diffuse pollution.

    PubMed

    van Straalen, Nico M; van Gestel, Cornelis A M

    2008-12-01

    Environmental pollution is traditionally classified as either localized or diffuse. Local pollution comes from a point source that emits a well-defined cocktail of chemicals, distributed in the environment in the form of a gradient around the source. Diffuse pollution comes from many sources, small and large, that cause an erratic distribution of chemicals, interacting with those from other sources into a complex mixture of low to moderate concentrations over a large area. There is no good method for ecological risk assessment of such types of pollution. We argue that effects of diffuse contamination in the field must be analysed in the wider framework of stress ecology. A multivariate approach can be applied to filter effects of contaminants from the many interacting factors at the ecosystem level. Four case studies are discussed: (1) functional and structural properties of terrestrial model ecosystems, (2) physiological profiles of microbial communities, (3) detritivores in reedfield litter, and (4) benthic invertebrates in canal sediment. In each of these cases the data were analysed by multivariate statistics and associations between ecological variables and the levels of contamination were established. We argue that the stress ecology framework is an appropriate assessment instrument for discriminating effects of pollution from other anthropogenic disturbances and naturally varying factors.

  10. Inferring Models of Bacterial Dynamics toward Point Sources

    PubMed Central

    Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve

    2015-01-01

    Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373

  11. The source mechanisms of low frequency events in volcanoes - a comparison of synthetic and real seismic data on Soufriere Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J. W.

    2012-04-01

    Low frequency seismic signals are one class of volcano-seismic events that have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements. Amongst others, Neuberg et al. (2006) proposed a conceptual model for the trigger of low frequency events at Montserrat involving the brittle failure of magma in the glass transition in response to high shear stresses during the upward movement of magma in the volcanic edifice. For this study, synthetic seismograms were generated following the proposed concept of Neuberg et al. (2006) by using an extended source modelled as an octagonal arrangement of double couples approximating a circular ring fault. For comparison, synthetic seismograms were also generated using single forces only. For both scenarios, the synthetic seismograms were generated using a seismic station distribution as encountered on Soufriere Hills Volcano, Montserrat. To gain a better quantitative understanding of the driving forces of low frequency events, inversions for the physical source mechanisms have become increasingly common. Therefore, we perform moment tensor inversions (Dreger, 2003) using the synthetic data as well as a chosen set of seismograms recorded on Soufriere Hills Volcano. The inversions are carried out under the (wrong) assumption of an underlying point source rather than an extended source as the trigger mechanism of the low frequency seismic events. We will discuss differences between the inversion results, and how to interpret the moment tensor components (double couple, isotropic, or CLVD), which were based on a point source, in terms of an extended source.

  12. Study of dose calculation on breast brachytherapy using prism TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fendriani, Yoza; Haryanto, Freddy

    2015-09-30

    PRISM is a non-commercial Treatment Planning System (TPS) developed at the University of Washington. In Indonesia, many cancer hospitals use expensive commercial TPSs. This study aims to investigate Prism TPS as applied to brachytherapy dose distributions, taking into account the effects of source position and inhomogeneities. The results will be applicable to clinical treatment planning. Dose calculation has been implemented for a water phantom and for CT scan images of breast cancer, using a point source and a line source, and the study is divided into two cases. In the first case, the Ir-192 seed source is located at the center of the treatment volume. In the second case, the source position is gradually changed. The dose calculation for every case was performed on a homogeneous and an inhomogeneous phantom with dimensions of 20 × 20 × 20 cm^3. The inhomogeneous phantom has an inhomogeneity volume of 2 × 2 × 2 cm^3. The results of the dose calculations using the PRISM TPS were compared to literature data. The dose rates calculated with the PRISM TPS show good agreement with the Plato TPS and with another study published by Ramdhani, with no deviations greater than ±4% for any case. Dose calculations in the inhomogeneous and homogeneous cases show similar results, indicating that Prism TPS performs well for brachytherapy dose calculation but is not sensitive to inhomogeneities. Thus, the dose calculation parameters developed in this study were found to be applicable for clinical treatment planning of brachytherapy.
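
    For orientation, a point-source dose calculation in water is dominated by the inverse-square geometry factor, modified by a radial dose function accounting for attenuation and scatter in the medium; the toy below uses an invented linear radial dose function and dose-rate constant rather than published Ir-192 consensus data, and ignores anisotropy:

      def point_source_dose_rate(r_cm, S_k=1.0, dose_rate_const=1.12, r0=1.0):
          """Toy point-source dose rate in water at distance r_cm (cm).
          S_k: air-kerma strength; dose_rate_const and the linear g(r) below are
          invented illustrative values, not consensus source data."""
          g = max(0.0, 1.0 - 0.005 * (r_cm - r0))   # stand-in radial dose function
          return S_k * dose_rate_const * (r0 / r_cm) ** 2 * g

      for r in (0.5, 1.0, 2.0, 5.0):
          print(f"r = {r} cm  ->  relative dose rate = {point_source_dose_rate(r):.3f}")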

  13. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour

    PubMed Central

    Moranda, Arianna

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both inside the harbour and outside but close to it, may contribute to a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). A total of 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were analysed in the collected samples. PCA of the full sample set allowed the assessment of 8 main possible point sources, while the refining ratio-matching step identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. A map analysis made it possible to attribute two internal sources of pollution directly to terminal activities. The study is the continuation of a previous work that assessed Savona-Vado Harbour pollution levels and suggested strategies to regulate harbour activities. PMID:29270328

  14. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour.

    PubMed

    Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both inside the harbour and outside but close to it, may contribute to a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). A total of 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were analysed in the collected samples. PCA of the full sample set allowed the assessment of 8 main possible point sources, while the refining ratio-matching step identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. A map analysis made it possible to attribute two internal sources of pollution directly to terminal activities. The study is the continuation of a previous work that assessed Savona-Vado Harbour pollution levels and suggested strategies to regulate harbour activities.
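
    The PCA-plus-ratio-matching idea described in these two records can be sketched as follows. This is not the authors' code: the concentration matrix, the candidate-selection rule, the source signature, and the tolerance are all hypothetical, and only the two-step structure (PCA screening followed by ratio matching) follows the description above.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)

        # rows = sampling points, columns = measured chemicals (hypothetical data)
        chemicals = ["Cd", "Pb", "PAH", "C>12"]
        X = rng.lognormal(mean=0.0, sigma=1.0, size=(81, len(chemicals)))

        # Step 1: PCA on standardised concentrations to find dominant patterns
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        scores = PCA(n_components=2).fit_transform(Xs)

        # Points with extreme scores on the first component are candidate point sources
        candidates = np.argsort(scores[:, 0])[-8:]

        # Step 2: ratio matching -- compare chemical ratios at each candidate point
        # with the characteristic ratio of a hypothetical source signature
        source_signature = {"PAH/Pb": 2.5}   # hypothetical terminal signature
        tolerance = 0.25                     # accept ratios within 25%

        def ratio(sample, num, den):
            return sample[chemicals.index(num)] / sample[chemicals.index(den)]

        matched = [int(i) for i in candidates
                   if abs(ratio(X[i], "PAH", "Pb") - source_signature["PAH/Pb"])
                   <= tolerance * source_signature["PAH/Pb"]]
        print("candidate points:", sorted(int(i) for i in candidates))
        print("points matching the hypothetical PAH signature:", matched)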

  15. NON-POINT SOURCE POLLUTION

    EPA Science Inventory

    Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...

  16. Coordinated Control Method of Voltage and Reactive Power for Active Distribution Networks Based on Soft Open Point

    DOE PAGES

    Li, Peng; Ji, Haoran; Wang, Chengshan; ...

    2017-03-22

    The increasing penetration of distributed generators (DGs) exacerbates the risk of voltage violations in active distribution networks (ADNs). Conventional voltage regulation devices, limited by their physical constraints, struggle to meet the requirement of high-precision, real-time voltage and VAR control (VVC) when DGs fluctuate frequently. A soft open point (SOP), a flexible power electronic device, can instead be used as a continuous reactive power source to realize fast voltage regulation. Considering the cooperation of the SOP with multiple regulation devices, this paper proposes a coordinated VVC method based on SOP for ADNs. Firstly, a time-series model of coordinated VVC is developed to minimize operation costs and eliminate voltage violations in ADNs. Then, by applying linearization and conic relaxation, the original nonconvex mixed-integer nonlinear optimization model is converted into a mixed-integer second-order cone programming (MISOCP) model that can be solved efficiently enough to meet the requirement of rapid voltage regulation. Case studies on the IEEE 33-node and IEEE 123-node systems illustrate the effectiveness of the proposed method.

  17. Coordinated Control Method of Voltage and Reactive Power for Active Distribution Networks Based on Soft Open Point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Peng; Ji, Haoran; Wang, Chengshan

    The increasing penetration of distributed generators (DGs) exacerbates the risk of voltage violations in active distribution networks (ADNs). Conventional voltage regulation devices, limited by their physical constraints, struggle to meet the requirement of high-precision, real-time voltage and VAR control (VVC) when DGs fluctuate frequently. A soft open point (SOP), a flexible power electronic device, can instead be used as a continuous reactive power source to realize fast voltage regulation. Considering the cooperation of the SOP with multiple regulation devices, this paper proposes a coordinated VVC method based on SOP for ADNs. Firstly, a time-series model of coordinated VVC is developed to minimize operation costs and eliminate voltage violations in ADNs. Then, by applying linearization and conic relaxation, the original nonconvex mixed-integer nonlinear optimization model is converted into a mixed-integer second-order cone programming (MISOCP) model that can be solved efficiently enough to meet the requirement of rapid voltage regulation. Case studies on the IEEE 33-node and IEEE 123-node systems illustrate the effectiveness of the proposed method.
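
    The conic relaxation mentioned in these two records can be illustrated with a toy example. The sketch below is not the paper's MISOCP formulation: it sets up a hypothetical two-bus radial feeder in which the nonconvex branch-flow equality l = (P² + Q²)/v is relaxed to the second-order cone l ≥ (P² + Q²)/v, and an SOP-like device injects reactive power at the load bus; all per-unit values are assumed.

        import cvxpy as cp

        # Toy two-bus radial feeder with an SOP-like reactive injection at bus 2.
        # All per-unit values are hypothetical, not data from the paper.
        r, x = 0.03, 0.06          # branch resistance / reactance (p.u.)
        p_load, q_load = 0.8, 0.4  # load at bus 2 (p.u.)
        v1 = 1.0                   # squared voltage magnitude at the substation bus
        q_sop_max = 0.3            # SOP reactive power capability (p.u.)

        P = cp.Variable()              # active power flow on the branch
        Q = cp.Variable()              # reactive power flow on the branch
        l = cp.Variable(nonneg=True)   # squared branch current
        v2 = cp.Variable()             # squared voltage magnitude at bus 2
        q_sop = cp.Variable()          # SOP reactive injection at bus 2

        constraints = [
            # power balance at bus 2 (branch losses r*l, x*l)
            P - r * l == p_load,
            Q - x * l == q_load - q_sop,
            # voltage drop along the branch (DistFlow)
            v2 == v1 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
            # conic relaxation of l == (P^2 + Q^2) / v1
            cp.square(P) + cp.square(Q) <= l * v1,
            # operating limits
            cp.abs(q_sop) <= q_sop_max,
            v2 >= 0.95**2, v2 <= 1.05**2,
        ]

        # minimise branch losses; at the optimum the cone constraint is tight here
        prob = cp.Problem(cp.Minimize(r * l), constraints)
        prob.solve()
        print(f"losses = {r * l.value:.4f} p.u., q_sop = {q_sop.value:.3f} p.u., "
              f"V2 = {v2.value**0.5:.4f} p.u.")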

  18. Numerical modeling of laser assisted tape winding process

    NASA Astrophysics Data System (ADS)

    Zaami, Amin; Baran, Ismet; Akkerman, Remko

    2017-10-01

    Laser assisted tape winding (LATW) has become an increasingly popular way of producing thermoplastic products such as ultra-deep-water risers, gas tanks, and structural parts for aerospace applications. Predicting the temperature in LATW has been of great interest, since the temperature at the nip point plays a key role in the mechanical performance of the interface. Modeling the LATW process involves several challenges, such as the interaction of optics and heat transfer. In the current study, numerical modeling of the optical behavior of laser radiation on circular surfaces is investigated based on a ray tracing and non-specular reflection model. The non-specular reflection is implemented by considering the anisotropic reflective behavior of the fiber-reinforced thermoplastic tape using a bidirectional reflectance distribution function (BRDF). The proposed model includes a three-dimensional circular geometry, in which the effects of reflection from different regions of the circular surface, as well as the effect of process parameters on the temperature distribution, are studied. The heat transfer model is constructed using a fully implicit method, and the effect of process parameters on the nip-point temperature is examined. Furthermore, several laser distributions, including Gaussian and linear, are examined, which has not been considered in the literature so far.
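
    As a minimal illustration of the ray-tracing ingredient described above (not the 3-D BRDF model of the paper), the sketch below traces a single ray onto a circular surface and reflects it specularly; a BRDF-based non-specular model would replace the mirror reflection with a distribution of reflected directions. The radius, ray origin, and direction are hypothetical.

        import numpy as np

        def intersect_circle(origin, direction, radius):
            """Return the nearest intersection of a ray with a circle centred at
            the origin, or None if the ray misses the circle."""
            o = np.asarray(origin, float)
            d = np.asarray(direction, float)
            d = d / np.linalg.norm(d)
            b = 2.0 * np.dot(o, d)
            c = np.dot(o, o) - radius**2
            disc = b * b - 4.0 * c
            if disc < 0:
                return None
            t = (-b - np.sqrt(disc)) / 2.0      # nearest hit along the ray
            return o + t * d if t >= 0 else None

        def specular_reflect(direction, normal):
            """Mirror reflection of an incoming direction about the surface normal."""
            d = np.asarray(direction, float)
            n = np.asarray(normal, float)
            n = n / np.linalg.norm(n)
            return d - 2.0 * np.dot(d, n) * n

        # Hypothetical laser ray hitting a substrate of radius 50 mm
        radius = 50.0
        hit = intersect_circle(origin=[-200.0, 10.0], direction=[1.0, 0.0], radius=radius)
        if hit is not None:
            normal = hit / np.linalg.norm(hit)   # outward normal of the circle
            reflected = specular_reflect([1.0, 0.0], normal)
            print("hit point:", hit, "reflected direction:", reflected)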

  19. Synoptic, Global Mhd Model For The Solar Corona

    NASA Astrophysics Data System (ADS)

    Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.

    2007-05-01

    The common techniques for mimicking solar corona heating and solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for Alfvén wave turbulence; 2) an empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to get a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line back to the solar surface. At the surface, gravity is known and the kinetic energy is negligible. Therefore, we can obtain the surface distribution of γ as a function of the final wind speed originating from each point. By interpolating γ to a spherically uniform value at the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady-state MHD solution for the solar corona. We present model results for different Carrington Rotations.
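
    A back-of-the-envelope version of the Bernoulli mapping described above can be sketched as follows; this is only one reading of the procedure, with an assumed coronal base temperature and illustrative wind speeds, not the actual model inputs. Evaluating u²/2 + [γ/(γ-1)] p/ρ - GM/r at the surface (u ≈ 0) and at large distance (thermal and gravitational terms negligible) gives [γ/(γ-1)] p₀/ρ₀ = u_∞²/2 + GM/R, which can be solved for γ.

        import numpy as np

        # Sketch of the Bernoulli-integral mapping from an asymptotic (WSA-like)
        # wind speed to a surface polytropic index gamma. The base temperature,
        # mean molecular weight and sample speeds are assumed for illustration.
        G     = 6.674e-11   # gravitational constant (SI)
        M_sun = 1.989e30    # solar mass (kg)
        R_sun = 6.957e8     # solar radius (m)
        k_B   = 1.381e-23   # Boltzmann constant (J/K)
        m_p   = 1.673e-27   # proton mass (kg)
        mu    = 0.6         # mean molecular weight, assumed
        T0    = 1.5e6       # coronal base temperature (K), assumed

        def gamma_from_wind_speed(u_inf):
            """Solve gamma/(gamma-1) * p0/rho0 = u_inf^2/2 + G*M_sun/R_sun for gamma."""
            c0_sq = k_B * T0 / (mu * m_p)           # p0 / rho0 at the surface
            A = (0.5 * u_inf**2 + G * M_sun / R_sun) / c0_sq
            return A / (A - 1.0)

        for u_kms in (300.0, 450.0, 700.0):         # slow to fast wind, illustrative
            g = gamma_from_wind_speed(u_kms * 1e3)
            print(f"u_inf = {u_kms:5.0f} km/s  ->  gamma = {g:.3f}")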

  20. Construction of a 1 MeV Electron Accelerator for High Precision Beta Decay Studies

    NASA Astrophysics Data System (ADS)

    Longfellow, Brenden

    2014-09-01

    Beta decay energy calibration for detectors is typically established using conversion sources. However, the calibration points from conversion sources are not evenly distributed over the beta energy spectrum, and the foil backing of the conversion sources produces perturbations in the calibration spectrum. To improve on this, an external, tunable electron beam coupled in by a magnetic field can be used to calibrate the detector. The 1 MeV electron accelerator in development at Triangle Universities Nuclear Laboratory (TUNL) utilizes a pelletron charging system. The electron gun delivers 10⁴ electrons per second over an energy range of 50 keV to 1 MeV and is pulsed at a 10 kHz rate with a pulse width of a few ns. The magnetic field in the spectrometer is 1 T, and guiding fields of 0.01 to 0.05 T for the electron gun are used to produce a range of pitch angles. This accelerator can be used to calibrate detectors evenly over its energy range and to determine the detector response over a range of pitch angles. TUNL REU Program.
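
    One way to see how the guiding fields can produce a range of pitch angles is through conservation of the first adiabatic invariant, sin²θ/B ≈ const, as the electrons move from the low guiding field into the 1 T spectrometer field. The sketch below applies this relation for a few hypothetical injection angles; it is an interpretation of the setup described above, not a calculation from the project.

        import numpy as np

        # Pitch-angle mapping between the electron-gun guiding field and the 1 T
        # spectrometer field, assuming sin^2(theta)/B is conserved. The injection
        # angles are hypothetical; the field values come from the abstract.
        B_spec = 1.0                                 # spectrometer field (T)
        guiding_fields = [0.01, 0.02, 0.05]          # guiding fields at the gun (T)
        theta_gun_deg = np.array([1.0, 3.0, 5.0, 10.0])   # injection pitch angles, assumed

        for B_gun in guiding_fields:
            s = np.sin(np.radians(theta_gun_deg)) * np.sqrt(B_spec / B_gun)
            # electrons with s > 1 would be magnetically mirrored before the 1 T region
            theta_spec = np.where(s <= 1.0,
                                  np.degrees(np.arcsin(np.clip(s, 0.0, 1.0))),
                                  np.nan)
            print(f"B_gun = {B_gun:4.2f} T:",
                  ["mirrored" if np.isnan(t) else f"{t:5.1f} deg" for t in theta_spec])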
