Sample records for point source models

  1. Inferring Models of Bacterial Dynamics toward Point Sources

    PubMed Central

    Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve

    2015-01-01

    Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373
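
    A minimal sketch of the kind of detection-driven search this abstract describes: a run-and-tumble walker whose tumbling is suppressed by recent Poisson detection events, with the detection rate falling off as 1/r^2 from a point source. All rates, the memory kernel, and the geometry are illustrative assumptions, not parameters inferred in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        source = np.array([0.0, 0.0])            # point chemoattractant source
        pos = np.array([50.0, 0.0])              # bacterium start position (um)
        heading = rng.uniform(0, 2 * np.pi)      # swimming direction (rad)
        dt, speed = 0.1, 20.0                    # time step (s), run speed (um/s)
        base_tumble_rate = 1.0                   # baseline tumble rate (1/s), assumed
        memory = 0.0                             # exponential memory of recent detections

        for _ in range(5000):
            r = max(np.linalg.norm(pos - source), 1.0)
            detection_rate = 500.0 / r**2        # stochastic event detection ~ 1/r^2 (assumed)
            hits = rng.poisson(detection_rate * dt)
            memory = 0.9 * memory + hits         # crude low-pass filter of detections
            # recent detections suppress tumbling, biasing the walk toward the source
            if rng.random() < base_tumble_rate * dt / (1.0 + memory):
                heading = rng.uniform(0, 2 * np.pi)
            pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])

        print("final distance to source:", np.linalg.norm(pos - source))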

  2. An improved DPSM technique for modelling ultrasonic fields in cracked solids

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique

    2007-04-01

    In recent years the Distributed Point Source Method (DPSM) has been used to model various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interface and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent; complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with those from the proposed modified technique, which nullifies their contributions. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
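
    The superposition at the heart of DPSM reduces to summing free-space spherical-wave Green's functions, exp(ikr)/r, over a layer of point sources. A sketch of that step alone, for a scalar field on the transducer axis; the geometry, frequency, and uniform source strengths are assumptions, and no CSR directivity or shadow-region correction is modeled.

        import numpy as np

        k = 2 * np.pi * 1.0e6 / 1500.0                 # wavenumber for 1 MHz in water (assumed)
        # one layer of point sources placed just behind a 10 mm transducer face
        xs = np.linspace(-5e-3, 5e-3, 21)
        sources = np.array([[x, 0.0, -0.5e-3] for x in xs])
        strengths = np.ones(len(sources))              # uniform strengths (assumed)

        def field(point):
            """Superpose spherical-wave Green's functions exp(ikr)/r of all sources."""
            r = np.linalg.norm(sources - point, axis=1)
            return np.sum(strengths * np.exp(1j * k * r) / r)

        for z in (5e-3, 10e-3, 20e-3):                 # axial field points
            p = field(np.array([0.0, 0.0, z]))
            print(f"z = {z * 1e3:4.1f} mm   |field| = {abs(p):8.1f}")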

  3. Strong ground motion simulation of the 2016 Kumamoto earthquake of April 16 using multiple point sources

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yosuke; Nozu, Atsushi

    2017-02-01

    The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region, and some records were considered to be affected by the rupture directivity effect. This earthquake was therefore suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. This result indicates the necessity of improving the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
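
    The building block of the simulation described here is the omega-square source spectrum. A one-function sketch (the moment and corner frequency below are illustrative, not the values determined for the Kumamoto subevents):

        import numpy as np

        def omega_square_spectrum(f, M0, fc):
            """Moment-rate amplitude spectrum |M(f)| = M0 / (1 + (f/fc)^2)."""
            return M0 / (1.0 + (f / fc) ** 2)

        f = np.logspace(-1, 1, 5)     # 0.1-10 Hz
        M0 = 1.0e19                   # subevent seismic moment, N*m (illustrative)
        fc = 0.3                      # subevent corner frequency, Hz (illustrative)
        print(omega_square_spectrum(f, M0, fc))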

  4. Calculation and analysis of the non-point source pollution in the upstream watershed of the Panjiakou Reservoir, People's Republic of China

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Tang, L.

    2007-05-01

    Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loadings of non-point and point source pollution and their distribution over the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land use data are reclassified, and a soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area; the calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The temporal and spatial distributions of flow, sediment and non-point source pollution were analyzed on the basis of the simulated results. The calculated loadings differ markedly among the hydrologic years: the loading of non-point source pollution is relatively large in the wet year and small in the dry year, since non-point source pollutants are mainly transported by runoff. Within a year, the pollution loading is mainly produced in the flood season. Because SWAT is a distributed model, model output can be viewed as it varies across the basin, so the critical areas and reaches in the study area can be identified. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for the Panjiakou Reservoir are presented according to the analysis of the model results.

  5. Modeling deep brain stimulation: point source approximation versus realistic representation of the electrode

    NASA Astrophysics Data System (ADS)

    Zhang, Tianhe C.; Grill, Warren M.

    2010-12-01

    Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogeneous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
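
    In a homogeneous isotropic volume conductor, the point-source approximation tested here has the closed form V(r) = I / (4πσr). A quick evaluation with an illustrative stimulus current and a typical assumed tissue conductivity:

        import numpy as np

        def point_source_potential(I, sigma, r):
            """Extracellular potential (V) of a monopolar point current source."""
            return I / (4.0 * np.pi * sigma * r)

        I = -1.0e-3      # cathodic stimulus current, A (illustrative)
        sigma = 0.2      # tissue conductivity, S/m (typical assumed value)
        for r_mm in (0.5, 1.0, 2.0, 4.0):
            V = point_source_potential(I, sigma, r_mm * 1e-3)
            print(f"r = {r_mm:3.1f} mm -> V = {V * 1e3:7.2f} mV")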

  6. MacBurn's cylinder test problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shestakov, Aleksei I.

    2016-02-29

    This note describes a test problem for MacBurn which illustrates its performance. The source is centered inside a cylinder whose axial-extent-to-radius ratio is chosen such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source; this configuration models a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both conservation and the energy fractions degrade. However, the errors decrease as the ratio of source radius to domain size decreases. Modeling the source as a disk increases run-times.
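
    The geometric setup of the test can be checked directly: a point source at the center of a cylinder of radius R and half-length h sends the fraction (1 - cos θ)/2 of its energy to each end cap, with tan θ = R/h, so each end receives exactly 1/4 when h = R/√3. A Monte Carlo verification of that geometry (this checks the analytic claim only, not MacBurn itself):

        import numpy as np

        rng = np.random.default_rng(1)
        R = 1.0
        h = R / np.sqrt(3.0)      # half-length chosen so each end receives 1/4
        n = 1_000_000

        mu = rng.uniform(-1.0, 1.0, n)        # cos(polar angle) of isotropic rays
        sin_t = np.sqrt(1.0 - mu**2)
        # a ray reaches the plane |z| = h at radial distance h * tan(theta);
        # it hits an end cap if that distance is less than R
        hits_end = h * sin_t / np.maximum(np.abs(mu), 1e-12) < R
        print(f"fraction per end = {hits_end.mean() / 2.0:.4f} (expected 0.25)")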

  7. A guide to differences between stochastic point-source and stochastic finite-fault simulations

    USGS Publications Warehouse

    Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.

    2009-01-01

    Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.

  8. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    EPA Science Inventory

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse River...

  9. An infrared sky model based on the IRAS point source data

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah

    1990-01-01

    A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, and molecular ring, with the source populations characterized by realistic absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.

  10. Interferometry with flexible point source array for measuring complex freeform surface and its design algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo

    2018-06-01

    The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point source array for each piece before TWI measurement. To form a point source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.

  11. Recent updates in developing a statistical pseudo-dynamic source-modeling framework to capture the variability of earthquake rupture scenarios

    NASA Astrophysics Data System (ADS)

    Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee

    2017-04-01

    Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against recorded ground motion data from past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the pseudo-dynamic source-modeling scheme by adopting the nonparametric co-regionalization algorithm, originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and how efficiently 1-point and 2-point statistics capture the variability of ground motions.
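
    The 1-point/2-point-statistics idea can be sketched by drawing a random slip profile whose marginal distribution (1-point) and spatial autocovariance (2-point) are prescribed. The sketch below uses a lognormal marginal and an exponential covariance with a Cholesky factor; the actual scheme uses a nonparametric co-regionalization algorithm, which this does not implement, and all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        n, dx = 200, 0.5                  # 100 km fault discretized at 0.5 km
        x = np.arange(n) * dx
        corr_len = 10.0                   # correlation length, km (assumed)
        sigma = 0.4                       # std of log-slip (assumed)

        # 2-point statistics: exponential autocovariance of log-slip
        C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

        # 1-point statistics: lognormal slip with ~2 m median
        log_slip = np.log(2.0) + L @ rng.standard_normal(n)
        slip = np.exp(log_slip)
        print(f"mean slip = {slip.mean():.2f} m, max slip = {slip.max():.2f} m")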

  12. Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu

    2015-05-01

    Data from the Fermi Large Area Telescope suggests that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
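
    The intuition behind the flux-PDF statistic can be reproduced with a toy simulation: smooth (dark-matter-like) emission gives pure Poisson counts in each pixel, while a population of unresolved point sources gives compound-Poisson counts with the same mean but a wider distribution. The sketch simulates the two cases rather than evaluating the generating functions analytically, and all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        npix, mean_counts = 100_000, 5.0

        # smooth emission: pure Poisson photon counts in every pixel
        smooth = rng.poisson(mean_counts, npix)

        # unresolved point sources: Poisson number of sources per pixel,
        # each contributing Poisson counts (same mean by construction)
        mean_srcs = 0.5
        counts_per_src = mean_counts / mean_srcs
        nsrc = rng.poisson(mean_srcs, npix)
        point_sources = rng.poisson(nsrc * counts_per_src)

        print("means    :", smooth.mean(), point_sources.mean())
        print("variances:", smooth.var(), point_sources.var())   # PS case is wider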

  13. A New Simplified Source Model to Explain Strong Ground Motions from a Mega-Thrust Earthquake - Application to the 2011 Tohoku Earthquake (Mw9.0) -

    NASA Astrophysics Data System (ADS)

    Nozu, A.

    2013-12-01

    A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Combining it with the Fourier phase characteristics of a smaller event then yields the time history of strong ground motions from the subevent. Finally, by summing contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent: longitude, latitude, depth, rupture time, seismic moment and corner frequency. The finite size of the subevent can be taken into account because the model includes the subevent corner frequency, which is inversely proportional to the length of the subevent. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. The results were then compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes, and much smaller for Fourier spectra. This evidence indicates the usefulness of the pseudo point-source model. Figure: comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composition of the two horizontal components, smoothed with a Parzen window with a bandwidth of 0.05 Hz.
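
    The synthesis chain described here, source spectrum times path effect times site amplification combined with an empirical Fourier phase, can be sketched end to end. In the sketch the path term is simple geometric spreading with anelastic attenuation, the site factor is a placeholder, and a random phase stands in for the empirical small-event phase; all parameter values are assumptions.

        import numpy as np

        n, dt = 4096, 0.05
        f = np.fft.rfftfreq(n, dt)
        f[0] = 1e-6                                  # avoid division by zero at DC

        M0, fc = 3.0e20, 0.15                        # subevent moment (N*m) and corner freq (Hz), assumed
        source = M0 / (1.0 + (f / fc) ** 2)          # omega-square source spectrum
        R, Q, beta = 80e3, 200.0, 3500.0             # distance (m), quality factor, shear speed (m/s)
        path = np.exp(-np.pi * f * R / (Q * beta)) / R
        site = np.ones_like(f)                       # empirical site amplification (placeholder)

        rng = np.random.default_rng(4)
        phase = np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))  # stand-in for empirical phase
        waveform = np.fft.irfft(source * path * site * phase, n)
        print("peak |waveform| (unscaled units):", np.abs(waveform).max())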

  14. Evaluation of the AnnAGNPS model for predicting runoff and sediment yield in a small Mediterranean agricultural watershed in Navarre (Spain)

    USDA-ARS?s Scientific Manuscript database

    AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assist...

  15. Point kernel calculations of skyshine exposure rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roseberry, M.L.; Shultis, J.K.

    1982-02-01

    A simple point kernel model is presented for the calculation of skyshine exposure rates arising from the atmospheric reflection of gamma radiation produced by a vertically collimated or a shielded point source. This model is shown to be in good agreement with benchmark experimental data from a ⁶⁰Co source for distances out to 700 m.
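
    A generic point kernel of the kind referred to here combines the 1/(4πr²) geometric factor with exponential attenuation and a buildup factor, K(r) = S·B(μr)·e^(−μr)/(4πr²). The sketch below uses a crude linear buildup factor and a rough attenuation coefficient for Co-60 photons in air; the paper's skyshine kernel additionally models atmospheric reflection, which this omits.

        import numpy as np

        def point_kernel_flux(S, mu, r):
            """Point-kernel flux with a crude linear buildup factor B = 1 + mu*r."""
            B = 1.0 + mu * r
            return S * B * np.exp(-mu * r) / (4.0 * np.pi * r**2)

        S = 3.7e10      # source strength, photons/s (~1 Ci, illustrative)
        mu = 0.0065     # rough linear attenuation of ~1.25 MeV photons in air, 1/m
        for r in (100.0, 300.0, 700.0):
            print(f"r = {r:5.0f} m   flux = {point_kernel_flux(S, mu, r):.3e} m^-2 s^-1")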

  16. MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID

    EPA Science Inventory

    Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradual...

  17. Microbial Source Module (MSM): Documenting the Science and Software for Discovery, Evaluation, and Integration

    EPA Science Inventory

    The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consumed...

  18. A model of the 8-25 micron point source infrared sky

    NASA Technical Reports Server (NTRS)

    Wainscoat, Richard J.; Cohen, Martin; Volk, Kevin; Walker, Helen J.; Schwartz, Deborah E.

    1992-01-01

    We present a detailed model for the IR point-source sky that comprises geometrically and physically realistic representations of the Galactic disk, bulge, stellar halo, spiral arms (including the 'local arm'), molecular ring, and the extragalactic sky. We represent each of the distinct Galactic components by up to 87 types of Galactic source, each fully characterized by scale heights, space densities, and absolute magnitudes at BVJHK, 12, and 25 microns. The model is guided by a parallel Monte Carlo simulation of the Galaxy at 12 microns. The content of our Galactic source table constitutes a good match to the 12 micron luminosity function in the simulation, as well as to the luminosity functions at V and K. We are able to produce differential and cumulative IR source counts for any bandpass lying fully within the IRAS Low-Resolution Spectrometer's range (7.7-22.7 microns) as well as for the IRAS 12 and 25 micron bands. These source counts match the IRAS observations well. The model can be used to predict the character of the point source sky expected for observations from IR space experiments.

  19. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
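
    The iterative re-weighted strategy CMOSS builds on can be illustrated with a bare-bones FOCUSS loop: each iteration solves a weighted minimum-norm problem with weights taken from the previous solution. This sketch uses a random stand-in lead field and the plain FOCUSS weight; it does not implement the neighbor-based weight that distinguishes CMOSS.

        import numpy as np

        rng = np.random.default_rng(5)
        m, n = 20, 100                          # 20 sensors, 100 candidate source points
        A = rng.standard_normal((m, n))         # lead-field matrix (random stand-in)
        x_true = np.zeros(n)
        x_true[[30, 70]] = [1.0, -0.8]          # two sparse sources
        b = A @ x_true                          # noiseless measurements

        x = np.ones(n)
        for _ in range(25):                     # FOCUSS iterations
            W = np.diag(np.abs(x))              # weight from the previous solution
            x = W @ np.linalg.pinv(A @ W) @ b   # weighted minimum-norm solution

        print("recovered support:", sorted(np.argsort(np.abs(x))[-2:]))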

  20. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events, but discrepancies appear for the largest events at regional distances (beyond about 100 km). The discrepancy at large magnitudes suggests further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
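
    In the Brune point-source model used here, the corner frequency follows from the stress parameter and seismic moment as f0 = 4.906e6 · β · (Δσ/M0)^(1/3), with β in km/s, Δσ in bars, and M0 in dyne·cm. A quick evaluation at the paper's 50-bar stress parameter (the shear-wave speed and moment values are illustrative):

        def brune_corner_frequency(stress_drop_bars, M0_dyne_cm, beta_km_s=3.7):
            """Brune corner frequency (Hz) from stress parameter and moment."""
            return 4.906e6 * beta_km_s * (stress_drop_bars / M0_dyne_cm) ** (1.0 / 3.0)

        for mag, M0 in ((5.0, 3.5e23), (6.0, 1.1e25), (7.0, 3.5e26)):
            print(f"M {mag}: f0 = {brune_corner_frequency(50.0, M0):.2f} Hz")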

  1. An analysis of lamp irradiation in ellipsoidal mirror furnaces

    NASA Astrophysics Data System (ADS)

    Rivas, Damián; Vázquez-Espí, Carlos

    2001-03-01

    The irradiation generated by halogen lamps in ellipsoidal mirror furnaces is analyzed, in configurations suited to the study of the floating-zone technique for crystal growth in microgravity conditions. A line-source model for the lamp (instead of a point source) is developed, so that the longitudinal extent of the filament is taken into account. With this model the case of defocussed lamps can be handled analytically. In the model the lamp is formed by an aggregate of point-source elements, placed along the axis of the ellipsoid. For these point sources (which, in general, are defocussed) an irradiation model is formulated, within the approximation of geometrical optics. The irradiation profiles obtained (both on the lateral surface and on the inner base of the cylindrical sample) are analyzed. They present singularities related to the caustics formed by the family of reflected rays; these caustics are also analyzed. The lamp model is combined with a conduction-radiation model to study the temperature field in the sample. The effects of defocussing the lamp (common practice in crystal growth) are studied; advantages and also some drawbacks are pointed out. Comparison with experimental results is made.

  2. Microbial Source Module (MSM): Documenting the Science ...

    EPA Pesticide Factsheets

    The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consumed and produced by the MSM which is based on the HSPF (Bicknell et al., 1997) Bacterial Indicator Tool (EPA, 2013b, 2013c). Non-point sources include numbers, locations, and shedding rates of domestic agricultural animals (dairy and beef cows, swine, poultry, etc.) and wildlife (deer, duck, raccoon, etc.). Monthly maximum microbial storage and accumulation rates on the land surface, adjusted for die-off, are computed over an entire season for four land-use types (cropland, pasture, forest, and urbanized/mixed-use) for each subwatershed. Monthly point source microbial loadings to instream locations (i.e., stream segments that drain individual sub-watersheds) are combined and determined for septic systems, direct instream shedding by cattle, and POTWs/WWTPs (Publicly Owned Treatment Works/Wastewater Treatment Plants). The MSM functions within a larger modeling system that characterizes human-health risk resulting from ingestion of water contaminated with pathogens. The loading estimates produced by the MSM are input to the HSPF model that simulates flow and microbial fate/transport within a watershed. Microbial counts within recreational waters are then input to the MRA-IT model (Soller et al. ...

  3. PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH

    EPA Science Inventory

    A plume-in-grid (PinG) approach has been designed to provide a realistic treatment of the simulation of the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid-scale phase within an Eulerian grid modeling framework. The PinG sci...

  4. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    PubMed

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results for the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow flowing surface water network as well as nitrogen emission to the air from the warm oxygen deficient waters are certainly partly responsible, but also wetlands along the river banks could play an important role as nutrient sinks.

  5. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

    The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.

  6. A spatial model to aggregate point-source and nonpoint-source water-quality data for large areas

    USGS Publications Warehouse

    White, D.A.; Smith, R.A.; Price, C.V.; Alexander, R.B.; Robinson, K.W.

    1992-01-01

    More objective and consistent methods are needed to assess water quality over large areas. A spatial model, one that capitalizes on the topologic relationships among spatial entities, is described for aggregating pollution sources from upstream drainage areas; it can be implemented on land surfaces having heterogeneous water-pollution effects. An infrastructure of stream networks and drainage basins, derived from 1:250,000-scale digital elevation models, defines the hydrologic system in this spatial model. The spatial relationships between point- and nonpoint-pollution sources and measurement locations are referenced to the hydrologic infrastructure with the aid of a geographic information system. A maximum-branching algorithm has been developed to simulate the effect of distance from a pollutant source to an arbitrary downstream location, a function traditionally employed in deterministic water quality models.
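
    The aggregation step described here, accumulating upstream pollutant loads down a stream network with a distance-decay function, can be sketched as a traversal of a reach graph. The toy network, loads, and first-order decay constant below are hypothetical; the paper's maximum-branching algorithm and GIS infrastructure are not reproduced.

        import math

        # toy reach network: reach id -> downstream reach id (None = basin outlet)
        downstream = {1: 3, 2: 3, 3: 5, 4: 5, 5: None}
        local_load = {1: 10.0, 2: 4.0, 3: 2.0, 4: 8.0, 5: 1.0}   # kg/day entering each reach
        reach_len = {1: 5.0, 2: 7.0, 3: 4.0, 4: 6.0, 5: 3.0}     # km
        k = 0.05                                                  # decay per km (assumed)

        def load_delivered(reach):
            """Load from `reach` surviving first-order decay down to the outlet."""
            load, node = local_load[reach], reach
            while node is not None:
                load *= math.exp(-k * reach_len[node])
                node = downstream[node]
            return load

        total = sum(load_delivered(r) for r in local_load)
        print(f"aggregated load at outlet: {total:.2f} kg/day")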

  7. Temperature Effects of Point Sources, Riparian Shading, and Dam Operations on the Willamette River, Oregon

    USGS Publications Warehouse

    Rounds, Stewart A.

    2007-01-01

    Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate to within about 0.005 degrees Celsius (°C). In addition to assessing the effects of point-source heat trades, the models were used to evaluate the temperature effects of several shade-restoration scenarios. Restoration of riparian shade along the entire Long Tom River, from its mouth to Fern Ridge Dam, was calculated to have a small but significant effect on daily maximum temperatures in the main-stem Willamette River, on the order of 0.03°C where the Long Tom River enters the Willamette River, and diminishing downstream. Model scenarios also were run to assess the effects of restoring selected 5-mile reaches of riparian vegetation along the main-stem Willamette River from river mile (RM) 176.80, just upstream of the point where the McKenzie River joins the Willamette River, to RM 116.87 near Albany, which is one location where cumulative point-source heating effects are at a maximum. Restoration of riparian vegetation along the main-stem Willamette River was shown by model runs to have a significant local effect on daily maximum river temperatures (0.046 to 0.194°C) at the site of restoration. The magnitude of the cooling depends on many factors including river width, flow, time of year, and the difference in vegetation characteristics between current and restored conditions. Downstream of the restored reach, the cooling effects are complex and have a nodal nature: at one-half day of travel time downstream, shade restoration has little effect on daily maximum temperature because water passes the restoration site at night; at 1 full day of travel time downstream, cooling effects increase to a second, diminished maximum. Such spatial complexities may complicate the trading of heat allocations between point and nonpoint sources. Upstream dams have an important effect on water temperature in the Willamette River system as a result of augmented flows as well as modified temperature releases over the course of the summer and autumn. The TMDL was formulated prior to...

  8. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of points clouds obtained by a lidar scanner, structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we will describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically Point Data Abstraction Library (PDAL), Point Cloud Library (PCL), and OpenKinect libfreenect2 library to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long term maintenance and reproducibility by the scientific community but also by the original authors themselves.

  9. A Workflow to Model Microbial Loadings in Watersheds ...

    EPA Pesticide Factsheets

    Many watershed models simulate overland and instream microbial fate and transport, but few actually provide loading rates on land surfaces and point sources to the water body network. This paper describes the underlying general equations for microbial loading rates associated with 1) land-applied manure on undeveloped areas from domestic animals; 2) direct shedding on undeveloped lands by domestic animals and wildlife; 3) urban or engineered areas; and 4) point sources that directly discharge to streams from septic systems and shedding by domestic animals. A microbial source module, which houses these formulations, is linked within a workflow containing eight models and a set of databases that form a loosely configured modeling infrastructure which supports watershed-scale microbial source-to-receptor modeling by focusing on animal-impacted catchments. A hypothetical example application – accessing, retrieving, and using real-world data – demonstrates the ability of the infrastructure to automate many of the manual steps associated with a standard watershed assessment, culminating with calibrated flow and microbial densities at the pour point of a watershed. Presented at 2016 Biennial Conference, International Environmental Modelling & Software Society.

  10. Powerful model for the point source sky: Far-ultraviolet and enhanced midinfrared performance

    NASA Technical Reports Server (NTRS)

    Cohen, Martin

    1994-01-01

    I report further developments of the Wainscoat et al. (1992) model originally created for the point source infrared sky. The already detailed and realistic representation of the Galaxy (disk, spiral arms and local spur, molecular ring, bulge, spheroid) has been improved, guided by CO surveys of local molecular clouds, and by the inclusion of a component to represent Gould's Belt. The newest version of the model is very well validated by Infrared Astronomy Satellite (IRAS) source counts. A major new aspect is the extension of the same model down to the far ultraviolet. I compare predicted and observed far-ultraviolet source counts from the Apollo 16 'S201' experiment (1400 A) and the TD1 satellite (for the 1565 A band).

  11. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
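
    The core idea, replacing a body by equivalent point sources at Gauss-Legendre quadrature nodes and summing their point-mass effects, can be sketched in flat-earth form for the vertical gravity of a rectangular prism. The spherical geometry, interpolated integration limits, and variable physical properties of the actual method are omitted, and the body below is hypothetical.

        import numpy as np

        G = 6.674e-11                        # gravitational constant, SI units

        def gz_prism_gl(rho, xlim, ylim, zlim, obs, n=8):
            """Vertical gravity of a prism via Gauss-Legendre point-mass quadrature (z down)."""
            t, w = np.polynomial.legendre.leggauss(n)
            def nodes(a, b):                 # map nodes/weights from [-1, 1] to [a, b]
                return 0.5 * (b - a) * t + 0.5 * (a + b), 0.5 * (b - a) * w
            x, wx = nodes(*xlim); y, wy = nodes(*ylim); z, wz = nodes(*zlim)
            X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
            W = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]
            dx, dy, dz = X - obs[0], Y - obs[1], Z - obs[2]
            r = np.sqrt(dx**2 + dy**2 + dz**2)
            return G * rho * np.sum(W * dz / r**3)   # each node acts as a point mass

        # 300 kg/m^3 density contrast, 200 m cube, top at 100 m depth, observer at origin
        gz = gz_prism_gl(300.0, (-100, 100), (-100, 100), (100, 300), (0.0, 0.0, 0.0))
        print(f"gz = {gz * 1e5:.3f} mGal")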

  12. Modelling of point and diffuse pollution: application of the Moneris model in the Ipojuca river basin, Pernambuco State, Brazil.

    PubMed

    de Lima Barros, Alessandra Maciel; do Carmo Sobral, Maria; Gunkel, Günter

    2013-01-01

    Emissions of pollutants and nutrients are causing several problems in aquatic ecosystems, and in general an excess of nutrients, specifically nitrogen and phosphorus, is responsible for the eutrophication process in water bodies. In most developed countries, more attention is given to diffuse pollution because problems with point pollution have already been solved. In many non-developed countries basic data for point and diffuse pollution are not available. The focus of the presented studies is to quantify nutrient emissions from point and diffuse sources in the Ipojuca river basin, Pernambuco State, Brazil, using the Moneris model (Modelling Nutrient Emissions in River Systems). This model has been developed in Germany and has already been implemented in more than 600 river basins. The model is mainly based on river flow, water quality and geographical information system data. According to the Moneris model results, untreated domestic sewage is the major source of nutrients in the Ipojuca river basin. The Moneris model has shown itself to be a useful tool that allows the identification and quantification of point and diffuse nutrient sources, thus enabling the adoption of measures to reduce them. The Moneris model, applied here for the first time to a tropical river basin with intermittent flow, can be used as a reference for implementation in other watersheds.

  13. Numerical modeling of a point-source image under relative motion of radiation receiver and atmosphere

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.

    1994-02-01

    A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.

  14. Development and application of a reactive plume-in-grid model: evaluation over Greater Paris

    NASA Astrophysics Data System (ADS)

    Korsakissok, I.; Mallet, V.

    2010-09-01

    Emissions from major point sources are poorly represented by classical Eulerian models. Overestimation of horizontal plume dilution, poor representation of vertical diffusion, and incorrect estimates of chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region, during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance by comparison to the observations on measurement stations is assessed. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Ozone is mostly sensitive to the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.
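
    The local-scale half of a plume-in-grid scheme is the Gaussian puff: each puff of mass Q contributes C(x) = Q / ((2π)^(3/2) σx σy σz) · exp(−Δx²/2σx² − Δy²/2σy² − Δz²/2σz²) about its center. A single-puff sketch with assumed dispersion parameters (the Polyphemus implementation additionally handles puff growth, chemistry, and transfer back to the Eulerian grid):

        import numpy as np

        def puff_concentration(q, center, sigma, point):
            """Concentration from one Gaussian puff of mass q (no ground reflection)."""
            sx, sy, sz = sigma
            dx, dy, dz = np.asarray(point) - np.asarray(center)
            norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
            return norm * np.exp(-0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2 + (dz / sz) ** 2))

        q = 50.0                              # pollutant mass in the puff, g (illustrative)
        c = puff_concentration(q, center=(0.0, 0.0, 150.0),      # plume height, m (assumed)
                               sigma=(60.0, 60.0, 25.0),         # dispersion lengths, m (assumed)
                               point=(100.0, 0.0, 150.0))
        print(f"concentration = {c * 1e6:.2f} ug/m^3")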

  15. A Workflow to Model Microbial Loadings in Watersheds ...

    EPA Pesticide Factsheets

    Many watershed models simulate overland and instream microbial fate and transport, but few actually provide loading rates on land surfaces and point sources to the water body network. This paper describes the underlying general equations for microbial loading rates associated with 1) land-applied manure on undeveloped areas from domestic animals; 2) direct shedding on undeveloped lands by domestic animals and wildlife; 3) urban or engineered areas; and 4) point sources that directly discharge to streams from septic systems and shedding by domestic animals. A microbial source module, which houses these formulations, is linked within a workflow containing eight models and a set of databases that form a loosely configured modeling infrastructure which supports watershed-scale microbial source-to-receptor modeling by focusing on animal-impacted catchments. A hypothetical example application – accessing, retrieving, and using real-world data – demonstrates the ability of the infrastructure to automate many of the manual steps associated with a standard watershed assessment, culminating with calibrated flow and microbial densities at the pour point of a watershed. In the Proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Modelling and Software, Toulouse, France

  16. VizieR Online Data Catalog: First Fermi-LAT Inner Galaxy point source catalog (Ajello+, 2016)

    NASA Astrophysics Data System (ADS)

    Ajello, M.; Albert, A.; Atwood, W. B.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Brandt, T. J.; Bregeon, J.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caputo, R.; Caragiulo, M.; Caraveo, P. A.; Cecchi, C.; Chekhtman, A.; Chiang, J.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Cominsky, L. R.; Conrad, J.; Cutini, S.; D'Ammando, F.; de Angelis, A.; de Palma, F.; Desiante, R.; di Venere, L.; Drell, P. S.; Favuzzi, C.; Ferrara, E. C.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giommi, P.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Gomez-Vargas, G. A.; Grenier, I. A.; Guiriec, S.; Gustafsson, M.; Harding, A. K.; Hewitt, J. W.; Hill, A. B.; Horan, D.; Jogler, T.; Johannesson, G.; Johnson, A. S.; Kamae, T.; Karwin, C.; Knodlseder, J.; Kuss, M.; Larsson, S.; Latronico, L.; Li, J.; Li, L.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mayer, M.; Mazziotta, M. N.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nuss, E.; Ohno, M.; Ohsugi, T.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Pesce-Rollins, M.; Piron, F.; Pivato, G.; Porter, T. A.; Raino, S.; Rando, R.; Razzano, M.; Reimer, A.; Reimer, O.; Ritz, S.; Sanchez-Conde, M.; Parkinson, P. M. S.; Sgro, C.; Siskind, E. J.; Smith, D. A.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Takahashi, H.; Thayer, J. B.; Torres, D. F.; Tosti, G.; Troja, E.; Uchiyama, Y.; Vianello, G.; Winer, B. L.; Wood, K. S.; Zaharijas, G.; Zimmer, S.

    2018-01-01

    The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100GeV from a 15°x15° region about the direction of the GC. Specialized interstellar emission models (IEMs) are constructed to enable the separation of the γ-ray emissions produced by cosmic ray particles interacting with the interstellar gas and radiation fields in the Milky Way into that from the inner ~1kpc surrounding the GC, and that from the rest of the Galaxy. A catalog of point sources for the 15°x15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy Point Source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented, and compared with γ-ray point sources over the same region taken from existing catalogs. After subtracting the interstellar emission and point-source contributions a residual is found. If templates that peak toward the GC are used to model the positive residual the agreement with the data improves, but none of the additional templates tried account for all of its spatial structure. The spectrum of the positive residual modeled with these templates has a strong dependence on the choice of IEM. (2 data files).

  17. [Estimation of nonpoint source pollutant loads and optimization of the best management practices (BMPs) in the Zhangweinan River basin].

    PubMed

    Xu, Hua-Shan; Xu, Zong-Xue; Liu, Pin

    2013-03-01

    One of the key techniques in establishing and implementing a TMDL (total maximum daily load) is to use a hydrological model to quantify non-point source pollutant loads, establish BMP scenarios, and reduce non-point source pollutant loads. Non-point source pollutant loads in different year types (wet, normal and dry years) were estimated using the SWAT model in the Zhangweinan River basin, and the spatial distribution characteristics of non-point source pollutant loads were analyzed on the basis of the simulation results. During wet years, total nitrogen (TN) and total phosphorus (TP) accounted for 0.07% and 27.24% of the total non-point source pollutant loads, respectively. Spatially, agricultural and residential land with steep slopes are the regions that contribute more non-point source pollutant loads in the basin. Relative to the baseline period, 47 BMP scenarios were set up to simulate the reduction efficiency of different BMP scenarios for 5 kinds of pollutants (organic nitrogen, organic phosphorus, nitrate nitrogen, dissolved phosphorus and mineral phosphorus) in 8 priority-control subbasins. Constructing vegetation-type ditches was identified as the best measure to reduce TN and TP by comparing the cost-effectiveness of the different BMP scenarios; the costs of unit pollutant reduction are 16.11-151.28 yuan·kg^-1 for TN and 100-862.77 yuan·kg^-1 for TP, the most cost-effective among the 47 BMP scenarios. The results could provide a scientific basis and technical support for environmental protection and sustainable utilization of water resources in the Zhangweinan River basin.

  18. A SPATIO-TEMPORAL DOWNSCALER FOR OUTPUT FROM NUMERICAL MODELS

    EPA Science Inventory

    Often, in environmental data collection, data arise from two sources: numerical models and monitoring networks. The first source provides predictions at the level of grid cells, while the second source gives measurements at points. The first is characterized by full spatial cove...

  19. The MV model of the color glass condensate for a finite number of sources including Coulomb interactions

    DOE PAGES

    McLerran, Larry; Skokov, Vladimir V.

    2016-09-19

    We modify the McLerran–Venugopalan model to include only a finite number of sources of color charge. In the effective action for such a system of a finite number of sources, there is a point-like interaction and a Coulombic interaction. The point interaction generates the standard fluctuation term in the McLerran–Venugopalan model. The Coulomb interaction generates the charge screening originating from the well-known evolution in x. Such a model may be useful for computing angular harmonics of flow measured in high-energy hadron collisions for small systems. In this study we provide a basic formulation of the problem on a lattice.

  20. Monte Carlo simulation for light propagation in 3D tooth model

    NASA Astrophysics Data System (ADS)

    Fu, Yongji; Jacques, Steven L.

    2011-03-01

    Monte Carlo (MC) simulation was implemented in a three-dimensional tooth model to simulate light propagation in the tooth for antibiotic photodynamic therapy and other laser therapies. The goal of this research is to estimate the light energy deposition in the target region of the tooth given the light source information, tooth optical properties, and tooth structure. Two use cases were presented to demonstrate the practical application of this model: one compared the dosage distributions of an isotropic point source and a narrow beam, and the other compared different incident points for the same light source. This model will help clinicians design PDT treatments for the tooth.
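
    The core of such a simulation is a photon random walk: sample a free path from the total attenuation coefficient, deposit part of the photon weight at each interaction, and redraw a direction. A minimal sketch assuming isotropic scattering and placeholder optical coefficients (real tooth tissue is strongly forward-scattering, which a full model would handle with a phase function):

```python
import numpy as np

rng = np.random.default_rng(1)
mu_a, mu_s = 0.1, 10.0        # absorption/scattering coefficients (1/mm); placeholders
mu_t = mu_a + mu_s
albedo = mu_s / mu_t

def isotropic_direction():
    """Random unit vector, uniform over the sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def propagate(n_photons=1_000, max_steps=200):
    """Photons from an isotropic point source at the origin; returns a list of
    (position, absorbed_weight) energy deposition events."""
    deposits = []
    for _ in range(n_photons):
        pos, direction, weight = np.zeros(3), isotropic_direction(), 1.0
        for _ in range(max_steps):
            step = -np.log(1.0 - rng.random()) / mu_t    # sampled free path (mm)
            pos = pos + step * direction
            deposits.append((pos.copy(), weight * (1.0 - albedo)))  # absorbed part
            weight *= albedo
            if weight < 1e-4:                            # terminate faint photons
                break
            direction = isotropic_direction()            # isotropic scattering
    return deposits

events = propagate()
print(len(events), "deposition events")
```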

  1. Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center

    DOE PAGES

    Ajello, M.

    2016-02-26

    The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission towards the Galactic centre (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100 GeV from a 15° × 15° region about the direction of the GC, and implications for the interstellar emissions produced by cosmic ray (CR) particles interacting with the gas and radiation fields in the inner Galaxy and for the point sources detected. Specialised interstellar emission models (IEMs) are constructed that enable separation of the γ-ray emission from the inner ~1 kpc about the GC from the fore- and background emission from the Galaxy. Based on these models, the interstellar emission from CR electrons interacting with the interstellar radiation field via the inverse Compton (IC) process and CR nuclei inelastically scattering off the gas producing γ-rays via π⁰ decays from the inner ~1 kpc is determined. The IC contribution is found to be dominant in the region and strongly enhanced compared to previous studies. A catalog of point sources for the 15° × 15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy point source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented, and compared with γ-ray point sources over the same region taken from existing catalogs, including the Third Fermi-LAT Source Catalog (3FGL). In general, the spatial density of 1FIG sources differs from those in the 3FGL, which is attributed to the different treatments of the interstellar emission and energy ranges used by the respective analyses. Three 1FIG sources are found to spatially overlap with supernova remnants (SNRs) listed in Green's SNR catalog; these SNRs have not previously been associated with high-energy γ-ray sources. Most 3FGL sources with known multi-wavelength counterparts are also found. However, the majority of 1FIG point sources are unassociated. After subtracting the interstellar emission and point-source contributions from the data, a residual is found that is a sub-dominant fraction of the total flux. However, it is brighter than the γ-ray emission associated with interstellar gas in the inner ~1 kpc derived for the IEMs used in this paper, and comparable to the integrated brightness of the point sources in the region for energies ≳ 3 GeV. If spatial templates that peak toward the GC are used to model the positive residual and included in the total model for the 15° × 15° region, the agreement with the data improves, but they do not account for all the residual structure. The spectrum of the positive residual modelled with these templates has a strong dependence on the choice of IEM.

  2. STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu

    2011-09-10

    An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
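
    The analytic relation the record refers to connects the photon-count histogram to the source-count distribution dN/dS, and the connection is easy to see by forward simulation. A sketch with invented numbers: draw fluxes from a power law, scatter the sources over pixels, add a diffuse level, and histogram the Poisson counts.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_src = 10_000, 5_000

# Draw fluxes (expected counts) from dN/dS ~ S**-2.5 above s_min by inverse-CDF sampling.
alpha, s_min = 2.5, 0.1
u = 1.0 - rng.random(n_src)                   # uniform on (0, 1]
fluxes = s_min * u ** (1.0 / (1.0 - alpha))

# Scatter the sources uniformly over pixels and add an isotropic diffuse level.
flux_map = np.zeros(n_pix)
np.add.at(flux_map, rng.integers(0, n_pix, n_src), fluxes)
counts = rng.poisson(flux_map + 0.5)          # 0.5 diffuse counts per pixel

# The counts-in-pixels histogram is the statistic compared against the data.
print(np.bincount(counts)[:10])
```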

  3. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5% and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
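
    The inverse step described, fitting one equivalent dipole to electrode potentials generated by a belt of dipoles, can be sketched with a nonlinear least-squares fit. The infinite-medium dipole potential below is a simplification of the paper's bounded-sphere forward model, and all geometry and conductivity values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

sigma = 0.2  # medium conductivity (S/m); placeholder

def dipole_potential(params, electrodes):
    """Potential of a point dipole in an unbounded medium; params = (x, y, z, px, py, pz)."""
    r0, p = params[:3], params[3:]
    d = electrodes - r0
    dist = np.linalg.norm(d, axis=1)
    return d @ p / (4.0 * np.pi * sigma * dist**3)

# 32 electrodes on a unit sphere; a "belt" of five nearby dipoles plays the
# role of the distributed source.
rng = np.random.default_rng(3)
electrodes = rng.normal(size=(32, 3))
electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)
belt = [np.array([0.10 + 0.02 * k, 0.0, 0.0, 0.0, 0.0, 1.0]) for k in range(5)]
v_obs = sum(dipole_potential(b, electrodes) for b in belt)

# Fit a single equivalent dipole (location and moment) by least squares.
fit = least_squares(lambda q: dipole_potential(q, electrodes) - v_obs,
                    x0=np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0]))
print("equivalent dipole location:", fit.x[:3])
```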

  4. Analyzing Variability in Landscape Nutrient Loading Using Spatially-Explicit Maps in the Great Lakes Basin

    NASA Astrophysics Data System (ADS)

    Hamlin, Q. F.; Kendall, A. D.; Martin, S. L.; Whitenack, H. D.; Roush, J. A.; Hannah, B. A.; Hyndman, D. W.

    2017-12-01

    Excessive loading of nitrogen and phosphorus to the landscape has caused biologically and economically damaging eutrophication and harmful algal blooms in the Great Lakes Basin (GLB) and across the world. We mapped source-specific loads of nitrogen and phosphorus to the landscape using broadly available data across the GLB. SENSMap (Spatially Explicit Nutrient Source Map) is a 30 m resolution snapshot of nutrient loads ca. 2010. We use these maps to study variable nutrient loading and provide this information to watershed managers through NOAA's GLB Tipping Points Planner. SENSMap individually maps nutrient point sources and six non-point sources: 1) atmospheric deposition, 2) septic tanks, 3) non-agricultural chemical fertilizer, 4) agricultural chemical fertilizer, 5) manure, and 6) nitrogen fixation from legumes. To model source-specific loads at high resolution, SENSMap synthesizes a wide range of remotely sensed, surveyed, and tabular data. Using these spatially explicit nutrient loading maps, we can better calibrate local land use-based water quality models and provide insight to watershed managers on how to focus nutrient reduction strategies. Here we examine differences in dominant nutrient sources across the GLB, and how those sources vary by land use. SENSMap's high-resolution, source-specific approach offers a different lens to understand nutrient loading than traditional semi-distributed or land-use-based models.

  5. Application of a water quality model in the White Cart water catchment, Glasgow, UK.

    PubMed

    Liu, S; Tucker, P; Mansell, M; Hursthouse, A

    2003-03-01

    Water quality models of urban systems have previously focused on point source (sewerage system) inputs. Little attention has been given to diffuse inputs, and research into diffuse pollution has been largely confined to agricultural sources. This paper reports on new research that is aimed at integrating diffuse inputs into an urban water quality model. An integrated model is introduced that is made up of four modules: hydrology, contaminant point sources, nutrient cycling and leaching. The hydrology module, T&T, consists of a TOPMODEL (a TOPography-based hydrological MODEL), which simulates runoff from pervious areas, and a two-tank model, which simulates runoff from impervious urban areas. Linked into the two-tank model, the contaminant point source module simulates the overflow from the sewerage system in heavy rain. The widely known SOILN (SOIL Nitrate model) is the basis of the nitrogen cycle module. Finally, the leaching module consists of two functions: the production function and the transfer function. The production function is based on SLIM (Solute Leaching Intermediate Model) while the transfer function is based on the 'flushing hypothesis', which postulates a relationship between contaminant concentrations in the receiving water course and the extent to which the catchment is saturated. This paper outlines the modelling methodology and the model structures that have been developed. An application of this model in the White Cart catchment (Glasgow) is also included.

  6. Estimating rupture distances without a rupture

    USGS Publications Warehouse

    Thompson, Eric M.; Worden, Charles

    2017-01-01

    Most ground motion prediction equations (GMPEs) require distances that are defined relative to a rupture model, such as the distance to the surface projection of the rupture (RJB) or the closest distance to the rupture plane (RRUP). There are a number of situations in which GMPEs are used where it is either necessary or advantageous to derive rupture distances from point-source distance metrics, such as hypocentral (RHYP) or epicentral (REPI) distance. For ShakeMap, it is necessary to provide an estimate of the shaking levels for events without rupture models, and before rupture models are available for events that eventually do have rupture models. In probabilistic seismic hazard analysis, it is often convenient to use point-source distances for gridded seismicity sources, particularly if a preferred orientation is unknown. This avoids the computationally cumbersome task of computing rupture-based distances for virtual rupture planes across all strikes and dips for each source. We derive average rupture distances conditioned on REPI, magnitude, and (optionally) back azimuth, for a variety of assumed seismological constraints. Additionally, we derive adjustment factors for GMPE standard deviations that reflect the added uncertainty in the ground motion estimation when point-source distances are used to estimate rupture distances.
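
    The conditional averages described can be approximated by simulating virtual ruptures: draw a rupture length from a magnitude-scaling relation, place randomly oriented ruptures through the epicenter, and average the closest distances to the site. A sketch using a Wells-and-Coppersmith-style length relation as an illustrative stand-in for the paper's seismological constraints:

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_repi_to_rjb(r_epi_km, magnitude, n_sim=20_000):
    """Average closest distance to the surface projection of a rupture, given
    epicentral distance and magnitude, for random strikes and along-strike
    hypocenter positions (vertical strike-slip geometry for simplicity)."""
    # Subsurface rupture length from a Wells & Coppersmith-style relation,
    # used here purely as an illustrative stand-in.
    length = 10.0 ** (-2.44 + 0.59 * magnitude)
    strike = rng.uniform(0.0, 2.0 * np.pi, n_sim)
    f = rng.uniform(0.0, 1.0, n_sim)          # hypocenter position along strike
    along = np.stack([np.cos(strike), np.sin(strike)], axis=1)
    end1 = -f[:, None] * length * along       # rupture end behind the epicenter
    end2 = (1.0 - f)[:, None] * length * along
    site = np.array([r_epi_km, 0.0])          # epicenter at the origin
    seg = end2 - end1
    t = np.clip(((site - end1) * seg).sum(1) / (seg**2).sum(1), 0.0, 1.0)
    nearest = end1 + t[:, None] * seg
    return np.linalg.norm(site - nearest, axis=1).mean()

print(mean_repi_to_rjb(r_epi_km=50.0, magnitude=7.0))
```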

  7. Analyzing γ rays of the Galactic Center with deep learning

    NASA Astrophysics Data System (ADS)

    Caron, Sascha; Gómez-Vargas, Germán A.; Hendriks, Luc; Ruiz de Austri, Roberto

    2018-05-01

    We present the application of convolutional neural networks to a particular problem in gamma ray astronomy. Explicitly, we use this method to investigate the origin of an excess emission of GeV γ rays in the direction of the Galactic Center, reported by several groups by analyzing Fermi-LAT data. Interpretations of this excess include γ rays created by the annihilation of dark matter particles and γ rays originating from a collection of unresolved point sources, such as millisecond pulsars. We train and test convolutional neural networks with simulated Fermi-LAT images based on point and diffuse emission models of the Galactic Center tuned to measured γ ray data. Our new method allows precise measurements of the contribution and properties of an unresolved population of γ ray point sources in the interstellar diffuse emission model. The current model predicts the fraction of unresolved point sources with an error of up to 10% and this is expected to decrease with future work.
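
    A network of the general kind described, a convolutional feature extractor feeding a regression head that outputs a point-source fraction, might look like the following PyTorch sketch. The architecture, map size, and training data here are placeholders, not the network used in the paper.

```python
import torch
import torch.nn as nn

class FractionNet(nn.Module):
    """Regress an unresolved point-source fraction from a single-channel sky map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),   # fraction constrained to [0, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))

# One hypothetical training step on simulated 64x64 maps.
model = FractionNet()
maps = torch.rand(8, 1, 64, 64)       # stand-ins for simulated Fermi-LAT images
target = torch.rand(8, 1)             # stand-in point-source fractions
loss = nn.functional.mse_loss(model(maps), target)
loss.backward()
print(float(loss))
```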

  8. Impacts of the Detection of Cassiopeia A Point Source.

    PubMed

    Umeda; Nomoto; Tsuruta; Mineshige

    2000-05-10

    Very recently the Chandra first light observation discovered a point-like source in the Cassiopeia A supernova remnant. This detection was subsequently confirmed by the analyses of the archival data from both ROSAT and Einstein observations. Here we compare the results from these observations with the scenarios involving both black holes (BHs) and neutron stars (NSs). If this point source is a BH, we offer as a promising model a disk-corona type model with a low accretion rate in which a soft photon source at approximately 0.1 keV is Comptonized by higher energy electrons in the corona. If it is an NS, the dominant radiation observed by Chandra most likely originates from smaller, hotter regions of the stellar surface, but we argue that it is still worthwhile to compare the cooler component from the rest of the surface with cooling theories. We emphasize that the detection of this point source itself should potentially provide enormous impacts on the theories of supernova explosion, progenitor scenario, compact remnant formation, accretion to compact objects, and NS thermal evolution.

  9. Differentiating Impacts of Watershed Development from Superfund Sites on Stream Macroinvertebrates

    EPA Science Inventory

    Urbanization effect models were developed and verified at whole watershed scales to predict and differentiate between effects on aquatic life from diffuse, non-point source (NPS) urbanization in the watershed and effects of known local, site-specific origin point sources, contami...

  10. Better Assessment Science Integrating Point and Nonpoint Sources

    EPA Science Inventory

    Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is not a model per se, but is a multipurpose environmental decision support system for use by regional, state, and local agencies in performing watershed- and water-quality-based studies. BASI...

  11. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors’ method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors’ method ranks 24 of 39. According to the index of the maximum shear stretch, the authors’ method is also efficient to describe the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors’ method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.

  12. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm.

    PubMed

    Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran

    2015-10-01

    Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. The performances of the authors' method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24 of 39. According to the index of the maximum shear stretch, the authors' method is also efficient to describe the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.

  13. An evaluation of catchment-scale phosphorus mitigation using load apportionment modelling.

    PubMed

    Greene, S; Taylor, D; McElarney, Y R; Foy, R H; Jordan, P

    2011-05-01

    Functional relationships between phosphorus (P) discharge and concentration mechanisms were explored using a load apportionment model (LAM) developed for use in a freshwater catchment in Ireland with fourteen years of data (1995-2008). The aim of model conceptualisation was to infer changes in point and diffuse sources from catchment P loading during P mitigation, based upon a dataset comprising geospatial and water quality data from a 256 km² lake catchment in an intensively farmed drumlin region of the midlands of Ireland. The model was calibrated using river total P (TP), molybdate reactive P (MRP) and runoff data from seven subcatchments. Temporal and spatial heterogeneity of P sources existed within and between subcatchments; these were attributed to differences in agricultural intensity, soil type and anthropogenically-sourced effluent P loading. Catchment rivers were sensitive to flow regime, which can result in eutrophication of rivers during summer and lake enrichment from frequent flood events. For one sewage-impacted river, the LAM estimated that point-sourced P contributed up to 90% of the annual MRP load delivered during a hydrological year, and in this river point P sources dominated flows on up to 92% of days. In the other rivers, despite diffuse P forming a majority of the annual P exports, point sources of P dominated flows for up to 64% of a hydrological year. The calibrated model demonstrated that lower P export rates followed specific P mitigation measures. The LAM estimated up to 80% decreases in point MRP load after enhanced P removal at wastewater treatment plants in urban subcatchments and the implementation of septic tank and agricultural bye-laws in rural subcatchments. The LAM approach provides a way to assess the long-term effectiveness of further measures to reduce P loadings in EU (International) River Basin Districts and subcatchments. Copyright © 2011 Elsevier B.V. All rights reserved.
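
    Load apportionment models of this type commonly express load as a flow-independent (point) power term plus a strongly flow-dependent (diffuse) power term and fit the coefficients to paired load-flow observations. A sketch of that generic two-term form on synthetic data; the exponent bounds encode the point/diffuse distinction, and none of the numbers are from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def lam_load(q, a, b, c, d):
    """Two-term load apportionment: point term a*q**b (b < 1, diluted by flow)
    plus diffuse term c*q**d (d > 1, growing faster than flow)."""
    return a * q**b + c * q**d

# Synthetic daily flows (m3/s) and loads (kg/day) standing in for monitoring data.
rng = np.random.default_rng(5)
q = rng.lognormal(mean=0.0, sigma=0.8, size=365)
load = lam_load(q, 5.0, 0.1, 2.0, 1.6) * rng.lognormal(0.0, 0.1, size=q.size)

p, _ = curve_fit(lam_load, q, load, p0=[1.0, 0.5, 1.0, 1.5],
                 bounds=([0.0, 0.0, 0.0, 1.0], [np.inf, 1.0, np.inf, 3.0]))
point_share = (p[0] * q**p[1]).sum() / lam_load(q, *p).sum()
print(f"point-source share of annual load: {point_share:.0%}")
```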

  14. Development and application of a reactive plume-in-grid model: evaluation over Greater Paris

    NASA Astrophysics Data System (ADS)

    Korsakissok, I.; Mallet, V.

    2010-02-01

    Emissions from major point sources are poorly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a poor representation of vertical diffusion, and an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region, during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance against observations at measurement stations is assessed. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment, with a decrease in RMSE of up to about 17% for SO2 and 7% for NO at measurement stations. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Reactive species are mostly sensitive to the local-scale parameters, such as the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.
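
    The local-scale half of a plume-in-grid scheme is a Gaussian puff model: each puff carries a fixed mass, is advected by the wind, and spreads as its dispersion parameters grow with age. A minimal sketch with a constant wind, linear sigma growth, and ground reflection by an image source; all coefficients are placeholders rather than Polyphemus values.

```python
import numpy as np

def puff_concentration(x, y, z, puffs, q=1.0, h=50.0):
    """Sum of Gaussian puffs; each puff is (center_x, center_y, age_s).
    q is the mass per puff, h the effective stack height."""
    c = 0.0
    for cx, cy, age in puffs:
        sh = 1.0 + 0.30 * age          # horizontal sigma (m); placeholder growth law
        sz = 1.0 + 0.15 * age          # vertical sigma (m); placeholder growth law
        horiz = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sh**2))
        # an image source below ground enforces total reflection at z = 0
        vert = (np.exp(-(z - h) ** 2 / (2.0 * sz**2)) +
                np.exp(-(z + h) ** 2 / (2.0 * sz**2)))
        c += q * horiz * vert / ((2.0 * np.pi) ** 1.5 * sh**2 * sz)
    return c

# Release one puff per minute from the stack and advect all puffs with the wind.
u, dt = 5.0, 60.0                      # wind speed (m/s), time step (s)
puffs = []
for _ in range(30):
    puffs = [(cx + u * dt, cy, age + dt) for cx, cy, age in puffs]
    puffs.append((0.0, 0.0, 0.0))      # fresh puff at the stack (origin)

print(puff_concentration(5000.0, 0.0, 0.0, puffs))   # ground-level concentration
```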

  15. Water quality modeling using geographic information system (GIS) data

    NASA Technical Reports Server (NTRS)

    Engel, Bernard A

    1992-01-01

    Protection of the environment and natural resources at the Kennedy Space Center (KSC) is of great concern. The potential for surface and ground water quality problems resulting from non-point sources of pollution was examined using models. Since spatial variation of parameters required was important, geographic information systems (GIS) and their data were used. The potential for groundwater contamination was examined using the SEEPAGE (System for Early Evaluation of the Pollution Potential of Agricultural Groundwater Environments) model. A watershed near the VAB was selected to examine potential for surface water pollution and erosion using the AGNPS (Agricultural Non-Point Source Pollution) model.

  16. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts

    USGS Publications Warehouse

    Dorazio, Robert M.; Martin, Julien; Edwards, Holly H.

    2013-01-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.

  17. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts.

    PubMed

    Dorazio, Robert M; Martin, Julien; Edwards, Holly H

    2013-07-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
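
    The likelihood structure described, a hurdle (zero-inflated, zero-truncated Poisson) abundance model combined with beta-binomial detection, can be written down directly by marginalizing over the latent abundance. A sketch for a single site, with parameter names of my own choosing and the infinite sum truncated:

```python
import numpy as np
from scipy.stats import betabinom, poisson

def site_likelihood(y, psi, lam, a, b, n_max=200):
    """Marginal likelihood of repeated counts y at one site under a hurdle-Poisson
    abundance model (occupancy psi, mean lam) with beta-binomial detection
    (shape parameters a, b); the latent-abundance sum is truncated at n_max."""
    y = np.asarray(y)
    like = 0.0 if y.max() > 0 else (1.0 - psi)         # N = 0 (unoccupied) term
    trunc = 1.0 - poisson.pmf(0, lam)                  # zero-truncation constant
    for n in range(max(1, y.max()), n_max):
        prior = psi * poisson.pmf(n, lam) / trunc      # zero-truncated Poisson
        like += prior * np.prod(betabinom.pmf(y, n, a, b))
    return like

# Three repeat counts at one site.
print(site_likelihood([2, 0, 3], psi=0.6, lam=5.0, a=2.0, b=4.0))
```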

  18. Modeling unsteady sound refraction by coherent structures in a high-speed jet

    NASA Astrophysics Data System (ADS)

    Kan, Pinqing; Lewalle, Jacques

    2011-11-01

    We construct a visual model for the unsteady refraction of sound waves from point sources in a Ma = 0.6 jet. The mass and inviscid momentum equations give an equation governing acoustic fluctuations, including anisotropic propagation, attenuation and sources; differences with Lighthill's equation will be discussed. On this basis, the theory of characteristics gives canonical equations for the acoustic paths from any source into the far field. We model a steady mean flow in the near-jet region including the potential core and the mixing region downstream of its collapse, and model the convection of coherent structures as traveling wave perturbations of this mean flow. For a regular distribution of point sources in this region, we present a visual rendition of fluctuating distortion, lensing and deaf spots from the viewpoint of a far-field observer. Supported in part by AFOSR Grant FA-9550-10-1-0536 and by a Syracuse University Graduate Fellowship.

  19. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. Identification here refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise-free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieved source locations are relatively large for the two-release trials compared with the three- and four-release trials.
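
    Because the measured concentrations are linear in the unknown release strengths, retrieval reduces to scanning candidate source locations and solving a small least-squares problem for the strengths at each candidate set. A sketch for two releases, with a simplified isotropic kernel standing in for the adjoint Gaussian dispersion model and all geometry and rates invented:

```python
import numpy as np
from itertools import combinations

def kernel(receptors, src, sigma=20.0):
    """Simplified isotropic Gaussian kernel standing in for the dispersion model."""
    d2 = ((receptors - src) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(6)
receptors = rng.uniform(0.0, 200.0, size=(40, 2))      # monitor coordinates (m)
true_sources = np.array([[60.0, 80.0], [140.0, 40.0]])
true_q = np.array([3.0, 1.5])                          # release rates (arbitrary units)
obs = sum(q * kernel(receptors, s) for q, s in zip(true_q, true_sources))

# Scan all pairs of grid candidates; strengths follow by linear least squares.
grid = [np.array([x, y]) for x in range(0, 201, 20) for y in range(0, 201, 20)]
best = None
for i, j in combinations(range(len(grid)), 2):
    A = np.column_stack([kernel(receptors, grid[i]), kernel(receptors, grid[j])])
    q, *_ = np.linalg.lstsq(A, obs, rcond=None)
    misfit = np.linalg.norm(A @ q - obs)
    if (q >= 0).all() and (best is None or misfit < best[0]):
        best = (misfit, grid[i], grid[j], q)

print("locations:", best[1], best[2], "strengths:", best[3])
```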

  20. The Development and Application of Spatiotemporal Metrics for the Characterization of Point Source FFCO2 Emissions and Dispersion

    NASA Astrophysics Data System (ADS)

    Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.

    2017-12-01

    There is an increasing role for high-resolution CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote-sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications for their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics, beyond just their geographic location. Physical parameters of the emission sources such as number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences, can be important in characterizing emissions. Emissions from large point sources can behave very differently from emissions from areal sources such as automobiles. For many applications geographic location is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing and help to identify aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.

  1. Non-domestic phosphorus release in rivers during low-flow: Mechanisms and implications for sources identification

    NASA Astrophysics Data System (ADS)

    Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael

    2018-05-01

    A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow independent point source emissions (mainly from domestic and industrial origins) and flow dependent diffuse source emissions (mainly from agricultural origin). Hence, rivers dominated by point sources will exhibit highest P concentration during low-flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit highest P concentration during high-flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how point sources contribution may have been overestimated in previous studies, because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data of SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns and opposite to that of NO3. We hypothesise that Fe oxyhydroxide reductive dissolution might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer, controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.

  2. The importance of source configuration in quantifying footprints of regional atmospheric sulphur deposition.

    PubMed

    Vieno, M; Dore, A J; Bealey, W J; Stevenson, D S; Sutton, M A

    2010-01-15

    An atmospheric transport-chemistry model is applied to investigate the effects of source configuration in simulating regional sulphur deposition footprints from elevated point sources. Dry and wet depositions of sulphur are calculated for each of the 69 largest point sources in the UK. Deposition contributions for each point source are calculated for 2003, as well as for a 2010 emissions scenario. The 2010 emissions scenario has been chosen to simulate the Gothenburg protocol emission scenario. Point source location is found to be a major driver of the dry/wet deposition ratio for each deposition footprint, with increased precipitation scavenging of SOx in hill areas resulting in a larger fraction of the emitted sulphur being deposited within the UK for sources located near these areas. This reduces exported transboundary pollution, but, associated with the occurrence of sensitive soils in hill areas, increases the domestic threat of soil acidification. The simulation of plume rise using individual stack parameters for each point source demonstrates a high sensitivity of SO2 surface concentration to effective source height. This emphasises the importance of using site-specific information for each major stack, which is rarely included in regional atmospheric pollution models, due to the difficulty in obtaining the required input data. The simulations quantify how the fraction of emitted SOx exported from the UK increases with source magnitude, effective source height and easterly location. The modelled reduction in SOx emissions between 2003 and 2010 resulted in a smaller fraction being exported, with the result that the reductions in SOx deposition to the UK are less than proportionate to the emission reduction. This non-linearity is associated with a relatively larger fraction of the SO2 being converted to sulphate aerosol for the 2010 scenario, in the presence of ammonia. The effect results in less-than-proportional UK benefits from reducing SO2 emissions, together with greater-than-proportional benefits in reducing the export of UK SO2 emissions. Copyright 2009 Elsevier B.V. All rights reserved.

  3. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  4. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
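
    The model-selection step is a standard AIC comparison: penalize each inversion's misfit by its parameter count and keep the model with the lower score. A schematic sketch assuming Gaussian residuals, so AIC can be computed from the residual sum of squares; the parameter counts and misfits below are hypothetical.

```python
import numpy as np

def aic_least_squares(rss, n_data, k_params):
    """AIC for a least-squares fit with unknown Gaussian noise variance."""
    return n_data * np.log(rss / n_data) + 2 * k_params

# Hypothetical misfits from single- and double-point-source W-phase inversions.
n = 500                                          # waveform samples in the inversion
aic_single = aic_least_squares(rss=12.0, n_data=n, k_params=6)
aic_double = aic_least_squares(rss=10.5, n_data=n, k_params=12)

chosen = "double" if aic_double < aic_single else "single"
print(f"AIC single = {aic_single:.1f}, double = {aic_double:.1f} -> {chosen} source")
```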

  5. Aeroacoustic catastrophes: upstream cusp beaming in Lilley's equation.

    PubMed

    Stone, J T; Self, R H; Howls, C J

    2017-05-01

    The downstream propagation of high-frequency acoustic waves from a point source in a subsonic jet obeying Lilley's equation is well known to be organized around the so-called 'cone of silence', a fold catastrophe across which the amplitude may be modelled uniformly using Airy functions. Here we show that acoustic waves not only unexpectedly propagate upstream, but also are organized at constant distance from the point source around a cusp catastrophe with amplitude modelled locally by the Pearcey function. Furthermore, the cone of silence is revealed to be a cross-section of a swallowtail catastrophe. One consequence of these discoveries is that the peak acoustic field upstream is not only structurally stable but also at a similar level to the known downstream field. The fine structure of the upstream cusp is blurred out by distributions of symmetric acoustic sources, but peak upstream acoustic beaming persists when asymmetries are introduced, from either arrays of discrete point sources or perturbed continuum ring source distributions. These results may pose interesting questions for future novel jet-aircraft engine designs where asymmetric source distributions arise.

  6. Structure of the X-ray source in the Virgo cluster of galaxies

    NASA Technical Reports Server (NTRS)

    Gorenstein, P.; Fabricant, D.; Topka, K.; Tucker, W.; Harnden, F. R., Jr.

    1977-01-01

    High-angular-resolution observations in the 0.15-1.5-keV band with an imaging X-ray telescope show the extended X-ray source in the Virgo cluster of galaxies to be a diffuse halo of about 15 arcmin core radius surrounding M87. The angular structure of the surface brightness is marginally consistent with either of two simple models: (1) an isothermal (or adiabatic or hydrostatic) sphere plus a point source at M87 accounting for 12% of the total 0.5-1.5-keV intensity or (2) a power-law function without a discrete point source. No evidence for a point source is seen in the 0.15-0.28-keV band, which is consistent with self-absorption by about 10²¹ cm⁻² of matter having a cosmic abundance. The power-law models are motivated by the idea that radiation losses regulate the accretion of matter onto M87 and can account for the observed difference in the size of the X-ray source as seen in the present measurements and at higher energies.

  7. NONPOINT SOURCE MODEL CALIBRATION IN HONEY CREEK WATERSHED

    EPA Science Inventory

    The U.S. EPA Non-Point Source Model has been applied and calibrated to a fairly large (187 sq. mi.) agricultural watershed in the Lake Erie Drainage basin of north central Ohio. Hydrologic and chemical routing algorithms have been developed. The model is evaluated for suitability...

  8. Export of microplastics from land to sea. A modelling approach.

    PubMed

    Siegfried, Max; Koelmans, Albert A; Besseling, Ellen; Kroeze, Carolien

    2017-12-15

    Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea. The model accounts for different types and sources of microplastics entering river systems via point sources. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Sources of microplastics include personal care products, laundry, household dust and tyre and road wear particles (TRWP). Most of the modelled microplastics exported by rivers to seas are synthetic polymers from TRWP (42%) and plastic-based textiles abraded during laundry (29%). Smaller sources are synthetic polymers and plastic fibres in household dust (19%) and microbeads in personal care products (10%). Microplastic export differs largely among European rivers, as a result of differences in socio-economic development and technological status of sewage treatment facilities. About two-thirds of the microplastics modelled in this study flow into the Mediterranean and Black Sea. This can be explained by the relatively low microplastic removal efficiency of sewage treatment plants in the river basins draining into these two seas. Sewage treatment is generally more efficient in river basins draining into the North Sea, the Baltic Sea and the Atlantic Ocean. We use our model to explore future trends up to the year 2050. Our scenarios indicate that in the future river export of microplastics may increase in some river basins, but decrease in others. Remarkably, for many basins we calculate a reduction in river export of microplastics from point-sources, mainly due to an anticipated improvement in sewage treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Point sources from dissipative dark matter

    NASA Astrophysics Data System (ADS)

    Agrawal, Prateek; Randall, Lisa

    2017-12-01

    If a component of dark matter has dissipative interactions, it can cool to form compact astrophysical objects with higher density than that of conventional cold dark matter (sub)haloes. Dark matter annihilations might then appear as point sources, leading to novel morphology for indirect detection. We explore dissipative models where interaction with the Standard Model might provide visible signals, and show how such objects might give rise to the observed excess in gamma rays arising from the galactic center.

  10. Geometric Characterization of Multi-Axis Multi-Pinhole SPECT

    PubMed Central

    DiFilippo, Frank P.

    2008-01-01

    A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574

  11. Estimation of Phosphorus Emissions in the Upper Iguazu Basin (brazil) Using GIS and the More Model

    NASA Astrophysics Data System (ADS)

    Acosta Porras, E. A.; Kishi, R. T.; Fuchs, S.; Hilgert, S.

    2016-06-01

    Pollution emissions into the drainage basin have a direct impact on surface water quality. These emissions result from human activities that turn into pollution loads when they reach the water bodies, as point or diffuse sources. Their pollution potential depends on the characteristics and quantity of the transported materials. The estimation of pollution loads can assist decision-making in basin management. Knowledge about the potential pollution sources allows for a prioritization of pollution control policies to achieve the desired water quality. Consequently, it helps avoid problems such as eutrophication of water bodies. The focus of the research described in this study is related to phosphorus emissions into river basins. The study area is the upper Iguazu basin, which lies in the northeast region of the State of Paraná, Brazil, covering about 2,965 km²; around 4 million inhabitants live concentrated on just 16% of its area. The MoRE (Modeling of Regionalized Emissions) model was used to estimate phosphorus emissions. MoRE is a model that uses empirical approaches to model processes in analytical units, capable of using spatially distributed parameters and covering emissions from point sources as well as non-point sources. In order to model the processes, the basin was divided into 152 analytical units with an average size of 20 km². Available data were organized in a GIS environment, using layers such as precipitation, a Digital Terrain Model derived from a 1:10,000-scale map, and soils and land cover derived from remote sensing imagery. Further data are used, such as point pollution discharges and statistical socio-economic data. The model shows that one of the main pollution sources in the upper Iguazu basin is domestic sewage, which enters the river as a point source (effluents of treatment stations) and/or as diffuse pollution caused by failures of sanitary sewer systems or clandestine sewer discharges, accounting for about 56% of the emissions. The second significant share of emissions comes from direct runoff and groundwater, responsible for 32% of total emissions. Finally, agricultural erosion and industry pathways represent 12% of emissions. This study shows that MoRE is capable of producing valid emission estimates from a relatively reduced input data basis.

  12. Analytical volcano deformation modelling: A new and fast generalized point-source approach with application to the 2015 Calbuco eruption

    NASA Astrophysics Data System (ADS)

    Nikkhoo, M.; Walter, T. R.; Lundgren, P.; Prats-Iraola, P.

    2015-12-01

    Ground deformation at active volcanoes is one of the key precursors of volcanic unrest, monitored by InSAR and GPS techniques at high spatial and temporal resolution, respectively. Modelling of the observed displacements establishes the link between them and the underlying subsurface processes and volume change. The so-called Mogi model and the rectangular dislocation are two commonly applied analytical solutions that allow for quick interpretations based on the location, depth and volume change of pressurized spherical cavities and planar intrusions, respectively. Geological observations worldwide, however, suggest elongated, tabular or other non-equidimensional geometries for the magma chambers. How can these be modelled? Generalized models, such as Davis's point ellipsoidal cavity or the rectangular dislocation solutions, are geometrically limited and could barely improve the interpretation of data. We develop a new analytical artefact-free solution for a rectangular dislocation, which also possesses full rotational degrees of freedom. We construct a kinematic model in terms of three pairwise-perpendicular rectangular dislocations with a prescribed opening only. This model represents a generalized point source in the far field, and also performs as a finite dislocation model for planar intrusions in the near field. We show that, through calculating Eshelby's shape tensor, the far-field displacements and stresses of any arbitrary triaxial ellipsoidal cavity can be reproduced by using this model. Regardless of its aspect ratios, the volume change of this model is simply the sum of the volume changes of the individual dislocations. Our model can be integrated in any inversion scheme as simply as the Mogi model, profiting at the same time from the advantages of a generalized point source. After evaluating our model by using a boundary element method code, we apply it to ground displacements of the 2015 Calbuco eruption, Chile, observed by the Sentinel-1 satellite. We infer the parameters of a deflating elongated source located beneath Calbuco, and find significant differences to Mogi-type solutions. The results imply that interpretations based on our model may help us better understand source characteristics, and in the case of Calbuco volcano infer a volcano-tectonic coupling mechanism.
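
    For reference, the Mogi point source that the new model generalizes has closed-form surface displacements in terms of source depth and volume change (standard formulas, as summarized in e.g. Segall's Earthquake and Volcano Deformation); a minimal implementation:

```python
import numpy as np

def mogi(x, y, depth, dV, nu=0.25):
    """Surface displacements of a Mogi point pressure source at (0, 0, depth):
    u_r = (1 - nu) * dV / pi * r / R**3, u_z = (1 - nu) * dV / pi * depth / R**3,
    with R**2 = r**2 + depth**2 and dV the source volume change."""
    r = np.hypot(x, y)
    R3 = (r**2 + depth**2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    return c * r / R3, c * depth / R3            # (radial, vertical) displacement

# Deflation of 1e6 m^3 at 5 km depth, sampled along a 20 km surface profile.
x = np.linspace(0.0, 20e3, 5)
ur, uz = mogi(x, 0.0, depth=5e3, dV=-1e6)
print(uz)
```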

  13. Refining models for quantifying the water quality benefits of improved animal management for use in water quality trading

    USDA-ARS?s Scientific Manuscript database

    Water quality trading (WQT) is a market-based approach that allows point sources of water pollution to meet their water quality obligations by purchasing credits from the reduced discharges from other point or nonpoint sources. Non-permitted animal operations and fields of permitted animal operatio...

  14. Searches for point sources in the Galactic Center region

    NASA Astrophysics Data System (ADS)

    di Mauro, Mattia; Fermi-LAT Collaboration

    2017-01-01

    Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each of the pixels of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations for the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of an MSP bulge population.

  15. Finite Element modelling of deformation induced by interacting volcanic sources

    NASA Astrophysics Data System (ADS)

    Pascal, Karen; Neuberg, Jürgen; Rivalta, Eleonora

    2010-05-01

    The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system comprises more than one source, the assumption of homogeneity in the half-space is violated and several sources are combined, their respective deformation fields being summed. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying their relative position. Furthermore, we considered the impact of topography, loading, and magma compressibility. To quantify the discrepancies and compare the various models, we calculated the difference between analytical and numerical maximum horizontal or vertical surface displacements. We will demonstrate that under certain conditions combining analytical sources can cause an error of up to 20%. References: McTigue, D. F. (1987), Elastic Stress and Deformation Near a Finite Spherical Magma Body: Resolution of the Point Source Paradox, J. Geophys. Res. 92, 12931-12940. Mogi, K. (1958), Relations between the eruptions of various volcanoes and the deformations of the ground surfaces around them, Bull Earthquake Res Inst, Univ Tokyo 36, 99-134. Okada, Y. (1992), Internal Deformation Due to Shear and Tensile Faults in a Half-Space, Bulletin of the Seismological Society of America 82(2), 1018-1040.

  16. From the volcano effect to banding: a minimal model for bacterial behavioral transitions near chemoattractant sources.

    PubMed

    Javens, Gregory; Jashnsaz, Hossein; Pressé, Steve

    2018-04-30

    Sharp chemoattractant (CA) gradient variations near food sources may give rise to dramatic behavioral changes of bacteria neighboring these sources. For instance, marine bacteria exhibiting run-reverse motility are known to form distinct bands around patches (large sources) of chemoattractant such as nutrient-soaked beads while run-and-tumble bacteria have been predicted to exhibit a 'volcano effect' (spherical shell-shaped density) around a small (point) source of food. Here we provide the first minimal model of banding for run-reverse bacteria and show that, while banding and the volcano effect may appear superficially similar, they are different physical effects manifested under different source emission rates (and thus effective source sizes). More specifically, while the volcano effect is known to arise around point sources from a bacterium's temporal differentiation of signal (and corresponding finite integration time), this effect alone is insufficient to account for banding around larger patches as bacteria would otherwise cluster around the patch without forming bands at some fixed radial distance. In particular, our model demonstrates that banding emerges from the interplay of run-reverse motility and saturation of the bacterium's chemoreceptors to CA molecules and our model furthermore predicts that run-reverse bacteria susceptible to banding behavior should also exhibit a volcano effect around sources with smaller emission rates.

  17. Is a wind turbine a point source? (L).

    PubMed

    Makarewicz, Rufin

    2011-02-01

    Measurements show that practically all wind turbine noise is produced by the turbine blades, sometimes a few tens of meters long, even though a point source located at the hub height is the commonly used model. The plane of the rotating blades is the critical location of the receiver because the distances to the blades are the shortest. It is shown that such a location requires a certain condition to be met. The model is valid far away from the wind turbine as well.

  18. Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.

    1984-01-01

    Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per arc-sq. second extended source. A 23rd visual magnitude per arc-sq. second background was assumed. The models predicted signal-to-noise ratios of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours of integration.
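
    Such integration-time estimates follow from the standard CCD point-source signal-to-noise expression, SNR = S·t / sqrt(S·t + n_pix·(B + D)·t + n_pix·R²). A generic sketch (the rates in the example are placeholders, not the paper's instrument values):

        import math

        def ccd_snr(src_rate, sky_rate, dark_rate, read_noise, n_pix, t):
            # Point-source SNR for an aperture of n_pix pixels after time t;
            # rates in electrons/s, read noise in electrons rms per pixel.
            signal = src_rate * t
            noise = math.sqrt(signal + n_pix * ((sky_rate + dark_rate) * t + read_noise**2))
            return signal / noise

        def time_for_snr(target, **kw):
            # Smallest integration time reaching the target SNR (bisection;
            # SNR grows monotonically with t).
            lo, hi = 1.0, 1e8
            while hi / lo > 1.001:
                mid = math.sqrt(lo * hi)
                lo, hi = (mid, hi) if ccd_snr(t=mid, **kw) < target else (lo, mid)
            return hi

        # e.g. time_for_snr(10.0, src_rate=0.02, sky_rate=0.01,
        #                   dark_rate=0.005, read_noise=13.0, n_pix=4)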

  19. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
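
    The time-averaged elevated point-source plume underlying such models is the Gaussian plume with ground reflection, C = Q/(2π σy σz U) · exp(-y²/2σy²) · [exp(-(z-H)²/2σz²) + exp(-(z+H)²/2σz²)]. A minimal sketch of this baseline (the fluctuating-plume extension adds stochastic meander of the plume centerline on top; the sigma values below are placeholders that in practice grow with downwind distance and stability class):

        import numpy as np

        def gaussian_plume(y, z, Q, U, H, sigma_y, sigma_z):
            # Time-averaged concentration from a point source of strength Q
            # at effective height H, wind speed U, reflecting ground at z = 0.
            cross = np.exp(-y**2 / (2 * sigma_y**2)) / (2 * np.pi * sigma_y * sigma_z * U)
            vert = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
            return Q * cross * vert

        # Ground-level centerline value for a 50 m stack, with illustrative
        # sigmas evaluated 2 km downwind:
        c = gaussian_plume(y=0.0, z=0.0, Q=100.0, U=5.0, H=50.0,
                           sigma_y=160.0, sigma_z=80.0)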

  20. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  1. AGRICULTURAL NONPOINT SOURCE POLLUTION (AGNPS)

    EPA Science Inventory

    Developed by the USDA Agricultural Research Service, the Agricultural Nonpoint Source Pollution (AGNPS) model addresses concerns related to the potential impacts of point and nonpoint source pollution on surface and groundwater quality (Young et al., 1989). It was designed to quantit...

  2. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source-height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.

  3. Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems

    NASA Astrophysics Data System (ADS)

    Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.

    2017-01-01

    A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that they appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we will use this method to identify binary BD systems in the Hubble Space Telescope archive. This method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method to compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.
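
    The core of the PSF-comparison method can be caricatured as a chi-square contest between one-source and two-source fits. A Python sketch with a Gaussian stand-in for the real model PSF (Stephens and Noll fit proper instrument PSFs; the positions, fluxes and width below are illustrative):

        import numpy as np
        from scipy.optimize import least_squares

        def model_image(params, shape, sigma=1.5):
            # Sum of Gaussian "PSFs"; params holds [x, y, flux] per source.
            yy, xx = np.mgrid[:shape[0], :shape[1]]
            img = np.zeros(shape)
            for x0, y0, f in np.reshape(params, (-1, 3)):
                img += f * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
            return img

        def fit(image, p0):
            # Least-squares fit; returns best parameters and residual sum of squares.
            res = least_squares(lambda p: (model_image(p, image.shape) - image).ravel(), p0)
            return res.x, float(np.sum(res.fun**2))

        # A markedly better two-source fit flags an unresolved binary:
        # p1, rss_single = fit(image, [8.0, 8.0, 1.0])
        # p2, rss_binary = fit(image, [7.5, 8.0, 0.7, 8.5, 8.0, 0.3])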

  4. Computational techniques in gamma-ray skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.

  5. Fermi-Lat Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center

    NASA Technical Reports Server (NTRS)

    Ajello, M.; Albert, A.; Atwood, W.B.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Brandt, T. J.

    2016-01-01

    The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy gamma-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100 GeV from a 15 degrees x 15 degrees region about the direction of the GC. Specialized interstellar emission models (IEMs) are constructed to enable the separation of the gamma-ray emissions produced by cosmic ray particles interacting with the interstellar gas and radiation fields in the Milky Way into that from the inner 1 kpc surrounding the GC, and that from the rest of the Galaxy. A catalog of point sources for the 15 degrees x 15 degrees region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy Point Source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented, and compared with gamma-ray point sources over the same region taken from existing catalogs. After subtracting the interstellar emission and point-source contributions, a residual is found. If templates that peak toward the GC are used to model the positive residual, the agreement with the data improves, but none of the additional templates tried account for all of its spatial structure. The spectrum of the positive residual modeled with these templates has a strong dependence on the choice of IEM.

  6. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
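
    The pseudo-inverse variant reduces to a linear problem: free-field monopole Green's functions relate candidate source points to the array probes, and the source strengths follow from the Moore-Penrose pseudo-inverse of that transfer matrix. A compact single-frequency sketch (geometry and the rcond regularization value are illustrative):

        import numpy as np

        def monopole_transfer(src_xyz, mic_xyz, k):
            # G[m, s] = exp(-i k r) / (4 pi r) between candidate monopole
            # positions (S, 3) and microphone positions (M, 3).
            r = np.linalg.norm(mic_xyz[:, None, :] - src_xyz[None, :, :], axis=-1)
            return np.exp(-1j * k * r) / (4.0 * np.pi * r)

        def source_strengths(p_mics, src_xyz, mic_xyz, k, rcond=1e-3):
            # Least-squares monopole amplitudes q solving p = G q; truncating
            # small singular values via rcond is a simple regularization.
            G = monopole_transfer(src_xyz, mic_xyz, k)
            return np.linalg.pinv(G, rcond=rcond) @ p_mics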

  7. Statistical signatures of a targeted search by bacteria

    NASA Astrophysics Data System (ADS)

    Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve

    2017-12-01

    Chemoattractant gradients are rarely well-controlled in nature and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium’s natural habitat, striking phenomena—such as the volcano effect or banding—have been predicted or expected to emerge from chemotactic models. However, in practice, from limited bacterial trajectory data it is difficult to distinguish targeted searches from an untargeted search strategy for food sources. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory and are subject to random (Brownian) motion as well as signaling noise. The advantage with using a stochastic model-based approach is that a stochastic model may be parametrized from individual stochastic bacterial trajectories but may then be used to generate a very large number of simulated trajectories to explore average behaviors obtained from stochastic search strategies. For example, our model predicts that a bacterium’s diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source giving the impression of an untargeted search strategy.
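
    A toy version of such a model: a run-and-tumble walker whose tumble rate depends on the temporal change of a noisy signal compared over a finite memory, with Brownian motion added. Everything below is an illustrative caricature rather than the authors' parametrization:

        import numpy as np

        rng = np.random.default_rng(1)

        def concentration(pos, rate=100.0):
            # CA field of a point source at the origin (2-D, steady state).
            return rate / (2.0 * np.pi * max(np.linalg.norm(pos), 1e-3))

        def run_and_tumble(steps=20000, dt=0.01, speed=20.0, memory=10,
                           gain=5.0, signal_noise=0.05, brownian=0.5):
            pos = np.array([300.0, 0.0])
            heading = rng.uniform(0.0, 2.0 * np.pi)
            history = [concentration(pos)]
            track = [pos.copy()]
            for _ in range(steps):
                c = concentration(pos) + signal_noise * rng.normal()
                history.append(c)
                dc = c - history[max(0, len(history) - memory)]  # temporal comparison
                tumble_rate = np.exp(-gain * dc)                 # runs lengthen up-gradient
                if rng.random() < min(tumble_rate * dt, 1.0):
                    heading = rng.uniform(0.0, 2.0 * np.pi)      # tumble: random reorientation
                pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
                pos = pos + brownian * np.sqrt(dt) * rng.normal(size=2)
                track.append(pos.copy())
            return np.array(track)  # e.g. bin by distance to estimate local diffusion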

  8. A model for jet-noise analysis using pressure-gradient correlations on an imaginary cone

    NASA Technical Reports Server (NTRS)

    Norum, T. D.

    1974-01-01

    The technique for determining the near and far acoustic field of a jet through measurements of pressure-gradient correlations on an imaginary conical surface surrounding the jet is discussed. The necessary analytical developments are presented, and their feasibility is checked by using a point source as the sound generator. The distribution of the apparent sources on the cone, equivalent to the point source, is determined in terms of the pressure-gradient correlations.

  9. A clustering algorithm for sample data based on environmental pollution characteristics

    NASA Astrophysics Data System (ADS)

    Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun

    2015-04-01

    Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to similarities in pollution characteristics, such as pollution sources and concentrations, but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of the similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
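
    The clustering loop described above translates almost directly into code: the first unlabelled sample seeds a centre, each sample joins its most similar centre only if the similarity clears the user-defined threshold (otherwise it seeds a new cluster), and centres are then refined k-Means-style. A schematic sketch (the similarity function and threshold semantics are our simplification):

        import numpy as np

        def epc_cluster(X, threshold, n_refine=5):
            sim = lambda a, b: np.exp(-np.linalg.norm(a - b))  # illustrative similarity
            centres = [X[0]]                                   # first point seeds a centre
            labels = np.zeros(len(X), dtype=int)
            for _ in range(n_refine):
                for i, x in enumerate(X):
                    s = [sim(x, c) for c in centres]
                    j = int(np.argmax(s))
                    if s[j] >= threshold:
                        labels[i] = j              # join the most similar cluster
                    else:
                        centres.append(x)          # below threshold: new cluster
                        labels[i] = len(centres) - 1
                # k-Means-style refinement; drop clusters that lost all members
                keep = [j for j in range(len(centres)) if np.any(labels == j)]
                centres = [X[labels == j].mean(axis=0) for j in keep]
                labels = np.array([keep.index(j) for j in labels])
            return labels, centres  # singleton clusters can be read as outliers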

  10. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal of improving crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near field and far field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite rupture models. 1d-layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling on the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate a better exploitation of open global data sets for a wide community studying tectonics, but the tools are also applicable to a large range of regional to local earthquake studies. Our developments therefore ensure a large flexibility in the parametrization of medium models (e.g. 1d to 3d medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.

  11. Line-source simulation for shallow-seismic data. Part 2: full-waveform inversion—a synthetic 2-D case study

    NASA Astrophysics Data System (ADS)

    Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.

    2014-09-01

    Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation for point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with sqrt(1/t) (t: traveltime), which produces a phase shift of π/4. Subsequently an amplitude correction must be applied. In this work we recommend to scale the seismograms with sqrt(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and to gradually shift to applying a sqrt(1/t) time-domain taper and scaling the waveforms with r sqrt(2) for larger receiver offsets r. We call this the hybrid transformation, which is adapted for direct body and Rayleigh waves, and demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through different approaches are then compared to the corresponding line-source reference waveforms. Although derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves. In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI to subsurface structure and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when applying no explicit correction to the point-source waveforms prior to inversion, only moderate artefacts appear in the results. However, the overall performance is best in terms of model reproduction and ability to reproduce the original data in a 3-D simulation if the inverted waveforms are obtained by the hybrid transformation.
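
    A single-trace numerical rendering of that hybrid recipe might look as follows (the linear blend between the two regimes and the offset bounds are our simplification; the paper defines the exact taper):

        import numpy as np

        def line_source_hybrid(u, t, r, v_ph, r_near=50.0, r_far=200.0):
            # Convolve with 1/sqrt(t) (the pi/4 phase shift), then blend the
            # near-offset factor sqrt(2*r*v_ph) into the far-offset recipe
            # (an extra 1/sqrt(t) time-domain taper times r*sqrt(2)).
            dt = t[1] - t[0]
            kern = np.where(t > 0, 1.0 / np.sqrt(np.maximum(t, dt)), 0.0)
            u_conv = np.convolve(u, kern)[: len(u)] * dt   # u(t) * 1/sqrt(t)
            near = np.sqrt(2.0 * r * v_ph) * u_conv
            far = r * np.sqrt(2.0) * kern * u_conv
            w = np.clip((r - r_near) / (r_far - r_near), 0.0, 1.0)
            return (1.0 - w) * near + w * far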

  12. Radial Distribution of X-Ray Point Sources Near the Galactic Center

    NASA Astrophysics Data System (ADS)

    Hong, Jae Sub; van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas

    2009-11-01

    We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° from the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources coupled with field-and-distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The result indicates that the GB X-ray population is highly concentrated at the center, more heavily concentrated than the stellar distribution models predict. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the spectra appear intrinsically harder the closer they are to the GC, but adding an iron emission line at 6.7 keV to the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources can be of the same or similar type. Their X-ray luminosity and spectral properties support the idea that the most likely candidate is magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs). Their observed number density is also consistent with the majority being IPs, provided the relative CV-to-star density in the GB is not smaller than the value in the local solar neighborhood.

  13. Source partitioning of anthropogenic groundwater nitrogen in a mixed-use landscape, Tutuila, American Samoa

    NASA Astrophysics Data System (ADS)

    Shuler, Christopher K.; El-Kadi, Aly I.; Dulai, Henrietta; Glenn, Craig R.; Fackrell, Joseph

    2017-12-01

    This study presents a modeling framework for quantifying human impacts and for partitioning the sources of contamination related to water quality in the mixed-use landscape of a small tropical volcanic island. On Tutuila, the main island of American Samoa, production wells in the most populated region (the Tafuna-Leone Plain) produce most of the island's drinking water. However, much of this water has been deemed unsafe to drink since 2009. Tutuila has three predominant anthropogenic non-point-groundwater-pollution sources of concern: on-site disposal systems (OSDS), agricultural chemicals, and pig manure. These sources are broadly distributed throughout the landscape and are located near many drinking-water wells. Water quality analyses show a link between elevated levels of total dissolved groundwater nitrogen (TN) and areas with high non-point-source pollution density, suggesting that TN can be used as a tracer of groundwater contamination from these sources. The modeling framework used in this study integrates land-use information, hydrological data, and water quality analyses with nitrogen loading and transport models. The approach utilizes a numerical groundwater flow model, a nitrogen-loading model, and a multi-species contaminant transport model. Nitrogen from each source is modeled as an independent component in order to trace the impact from individual land-use activities. Model results are calibrated and validated with dissolved groundwater TN concentrations and inorganic δ15N values, respectively. Results indicate that OSDS contribute significantly more TN to Tutuila's aquifers than other sources, and thus should be prioritized in future water-quality management efforts.

  14. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.
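
    The three-step scheme can be caricatured with any linear forward model: invert for location and intensity by least squares, regress measured on predicted concentrations, then fold the regression back into the kernel (standing in for the adjoint functions) and re-estimate. A schematic sketch, not the renormalization technique itself (the Gaussian kernel and its length scale are invented for illustration):

        import numpy as np
        from scipy.optimize import least_squares

        def kernel(sensors, src_xy):
            # Illustrative source-receptor coupling for a unit point source.
            d = np.linalg.norm(sensors - src_xy, axis=1)
            return np.exp(-0.5 * (d / 300.0) ** 2)

        def estimate_source(sensors, c_meas, x0=(0.0, 0.0, 1.0)):
            # Step 1: least-squares source location (x, y) and intensity q.
            f = lambda p: p[2] * kernel(sensors, p[:2]) - c_meas
            return least_squares(f, x0).x

        def regression_corrected_estimate(sensors, c_meas):
            # Steps 2-3: fit c_meas ~ a * c_pred + b, remove the systematic
            # part, and re-run the inversion on the corrected data.
            p = estimate_source(sensors, c_meas)
            c_pred = p[2] * kernel(sensors, p[:2])
            a, b = np.polyfit(c_pred, c_meas, 1)
            return estimate_source(sensors, (c_meas - b) / a)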

  15. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for the pointing error analysis of Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through this model. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10^-5° for each error source when the error amplitude is 0.1°. Detailed analyses indicate that the different error sources affect the pointing accuracy to varying degrees, and the major error source is incident beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference of the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in the measurement calibration of Risley-prism systems.

  16. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. The strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
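
    The stated size statistics pin down the sampling step: N(>R) ∝ R^-2 is a (truncated) Pareto law, and radii are drawn until the summed subevent areas fill the target rupture area. A sketch of that step alone (non-overlapping placement on the fault would follow as a separate rejection loop; the bounds are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_subevent_radii(target_area, r_min, r_max):
            # Inverse-CDF sampling of a Pareto law truncated to [r_min, r_max]
            # with N(>R) ~ R^-2; the last radius is trimmed so that the
            # areas sum exactly to the target event's area.
            radii, area = [], 0.0
            while area < target_area:
                u = rng.random()
                r = (r_min**-2 - u * (r_min**-2 - r_max**-2)) ** -0.5
                r = min(r, np.sqrt((target_area - area) / np.pi))  # trim last
                radii.append(r)
                area += np.pi * r**2
            return np.array(radii)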

  17. Development of watershed and reference loads for a TMDL in Charleston Harbor System, SC.

    Treesearch

    Silong Lu; Devenra Amatya; Jamie Miller

    2005-01-01

    It is essential to determine point and non-point source loads and their distribution for development of a dissolved oxygen (DO) Total Maximum Daily Load (TMDL). A series of models were developed to assess sources of oxygen-demand loadings in Charleston Harbor, South Carolina. These oxygen-demand loadings included nutrients and BOD. Stream flow and nutrient...

  18. An Update on Phased Array Results Obtained on the GE Counter-Rotating Open Rotor Model

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Horvath, Csaba; Envia, Edmane

    2013-01-01

    Beamform maps have been generated from 1) simulated data generated by the LINPROP code and 2) actual experimental phased array data obtained on the GE Counter-rotating open rotor model. The beamform maps show that many of the tones in the experimental data come from their corresponding Mach radius. If the phased array points to the Mach radius associated with a tone, then it is likely that the tone is a result of the loading and thickness noise on the blades. In this case, the phased array correctly points to where the noise is coming from and indicates the axial location of the loudest source in the image, but not necessarily the correct vertical location. If the phased array does not point to the Mach radius associated with a tone, then some mechanism other than loading and thickness noise may control the amplitude of the tone. In this case, the phased array may or may not point to the actual source. If the source is not rotating, it is likely that the phased array points to the source. If the source is rotating, it is likely that the phased array indicates the axial location of the loudest source but not necessarily the correct vertical location. These results indicate that you have to be careful in how you interpret phased array data obtained on an open rotor, since they may show the tones coming from a location other than the source location. With a subsonic tip speed open rotor, the tones can come from locations outboard of the blade tips. This has implications regarding noise shielding.

  19. 3D Seismic Imaging using Marchenko Methods

    NASA Astrophysics Data System (ADS)

    Lomas, A.; Curtis, A.

    2017-12-01

    Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). While these 2D redatuming and imaging methods are still in their infancy, having first been developed in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts are a realistic possibility.

  20. From the volcano effect to banding: a minimal model for bacterial behavioral transitions near chemoattractant sources

    NASA Astrophysics Data System (ADS)

    Javens, Gregory; Jashnsaz, Hossein; Pressé, Steve

    2018-07-01

    Sharp chemoattractant (CA) gradient variations near food sources may give rise to dramatic behavioral changes of bacteria neighboring these sources. For instance, marine bacteria exhibiting run-reverse motility are known to form distinct bands around patches (large sources) of chemoattractant such as nutrient-soaked beads while run-and-tumble bacteria have been predicted to exhibit a ‘volcano effect’ (spherical shell-shaped density) around a small (point) source of food. Here we provide the first minimal model of banding for run-reverse bacteria and show that, while banding and the volcano effect may appear superficially similar, they are different physical effects manifested under different source emission rates (and thus effective source sizes). More specifically, while the volcano effect is known to arise around point sources from a bacterium’s temporal differentiation of signal (and corresponding finite integration time), this effect alone is insufficient to account for banding around larger patches as bacteria would otherwise cluster around the patch without forming bands at some fixed radial distance. In particular, our model demonstrates that banding emerges from the interplay of run-reverse motility and saturation of the bacterium’s chemoreceptors to CA molecules and our model furthermore predicts that run-reverse bacteria susceptible to banding behavior should also exhibit a volcano effect around sources with smaller emission rates.

  1. Analysis of non-point and point source pollution in China: case study in Shima Watershed in Guangdong Province

    NASA Astrophysics Data System (ADS)

    Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei

    2013-09-01

    China's economy has grown rapidly since 1978. Rapid economic growth led to fast growth in fertilizer and pesticide consumption. A significant portion of fertilizers and pesticides entered the water and caused water quality degradation. At the same time, rapid economic growth also caused more and more point source pollution to be discharged into the water. Eutrophication has become a major threat to the water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images, calculated point source pollution with the export coefficient method, and then ran the SWAT model to simulate non-point source pollution. We found that the annual TP loads from industrial pollution into rivers are 115.0 t in the entire watershed. Average annual TP loads from each sub-basin ranged from 0 to 189.4 tons. Higher TP loads from livestock and human living mainly occur in areas that are far from large towns or cities and where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 tons; the highest TP loads occurred in the north part of this area, and the lowest TP loads are mainly distributed in the middle part. Therefore, point source pollution accounts for a high proportion of the total in this area, and governments should take measures to control point source pollution.

  2. Probing dim point sources in the inner Milky Way using PCAT

    NASA Astrophysics Data System (ADS)

    Daylan, Tansu; Portillo, Stephen K. N.; Finkbeiner, Douglas P.

    2017-01-01

    Poisson regression of the Fermi-LAT data in the inner Milky Way reveals an extended gamma-ray excess. An important question is whether the signal is coming from a collection of unresolved point sources, possibly old recycled pulsars, or constitutes a truly diffuse emission component. Previous analyses have relied on non-Poissonian template fits or wavelet decomposition of the Fermi-LAT data, which find evidence for a population of dim point sources just below the 3FGL flux limit. In order to be able to draw conclusions about the flux distribution of point sources at the dim end, we employ a Bayesian trans-dimensional MCMC framework by taking samples from the space of catalogs consistent with the observed gamma-ray emission in the inner Milky Way. The software implementation, PCAT (Probabilistic Cataloger), is designed to efficiently explore that catalog space in the crowded field limit such as in the galactic plane, where the model PSF, point source positions and fluxes are highly degenerate. We thus generate fair realizations of the underlying MSP population in the inner galaxy and constrain the population characteristics such as the radial and flux distribution of such sources.

  3. Rethinking moment tensor inversion methods to retrieve the source mechanisms of low-frequency seismic events

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J.

    2011-12-01

    Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. LP events have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upwards movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Applying a point source model to synthetic seismograms representing an extended source process does not yield the real source mechanism; it can, however, still yield apparent moment tensor elements which can then be compared to previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double couple source. Furthermore, the best inversion results yield a solution comprised of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when a point source is wrongly assumed. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique where the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that are temporally and spatially extended.
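
    Since displacement is linear in the moment tensor components, the point-source inversion step itself reduces to least squares, d = G m. A bare-bones sketch of that step (computing the Green's function matrix, as codes like TDMTISO_INVC do, is the real work and is assumed given here):

        import numpy as np

        def invert_moment_tensor(d_obs, G):
            # Least-squares moment tensor from stacked waveform samples
            # d_obs (n,) and Green's functions G (n, 6), one column per
            # independent element (Mxx, Myy, Mzz, Mxy, Mxz, Myz).
            m, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
            return m

        def iso_deviatoric(m):
            # Split into isotropic and deviatoric parts, the components
            # (ISO, CLVD, DC) discussed in the abstract.
            M = np.array([[m[0], m[3], m[4]],
                          [m[3], m[1], m[5]],
                          [m[4], m[5], m[2]]])
            iso = np.trace(M) / 3.0 * np.eye(3)
            return iso, M - iso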

  4. Modeling diffuse phosphorus emissions to assist in best management practice designing

    NASA Astrophysics Data System (ADS)

    Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne

    2010-05-01

    A diffuse emission modeling tool has been developed which is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning best management practices (BMPs) in catchments and simulating their possible impacts on phosphorus (P) loads. PhosFate is a simple fate model to calculate diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter and long-term (annual) average model. It has two main parts: (a) the emission and (b) the transport model. The main input data of the model are digital maps (elevation, soil types and landuse categories), statistical data (crop yields, animal numbers, fertilizer amounts and precipitation distribution) and point information (precipitation, meteorology, soil humus content, point source emissions and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes the dissolved and the particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion and leaching). The main outputs are the spatial distribution (cell values) of the runoff components, the soil loss and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms are computed. In the case of base flow and subsurface P loads only the channel transport is taken into account, due to the poorly known hydrogeological conditions. During the channel transport, point sources and reservoirs are also considered. The main results of the transport algorithm are the discharge, dissolved and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built up to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes and calculate their approximate costs. Both source- and transport-controlling measures have been included in the planning procedure. The model also allows examining the impacts of changes in fertilizer application, point source emissions and climate change on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots), which should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared to each other. The results show that the approach is suitable for effectively designing BMP measures at the local scale. Combined application of source- and transport-controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can remarkably reduce the area demand of the necessary BMPs; consequently, the establishment costs can be decreased. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
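
    The transport step, accumulating emissions cell by cell down the flow tree with retention, is a single upstream-to-downstream pass. A sketch over a cell graph given as a downstream-pointer array (the per-cell retention coefficients are illustrative inputs):

        import numpy as np

        def accumulate_loads(emission, downstream, retention):
            # Route per-cell P emissions along the flow tree: each cell passes
            # (1 - retention) of its accumulated load to its downstream cell
            # (-1 marks an outlet); cells are processed in topological order.
            n = len(emission)
            load = np.asarray(emission, dtype=float).copy()
            indeg = np.zeros(n, dtype=int)
            for j in downstream:
                if j >= 0:
                    indeg[j] += 1
            queue = [i for i in range(n) if indeg[i] == 0]
            while queue:
                i = queue.pop()
                j = downstream[i]
                if j >= 0:
                    load[j] += (1.0 - retention[i]) * load[i]
                    indeg[j] -= 1
                    if indeg[j] == 0:
                        queue.append(j)
            return load  # accumulated load passing through each cell

        # e.g. accumulate_loads([1.0, 2.0, 0.5, 0.0],
        #                       downstream=[2, 2, 3, -1],
        #                       retention=[0.3, 0.3, 0.2, 0.0])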

  5. Point Source X-Ray Lithography System for Sub-0.15 Micron Design Rules

    DTIC Science & Technology

    1998-05-22

    consist of a SAL-developed stepper, an SRL-developed Dense Plasma Focus (DPF) X-Ray source, and a CXrL-developed beam line. The system will be...existing machine that used spark gap switching, SRL has developed an all-solid-state driver and an improved head electrode assembly for their dense plasma focus X-Ray source. Likewise, SAL has used their existing Model 4 stepper installed at CXrL as a design starting point, and has developed an advanced

  6. A novel solution for LED wall lamp design and simulation

    NASA Astrophysics Data System (ADS)

    Ge, Rui; Hong, Weibin; Li, Kuangqi; Liang, Pengxiang; Zhao, Fuli

    2014-11-01

    A model of the wall washer lamp and a practical illumination application have been established, with a new lens design that meets the uniform illumination demand for wall washer lamps based on Lambertian light sources. Our secondary optical design of a freeform surface lens for the LED wall washer lamp, based on the law of energy conservation and Snell's law, can improve the lighting effect and deliver uniform illumination. Using the relationship between the lens surface and the target surface, a large number of discrete points on the freeform profile curve were obtained through an iterative method. After importing these data into our modeling program, the optical entity was obtained. Finally, to verify the feasibility of the algorithm, the model was simulated with specialized software, using both an LED Lambertian point source and an LED panel source model.

  7. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies have been conducted using a non-metric digital camera and dense image matching algorithms, as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  8. Error Estimation and Compensation in Reduced Dynamic Models of Large Space Structures

    DTIC Science & Technology

    1987-04-23

    [Garbled OCR fragment: report documentation form fields and a list of figures, including modes of the full model, a comparison of various reduced models, driving-point mobilities at the wing tip (Z55) and wing root trailing edge (Z19), AMI improvement, and frequency-domain solutions (RM1).]

  9. [Nitrogen non-point source pollution identification based on ArcSWAT in Changle River].

    PubMed

    Deng, Ou-Ping; Sun, Si-Yang; Lü, Jun

    2013-04-01

    The ArcSWAT (Soil and Water Assessment Tool) model was adopted for non-point source (NPS) nitrogen pollution modeling and nitrogen source apportionment for the Changle River watershed, a typical agricultural watershed in Southeast China. Water quality and hydrological parameters were monitored, and the watershed's natural conditions (including soil, climate, land use, etc.) and pollution source information were also investigated and collected for the SWAT database. The ArcSWAT model was established for the Changle River after calibration and validation of the model parameters. Based on the validated SWAT model, the contributions of different nitrogen sources to river TN loading were quantified, and spatial-temporal distributions of NPS nitrogen export to rivers were addressed. The results showed that in the Changle River watershed, nitrogen fertilizer, atmospheric nitrogen deposition and the soil nitrogen pool were the prominent pollution sources, contributing 35%, 32% and 25% of the river TN loading, respectively. There were spatial-temporal variations in the critical sources of NPS TN export to the river. Natural sources, such as the soil nitrogen pool and atmospheric nitrogen deposition, should be targeted as the critical sources of river TN pollution during the rainy seasons, while chemical nitrogen fertilizer application should be targeted as the critical source during the crop growing season. Chemical nitrogen fertilizer application, the soil nitrogen pool and atmospheric nitrogen deposition were the main sources of TN exported from garden plots, forest and residential land, respectively, and all three were main sources of TN exported from both upland and paddy fields. These results revealed that NPS pollution control measures should account for the spatio-temporal distribution of NPS pollution sources.

  10. MODELING MINERAL NITROGEN EXPORT FROM A FOREST TERRESTRIAL ECOSYSTEM TO STREAMS

    EPA Science Inventory

    Terrestrial ecosystems are major sources of N pollution to aquatic ecosystems. Predicting N export to streams is a critical goal of non-point source modeling. This study was conducted to assess the effect of terrestrial N cycling on stream N export using long-term monitoring da...

  11. Capturing microbial sources distributed in a mixed-use watershed within an integrated environmental modeling workflow

    EPA Science Inventory

    Many watershed models simulate overland and instream microbial fate and transport, but few provide loading rates on land surfaces and point sources to the waterbody network. This paper describes the underlying equations for microbial loading rates associated with 1) land-applied ...

  12. Capturing microbial sources distributed in a mixed-use watershed within an integrated environmental modeling workflow

    USDA-ARS?s Scientific Manuscript database

    Many watershed models simulate overland and instream microbial fate and transport, but few provide loading rates on land surfaces and point sources to the waterbody network. This paper describes the underlying equations for microbial loading rates associated with 1) land-applied manure on undevelope...

  13. Evaluating the suitability of the Soil Vulnerability Index (SVI) classification scheme using the SWAT model

    USDA-ARS?s Scientific Manuscript database

    Conservation practices are effective ways to mitigate non-point source pollution, especially when implemented on critical source areas (CSAs) known to be the areas contributing disproportionately to high pollution loads. Although hydrologic models are promising tools to identify CSAs within agricul...

  14. In-time source tracking of watershed loads of Taihu Lake Basin, China based on spatial relationship modeling.

    PubMed

    Wang, Ce; Bi, Jun; Zhang, Xu-Xiang; Fang, Qiang; Qi, Yi

    2018-05-25

    Influent rivers carrying cumulative watershed loads play a significant role in promoting nuisance algal blooms in river-fed lakes. It is therefore highly relevant to discern in-stream water quality exceedances and evaluate the spatial relationship between risk locations and potential pollution sources. However, no comprehensive studies of watershed source tracking based on management grids have been conducted for refined water quality management, particularly for plain terrain with complex river networks. In this study, field investigations were carried out during 2014 in the Taige Canal watershed of Taihu Lake Basin. A Geographical Information System (GIS)-based spatial relationship model was established to characterize the spatial relationships of "point (point-source location and monitoring site)-line (river segment)-plane (catchment)." As a practical demonstration, in-time source tracking was triggered on April 15, 2015 at Huangnianqiao station, where TN and TP concentrations exceeded the water quality standards (TN 4.0 mg/L, TP 0.15 mg/L). Of the target grid cells, 53 and 46 were identified as crucial areas of high pollution intensity for TN and TP, respectively. The estimated non-point source load in each grid cell could be apportioned to different source types based on spatially referenced pollution-related entities. We found that the non-point source load derived from rural sewage and from livestock and poultry breeding accounted for more than 80% of the total TN and TP loads, far exceeding the remaining source type, crop farming. The approach in this study would be of great benefit to local authorities in identifying seriously polluted regions and efficiently formulating environmental policies to reduce watershed loads.

  15. Reducing errors in aircraft atmospheric inversion estimates of point-source emissions: the Aliso Canyon natural gas leak as a natural tracer experiment

    NASA Astrophysics Data System (ADS)

    Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.

    2018-04-01

    Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s⁻¹, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
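
    The covariance-scaling idea can be caricatured for a single scalar leak rate. The sketch below is a minimal linear Gaussian inversion in which each observation's model-data mismatch variance is inflated by its modeled wind-speed error; the footprint values, error scaling, and all numbers are invented, and the paper's geostatistical setup is far richer.

        import numpy as np

        def invert_leak_rate(h, y, prior_mean, prior_var, base_sigma, wind_err, alpha):
            # Per-observation model-data mismatch variance, inflated by wind error.
            r = base_sigma**2 + (alpha * wind_err)**2
            # Scalar Bayesian update for leak rate s ~ N(prior_mean, prior_var).
            precision = np.sum(h * h / r) + 1.0 / prior_var
            mean = (np.sum(h * y / r) + prior_mean / prior_var) / precision
            return mean, 1.0 / precision

        rng = np.random.default_rng(0)
        h = rng.uniform(0.5, 2.0, 12)            # hypothetical footprints: ppm per (t/h)
        wind_err = rng.uniform(0.0, 10.0, 12)    # modeled wind-speed error (m/s)
        y = h * 40.0 + rng.normal(0.0, 1.0 + 0.2 * wind_err)   # true rate: 40 t/h
        mean, var = invert_leak_rate(h, y, prior_mean=30.0, prior_var=400.0,
                                     base_sigma=1.0, wind_err=wind_err, alpha=0.2)
        print(f"posterior leak rate: {mean:.1f} +/- {var**0.5:.1f} t/h")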

  16. Effects of Grid Resolution on Modeled Air Pollutant Concentrations Due to Emissions from Large Point Sources: Case Study during KORUS-AQ 2016 Campaign

    NASA Astrophysics Data System (ADS)

    Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.

    2017-12-01

    Large point sources in the Chungnam area received nation-wide attention in South Korea because the area lies southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevalent summertime winds in the area are northeastward. Emissions from the large point sources in the Chungnam area were therefore one of the major observation targets during the KORUS-AQ 2016 campaign, including its aircraft measurements. In general, the horizontal grid resolution of a Eulerian photochemical model has a profound effect on estimated air pollutant concentrations. This stems from the formulation of grid models: emissions in a grid cell are assumed to be well mixed within the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with eXtensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error with horizontal grid resolution.

  17. Model for Semantically Rich Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from extracted information rather than interpretation. The smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python on a PostgreSQL database, combines semantic and spatial concepts for basic hybrid queries on different point clouds.
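
    As a loose illustration of what a hybrid semantic-plus-spatial query looks like, the sketch below stores classified points in plain Python structures rather than the paper's PostgreSQL backend; the class names and labels are invented for this example.

        from dataclasses import dataclass

        @dataclass
        class SmartPoint:
            x: float
            y: float
            z: float
            classification: str      # semantic label injected after processing

        cloud = [SmartPoint(0.20, 0.10, 2.5, "roof"),
                 SmartPoint(0.30, 0.40, 0.0, "ground"),
                 SmartPoint(0.25, 0.15, 2.4, "roof")]

        # Hybrid query: semantic filter (class == "roof") AND spatial filter (z > 2 m).
        roof_points = [p for p in cloud if p.classification == "roof" and p.z > 2.0]
        print(len(roof_points), "roof points above 2 m")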

  18. Development of Additional Hazard Assessment Models

    DTIC Science & Technology

    1977-03-01

    globules, their trajectory (the distance from the spill point to the impact point on the river bed), and the time required for sinking. Established theories ... chemicals, the dissolution rate is estimated by using eddy-diffusivity surface-renewal theories. The validity of predictions of these theories has been ... theories and experimental data on aeration of rivers. * Describe dispersion in rivers with a stationary area source and sources moving with the stream

  19. Response of non-point source pollutant loads to climate change in the Shitoukoumen reservoir catchment.

    PubMed

    Zhang, Lei; Lu, Wenxi; An, Yonglei; Li, Di; Gong, Lei

    2012-01-01

    The impacts of climate change on streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment are predicted by combining a general circulation model (HadCM3) with the Soil and Water Assessment Tool (SWAT) hydrological model. A statistical downscaling model was used to generate future local scenarios of meteorological variables such as temperature and precipitation. The downscaled meteorological variables were then used as input to the SWAT hydrological model, calibrated and validated with observations, and the corresponding changes in future streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment were simulated and analyzed. Results show that daily temperature increases in three future periods (2010-2039, 2040-2069, and 2070-2099) relative to a baseline of 1961-1990, at a rate of 0.63°C per decade. Annual precipitation also shows an apparent increase of 11 mm per decade. The calibration and validation results showed that the SWAT model was able to simulate the streamflow and non-point source pollutant loads well, with a coefficient of determination of 0.7 and a Nash-Sutcliffe efficiency of about 0.7 for both the calibration and validation periods. Future climate change has a significant impact on streamflow and non-point source pollutant loads. The annual streamflow shows a fluctuating upward trend from 2010 to 2099, increasing at 1.1 m³ s⁻¹ per decade, with a significant upward trend in summer of 1.32 m³ s⁻¹ per decade. The increase in summer contributes the most to the increase in annual load compared with other seasons. The annual NH₄⁺-N load into Shitoukoumen reservoir shows a significant downward trend, decreasing at 40.6 t per decade. The annual TP load shows an insignificant increasing trend, changing at 3.77 t per decade. The results of this analysis provide a scientific basis for effective support of decision makers and strategies of adaptation to climate change.
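
    The quoted "per decade" rates are simple linear trends. A minimal sketch of extracting such a rate from an annual series by least squares (synthetic data, invented numbers) follows.

        import numpy as np

        years = np.arange(2010, 2100)
        rng = np.random.default_rng(1)
        # Synthetic annual streamflow with a built-in trend of 0.11 m^3/s per year.
        streamflow = 50.0 + 0.11 * (years - 2010) + rng.normal(0.0, 2.0, years.size)

        slope_per_year = np.polyfit(years, streamflow, 1)[0]
        print(f"trend: {10 * slope_per_year:.2f} m^3/s per decade")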

  20. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.
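
    As a rough sketch of the parsimonious statistical component described above, the snippet below regresses log10 E. coli on turbidity alone; the data and coefficients are synthetic and carry no physical meaning.

        import numpy as np

        rng = np.random.default_rng(9)
        turbidity = rng.uniform(1.0, 80.0, 200)                     # NTU
        log_ecoli = 1.0 + 0.02 * turbidity + rng.normal(0.0, 0.3, 200)

        # Simple one-predictor model suitable for real-time prediction.
        slope, intercept = np.polyfit(turbidity, log_ecoli, 1)
        predict = lambda ntu: 10 ** (intercept + slope * ntu)       # CFU/100 mL
        print(f"predicted E. coli at 50 NTU: {predict(50.0):.0f} CFU/100 mL")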

  1. Simulation and source identification of X-ray contrast media in the water cycle of Berlin.

    PubMed

    Knodel, J; Geissen, S-U; Broll, J; Dünnbier, U

    2011-11-01

    This article describes the development of a model to simulate the fate of iodinated X-ray contrast media (XRC) in the water cycle of the German capital, Berlin. It also handles data uncertainties concerning the different amounts and sources of XRC input, via source densities in individual districts, for XRC usage by inhabitants, hospitals, and radiologists. In addition, different degradation rates for the behavior of adsorbable organic iodine (AOI) were investigated in individual water compartments. The model consists of mass balances and includes, in addition to naturally branched bodies of water, the water distribution network between waterways and wastewater treatment plants, which are coupled to natural surface waters at numerous points. Scenarios were calculated according to the data uncertainties and statistically evaluated to identify the scenario with the highest agreement with the available measurement data. The simulation of X-ray contrast media in the water cycle of Berlin showed that medical institutions have to be considered as point sources for congested urban areas due to their high levels of X-ray contrast media emission. The calculations identified hospitals, represented by their capacity (number of hospital beds), as the most relevant point sources, while the inhabitants served as important diffuse sources. Deployed for almost inert substances like contrast media, the model can be used for qualitative statements and, therefore, as a decision-support tool.

  2. Discovery of the First Quadruple Gravitationally Lensed Quasar Candidate with Pan-STARRS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berghea, C. T.; Nelson, George J.; Dudik, R. P.

    We report the serendipitous discovery of the first gravitationally lensed quasar candidate from Pan-STARRS. The grizy images reveal four point-like images with magnitudes between 14.9 and 18.1 mag. The colors of the point sources are similar, and they are more consistent with quasars than with stars or galaxies. The lensing galaxy is detected in the izy bands, with an inferred photometric redshift of ∼0.6, lower than that of the point sources. We successfully model the system with a singular isothermal ellipsoid with shear, using the relative positions of the five objects as constraints. While the brightness ranking of the point sources is consistent with that of the model, we find discrepancies between the model-predicted and observed fluxes, likely due to microlensing by stars and millilensing due to the dark matter substructure. In order to fully confirm the gravitational lens nature of this system and add it to the small but growing number of the powerful probes of cosmology and astrophysics represented by quadruply lensed quasars, we require further spectroscopy and high-resolution imaging.

  3. Single scan parameterization of space-variant point spread functions in image space via a printed array: the impact for two PET/CT scanners.

    PubMed

    Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J

    2011-05-21

    Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
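
    A toy 1-D analogue of image-space resolution modelling inside an EM reconstruction may help fix ideas: a blur B sits inside both the forward and backward steps, standing in for the measured point-source kernels. This is a generic MLEM sketch, not the authors' OP-OSEM implementation, and the kernel here is spatially invariant for brevity.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def mlem_resolution_model(y, A, sigma, n_iter=50):
            """MLEM with image-space blur B: x <- x * B^T A^T (y / (A B x)) / (B^T A^T 1)."""
            x = np.ones(A.shape[1])
            sens = gaussian_filter1d(A.T @ np.ones(A.shape[0]), sigma)  # B^T A^T 1
            for _ in range(n_iter):
                proj = A @ gaussian_filter1d(x, sigma)                  # A B x
                ratio = y / np.maximum(proj, 1e-12)
                x *= gaussian_filter1d(A.T @ ratio, sigma) / np.maximum(sens, 1e-12)
            return x

        # Toy system: detector grid equals image grid (A = identity), blurred point source.
        n = 64
        A = np.eye(n)
        truth = np.zeros(n); truth[32] = 100.0
        y = np.random.default_rng(2).poisson(A @ gaussian_filter1d(truth, 2.0))
        x = mlem_resolution_model(y.astype(float), A, sigma=2.0)
        print("peak recovered at bin", int(np.argmax(x)))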

  4. Simulation of Temperature, Nutrients, Biochemical Oxygen Demand, and Dissolved Oxygen in the Catawba River, South Carolina, 1996-97

    USGS Publications Warehouse

    Feaster, Toby D.; Conrads, Paul; Guimaraes, Wladmir B.; Sanders, Curtis L.; Bales, Jerad D.

    2003-01-01

    Time-series plots of dissolved-oxygen concentrations were determined for various simulated hydrologic and point-source loading conditions along a free-flowing section of the Catawba River from Lake Wylie Dam to the headwaters of Fishing Creek Reservoir in South Carolina. The U.S. Geological Survey one-dimensional dynamic-flow model, BRANCH, was used to simulate hydrodynamic data for the Branched Lagrangian Transport Model. Water-quality data were used to calibrate the Branched Lagrangian Transport Model and included concentrations of nutrients, chlorophyll a, and biochemical oxygen demand in water samples collected during two synoptic sampling surveys at 10 sites along the main stem of the Catawba River and at 3 tributaries, plus continuous water temperature and dissolved-oxygen concentrations measured at 5 locations along the main stem of the Catawba River. A sensitivity analysis indicated that the simulated dissolved-oxygen concentrations were most sensitive to water-temperature boundary data, due to the effect of temperature on reaction kinetics and the solubility of dissolved oxygen. Of the model coefficients, the simulated dissolved-oxygen concentration was most sensitive to the biological oxidation rate of nitrite to nitrate. To demonstrate the utility of the Branched Lagrangian Transport Model for the Catawba River, the model was used to simulate several water-quality scenarios to evaluate the effect on the 24-hour mean dissolved-oxygen concentrations at selected sites for August 24, 1996, as simulated during the model calibration period of August 23-27, 1996. The first scenario included three loading conditions for the major effluent discharges along the main stem of the Catawba River: (1) current load (as sampled in August 1996); (2) no load (all point-source loads were removed from the main stem of the Catawba River; loads from the main tributaries were not removed); and (3) fully loaded (in accordance with South Carolina Department of Health and Environmental Control National Discharge Elimination System permits). Results indicate that the 24-hour mean and minimum dissolved-oxygen concentrations for August 24, 1996, changed from the no-load condition within ranges of -0.33 to 0.02 milligram per liter and -0.48 to 0.00 milligram per liter, respectively. Fully permitted loading conditions changed the 24-hour mean and minimum dissolved-oxygen concentrations by -0.88 to 0.04 milligram per liter and -1.04 to 0.00 milligram per liter, respectively. A second scenario added a point-source discharge of 25 million gallons per day to the August 1996 calibration conditions. The discharge was added at S.C. Highway 5 or at a location near Culp Island (about 4 miles downstream from S.C. Highway 5) and had no significant effect on the daily mean and minimum dissolved-oxygen concentrations. A third scenario evaluated phosphorus loading into Fishing Creek Reservoir; four loading conditions of phosphorus into the Catawba River were simulated: fully permitted and actual loading conditions, removal of all point sources from the Catawba River, and removal of all point and nonpoint sources from Sugar Creek. Removing the point-source inputs on the Catawba River and the point and nonpoint sources in Sugar Creek reduced the organic phosphorus and orthophosphate loadings to Fishing Creek Reservoir by 78 and 85 percent, respectively.

  5. Monitor-based evaluation of pollutant load from urban stormwater runoff in Beijing.

    PubMed

    Liu, Y; Che, W; Li, J

    2005-01-01

    As a major pollutant source to urban receiving waters, non-point source pollution from urban runoff needs to be well studied and effectively controlled. Based on monitoring data from urban runoff pollutant sources, this article describes a systematic estimation of total pollutant loads from the urban areas of Beijing. A numerical model was developed to quantify the main pollutant loads of urban runoff in Beijing. The method involves a sub-procedure in which the flush process influences both the quantity and quality of stormwater runoff. A statistics-based method was applied to compute the annual pollutant load as an output of the runoff. The proportions of pollutants from point and non-point sources were compared. This provides a scientific basis for proper assessment of urban stormwater pollution inputs to receiving waters, improvement of infrastructure performance, implementation of urban stormwater management, and utilization of stormwater.

  6. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    NASA Astrophysics Data System (ADS)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines with separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method on Sloan Digital Sky Survey r-band images to which artificial AGN point sources are added and then removed, using both the GAN and parametric fitting with GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ±50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easier to use than parametric methods as it requires no input parameters.

  7. The Observation of Fault Finiteness and Rapid Velocity Variation in Pnl Waveforms for the Mw 6.5, San Simeon, California Earthquake

    NASA Astrophysics Data System (ADS)

    Konca, A. O.; Ji, C.; Helmberger, D. V.

    2004-12-01

    We observed the effect of fault finiteness in the Pnl waveforms at regional distances (4° to 12°) for the Mw 6.5 San Simeon earthquake of 22 December 2003. We aimed to include more of the high frequencies (periods of 2 seconds and longer) than studies that use regional data for focal solutions (periods of 5 to 8 seconds and longer). We calculated 1-D synthetic seismograms for the Pnl portion for both a point source and a finite fault solution. Comparison of the point-source and finite-fault waveforms with data shows that the first several seconds of the point-source synthetics have considerably higher amplitude than the data, while the finite-fault synthetics do not show a similar problem. This can be explained by reversely polarized depth phases overlapping with the P waves from the later portion of the fault and reducing the amplitude of the beginning portion of the seismogram. This is clearly a finite-fault phenomenon and therefore cannot be explained by point-source calculations. Moreover, the point-source synthetics, calculated with a focal solution from a long-period regional inversion, overestimate the amplitude by three to four times relative to the data, while the finite-fault waveforms have amplitudes similar to the data. Hence, a moment estimate based only on the point-source solution of the regional data could have been wrong by half a magnitude unit. We also calculated the shifts of synthetics relative to data needed to fit the seismograms. Our results reveal that the paths from Central California to the south are faster than the paths to the east and north. The P wave arrival at the TUC station in Arizona is 4 seconds earlier than predicted by the Southern California model, while most stations to the east are delayed by around 1 second. The observed higher uppermost-mantle velocities to the south are consistent with some recent tomographic models. Synthetics generated with these models significantly improve the fits and the timing at most stations. This means that regional waveform data can be used to help locate and establish source complexities for future events.

  8. A soft X-ray map of the Perseus cluster of galaxies

    NASA Technical Reports Server (NTRS)

    Cash, W.; Malina, R. F.; Wolff, R. S.

    1976-01-01

    A 0.5-3-keV X-ray map of the Perseus cluster of galaxies is presented. The map shows a region of strong emission centered near NGC 1275 plus a highly elongated emission region which lies along the line of bright galaxies that dominates the core of the cluster. The data are compared with various models that include point and diffuse sources. One model which adequately represents the data is the superposition of a point source at NGC 1275 and an isothermal ellipsoid resulting from the bremsstrahlung emission of cluster gas. The ellipsoid has a major core radius of 20.5 arcmin and a minor core radius of 5.5 arcmin, consistent with the values obtained from galaxy counts. All acceptable models provide evidence for a compact source (less than 3 arcmin FWHM) at NGC 1275 containing about 25% of the total emission. Since the diffuse X-ray and radio components have radically different morphologies, it is unlikely that the emissions arise from a common source, as proposed in inverse-Compton models.

  9. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    NASA Astrophysics Data System (ADS)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured into one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement to evaluate the imaging performance of the system. The point-spread function (PSF) is an important means of evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and its results closely approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom of polydimethylsiloxane resin doped with polystyrene microspheres of different sizes was designed. The PSF of the CRM is measured with the different microsphere sizes, and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
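
    A small numerical sketch of this size effect: approximating the true lateral PSF as a Gaussian and a microsphere's lateral profile as a 1-D top-hat, convolution shows how the measured FWHM inflates with bead diameter. The geometry is simplified and all values are invented.

        import numpy as np

        def fwhm(x, y):
            half = y.max() / 2.0
            above = x[y >= half]
            return above[-1] - above[0]

        x = np.linspace(-2.0, 2.0, 4001)            # lateral position (um)
        true_fwhm = 0.5                             # assumed true PSF FWHM (um)
        sigma = true_fwhm / 2.3548
        psf = np.exp(-0.5 * (x / sigma) ** 2)

        for d in (0.1, 0.3, 0.5, 1.0):              # microsphere diameters (um)
            bead = (np.abs(x) <= d / 2).astype(float)       # top-hat profile
            measured = np.convolve(psf, bead, mode="same")  # PSF broadened by bead
            print(f"bead {d:.1f} um -> measured FWHM {fwhm(x, measured):.3f} um")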

  10. Continuous description of fluctuating eccentricities

    NASA Astrophysics Data System (ADS)

    Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves

    2014-11-01

    We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ(x), whose probability distribution P[ρ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model-independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.

  11. Network traffic behaviour near phase transition point

    NASA Astrophysics Data System (ADS)

    Lawniczak, A. T.; Tang, X.

    2006-03-01

    We explore packet traffic dynamics in a data network model near phase transition point from free flow to congestion. The model of data network is an abstraction of the Network Layer of the OSI (Open Systems Interconnect) Reference Model of packet switching networks. The Network Layer is responsible for routing packets across the network from their sources to their destinations and for control of congestion in data networks. Using the model we investigate spatio-temporal packets traffic dynamics near the phase transition point for various network connection topologies, and static and adaptive routing algorithms. We present selected simulation results and analyze them.

  12. Procedure for Separating Noise Sources in Measurements of Turbofan Engine Core Noise

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    The study of core noise from turbofan engines has become more important as noise from other sources, like the fan and jet, has been reduced. A multiple-microphone and acoustic source modeling method to separate correlated and uncorrelated sources has been developed. The auto- and cross-spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on either a source couplet, consisting of a single incoherent source with a single coherent source, or a source triplet, consisting of a single incoherent source with two coherent point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method works well.
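
    The correlated/uncorrelated split rests on standard cross-spectral machinery. The snippet below illustrates the idea on synthetic two-microphone data, where a shared "core" signal plus independent noise yields a magnitude-squared coherence of about 1/1.64² ≈ 0.37; this demonstrates the principle only, not the paper's couplet/triplet fit.

        import numpy as np
        from scipy.signal import coherence, welch

        fs, n = 4096, 1 << 16
        rng = np.random.default_rng(3)
        core = rng.normal(size=n)                 # coherent (core-noise-like) source
        m1 = core + 0.8 * rng.normal(size=n)      # mic 1: core + local noise
        m2 = core + 0.8 * rng.normal(size=n)      # mic 2: core + independent noise

        f, gamma2 = coherence(m1, m2, fs=fs, nperseg=1024)
        f, p11 = welch(m1, fs=fs, nperseg=1024)
        coherent_output = gamma2 * p11            # "coherent output power" at mic 1
        print(f"mean coherence: {np.mean(gamma2):.2f} (expect ~0.37)")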

  13. Evaluating Air-Quality Models: Review and Outlook.

    NASA Astrophysics Data System (ADS)

    Weil, J. C.; Sykes, R. I.; Venkatram, A.

    1992-10-01

    Over the past decade, much attention has been devoted to the evaluation of air-quality models with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (σc) about the mean concentration (C) over an ensemble, a given set of meteorological, source, etc. conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. For σc ≳ C, large residuals exist between predicted and observed concentrations, which confuse model evaluations. This paper addresses ways of evaluating model physics in light of the large σc; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error, the difference between the predicted and observed C, from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals as well as correlation analyses and laboratory data in judging model physics. In general, σc models and predictions of the probability distribution of the fluctuating concentration (c), p(c), are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that p(c) approximates a self-similar distribution along the plume center plane, a useful result in a residual analysis. At present, there is one model, ARAP, that predicts C, σc, and p(c) for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool. However, it predicts an important quantity for applications: the uncertainty in the very high and infrequent concentrations. The uncertainty is large and is needed in evaluating operational performance and in predicting the attainment of air-quality standards.

  14. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distribution as a function of flux density, and the spatial distribution of sources (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions and shows that, for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can bias the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it depends on the relative abundance of faint sources, such that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
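
    For intuition, drawing mock source fluxes from an arbitrarily broken power-law count model dN/dS ∝ S^-γᵢ on segments [Sᵢ, Sᵢ₊₁) can be done by inverse-CDF sampling per segment, as sketched below. The segment edges and slopes are invented, and slopes of exactly 1 are excluded for simplicity.

        import numpy as np

        def sample_broken_powerlaw(breaks, gammas, size, rng):
            """breaks: ascending flux edges (len k+1); gammas: slopes (len k, != 1)."""
            breaks = np.asarray(breaks, float)
            a = 1.0 - np.asarray(gammas)
            # Segment weights = integral of S^-gamma over each segment.
            w = (breaks[1:] ** a - breaks[:-1] ** a) / a
            seg = rng.choice(len(w), size=size, p=w / w.sum())
            u = rng.uniform(size=size)
            lo, hi, ai = breaks[seg], breaks[seg + 1], a[seg]
            # Inverse CDF within the chosen segment.
            return (lo ** ai + u * (hi ** ai - lo ** ai)) ** (1.0 / ai)

        rng = np.random.default_rng(4)
        fluxes = sample_broken_powerlaw([1e-3, 1e-1, 10.0], [1.6, 2.4], 100000, rng)
        print(f"median flux: {np.median(fluxes):.4f} Jy")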

  15. Using SPARROW to Model Total Nitrogen Sources, and Transport in Rivers and Streams of California and Adjacent States, U.S.A

    NASA Astrophysics Data System (ADS)

    Saleh, D.; Domagalski, J. L.

    2012-12-01

    Sources and factors affecting the transport of total nitrogen are being evaluated for a study area that covers most of California and some areas in Oregon and Nevada, using the SPARROW model (SPAtially Referenced Regression On Watershed attributes) developed by the U.S. Geological Survey. Mass loads of total nitrogen calculated for monitoring sites at stream gauging stations are regressed against land-use and other factors affecting nitrogen transport, including fertilizer use, recharge, atmospheric deposition, and stream characteristics, to understand how total nitrogen is transported under average conditions. SPARROW models have been used successfully in other parts of the country to understand how nutrients are transported and how management strategies, such as Total Maximum Daily Load (TMDL) assessments, can be formulated. Fertilizer use, atmospheric deposition, and climatic data were obtained for 2002, and loads for that year were calculated for monitored streams and point sources (mostly wastewater treatment plants). The stream loads were calculated using the adjusted maximum likelihood estimation (AMLE) method. River discharge and nitrogen concentrations were de-trended in these calculations in order to eliminate the effect of temporal changes on stream load. Effluent discharge information as well as total nitrogen concentrations from point sources were obtained from USEPA databases and from facility records. The model indicates that atmospheric deposition and fertilizer use account for a large percentage of the total nitrogen load in many of the larger watersheds throughout the study area. Point sources, on the other hand, are generally localized around large cities, are considered insignificant sources, and account for a small percentage of the total nitrogen loads throughout the study area.

  16. Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.

    PubMed

    Feng, Bing; Zeng, Gengsheng L

    2014-04-10

    A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting between a point-source and the centers of sub-pixels inside each crystal pitch area. For each line ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
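
    The per-ray bookkeeping is Beer-Lambert attenuation: each sub-pixel ray contributes an interaction probability 1 - exp(-μL) over its slant path L through the crystal. The sketch below grossly simplifies the geometry (no reflector, one crystal row) and uses invented constants.

        import numpy as np

        MU_NAI = 0.28                # assumed linear attenuation coefficient (1/mm)
        DEPTH = 10.0                 # crystal thickness (mm)
        PITCH, NSUB = 2.0, 60        # crystal pitch (mm), sub-pixels per pitch

        def pixel_response(src_dist, src_x, pixel_x):
            """Mean interaction probability for one crystal pixel (Beer-Lambert)."""
            subs = pixel_x - PITCH / 2 + (np.arange(NSUB) + 0.5) * PITCH / NSUB
            sec = np.hypot(subs - src_x, src_dist) / src_dist   # slant factor
            return np.mean(1.0 - np.exp(-MU_NAI * DEPTH * sec))

        # Ideal flood for a point source 360 mm from the detector face, centred.
        flood = np.array([pixel_response(360.0, 0.0, px)
                          for px in (np.arange(41) - 20) * PITCH])
        print(f"edge/centre response ratio: {flood[0] / flood[20]:.4f}")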

  17. NuSTAR Hard X-Ray Survey of the Galactic Center Region. II. X-Ray Point Sources

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Mori, Kaya; Hailey, Charles J.; Nynka, Melania; Zhang, Shou; Gotthelf, Eric; Fornasini, Francesca M.; Krivonos, Roman; Bauer, Franz; Perez, Kerstin

    2016-01-01

    We present the first survey results of hard X-ray point sources in the Galactic Center (GC) region by NuSTAR. We have discovered 70 hard (3-79 keV) X-ray point sources in a 0.6 deg² region around Sgr A* with a total exposure of 1.7 Ms, and 7 sources in the Sgr B2 field with 300 ks. We identify clear Chandra counterparts for 58 NuSTAR sources and assign candidate counterparts for the remaining 19. The NuSTAR survey reaches X-ray luminosities of ≈4 × 10³² and ≈8 × 10³² erg/s at the GC (8 kpc) in the 3-10 and 10-40 keV bands, respectively. The source list includes three persistent luminous X-ray binaries (XBs) and the likely run-away pulsar called the Cannonball. New source-detection significance maps reveal a cluster of hard (>10 keV) X-ray sources near the Sgr A diffuse complex with no clear soft X-ray counterparts. The severe extinction observed in the Chandra spectra indicates that all the NuSTAR sources are in the central bulge or are of extragalactic origin. Spectral analysis of relatively bright NuSTAR sources suggests that magnetic cataclysmic variables constitute a large fraction (>40%-60%). Both the spectral analysis and the log N-log S distributions of the NuSTAR sources indicate that the X-ray spectra of the NuSTAR sources should have kT > 20 keV on average for a single-temperature thermal plasma model, or an average photon index of Γ = 1.5-2 for a power-law model. These findings suggest that the GC X-ray source population may contain a larger fraction of XBs with high plasma temperatures than the field population.

  18. Change-point detection of induced and natural seismicity

    NASA Astrophysics Data System (ADS)

    Fiedler, B.; Holschneider, M.; Zoeller, G.; Hainzl, S.

    2016-12-01

    Earthquake rates are influenced by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic sources. While the first two sources can be modeled well because the source is known, transient aseismic processes are more difficult to detect. However, detecting the associated changes in earthquake activity is of great interest, because it might help to identify natural aseismic deformation patterns (such as slow slip events) and the occurrence of induced seismicity related to human activities. We develop a Bayesian approach to detect change-points in seismicity data that are modeled by Poisson processes. By means of a likelihood-ratio test, we assess the significance of the change in intensity. The model is also extended to spatiotemporal data to detect the area of the transient changes. The method is first tested on synthetic data and then applied to observational data from the central US and the Bardarbunga volcano in Iceland.
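
    A bare-bones frequentist cousin of this idea, assuming a single change-point in a Poisson count series: scan candidate change times, take the maximum-likelihood split, and score it with a likelihood ratio. The χ² reference below is only a rough approximation because the change time is scanned, and the paper's treatment is Bayesian.

        import numpy as np
        from scipy.stats import chi2

        def loglik(counts):
            rate = max(counts.mean(), 1e-9)
            return np.sum(counts * np.log(rate) - rate)   # log k! terms cancel in the LR

        def change_point(counts):
            n = len(counts)
            ll0 = loglik(counts)                          # null: one constant rate
            ll_tau, tau = max(((loglik(counts[:t]) + loglik(counts[t:]), t)
                               for t in range(2, n - 1)), key=lambda p: p[0])
            lr = 2.0 * (ll_tau - ll0)
            return tau, lr, chi2.sf(lr, df=2)             # approximate p-value

        rng = np.random.default_rng(5)
        counts = np.concatenate([rng.poisson(2.0, 60), rng.poisson(6.0, 40)]).astype(float)
        tau, lr, p = change_point(counts)
        print(f"estimated change-point at index {tau}, LR = {lr:.1f}, p ~ {p:.2g}")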

  19. Source Region Modeling of Explosions 2 and 3 from the Source Physics Experiment Using the Rayleigh Integral Method

    NASA Astrophysics Data System (ADS)

    Jones, K. R.; Arrowsmith, S.; Whitaker, R. W.

    2012-12-01

    The overall mission of the National Center for Nuclear Security (NCNS) Source Physics Experiment (SPE-N) at the Nevada National Security Site near Las Vegas, Nevada, is to improve upon and develop new physics-based models of underground nuclear explosions using scaled underground chemical explosions as proxies. To this end, we use the Rayleigh integral as an approximation to the Helmholtz-Kirchhoff integral [Whitaker, 2007 and Arrowsmith et al., 2011] to model infrasound generation in the far field. Infrasound generated by single-point explosive sources above ground can typically be treated as coming from monopole point sources. While such a source is relatively simple, modelling above-ground point sources is complicated by path effects related to the propagation of the acoustic signal, which are out of the scope of this study. In contrast, for explosions that occur below ground, including the SPE explosions, the source region is more complicated but the observation distances are much closer (<5 km), greatly reducing the complication of path effects. In this case, elastic energy from the explosion radiates upward and spreads out, depending on depth, over a more distributed region at the surface. Because of this broad surface perturbation of the atmosphere, we cannot model the source as a simple monopole point source. Instead, we use the analogy of a piston mounted in a rigid, infinite baffle, where the surface area that moves as a result of the explosion is the piston and the surrounding region is the baffle. The area of the "piston" is determined by the depth and explosive yield of the event. In this study we look at data from SPE-N-2 and SPE-N-3. Both shots had an explosive yield of 1 ton at a depth of 45 m. We collected infrasound data with up to eight stations and 32 sensors within a 5 km radius of ground zero. To determine the area of the surface acceleration, we used data from twelve surface accelerometers installed within a 100 m radius of ground zero. With the accelerometer data defining the vertical motion of the surface, we use the Rayleigh integral method [Whitaker, 2007 and Arrowsmith et al., 2011] to generate a synthetic infrasound pulse for comparison with the observed data. Because the phase across the "piston" is not necessarily uniform, constructive and destructive interference will change the shape of the acoustic pulse depending on whether it is observed directly above the source (on-axis) or perpendicular to the source (off-axis). Comparing the observed data to the synthetic data, we note that the overall structure of the pulse agrees well and that the differences can be attributed to a number of factors, including the sensors used, topography, and meteorological conditions. One other potential source of error is that we use a flat, symmetric source region for the "piston," whereas in reality the source region is neither flat nor perfectly symmetric. A primary goal of this work is to better understand and model the relationships between surface area, depth, and yield of underground explosions.
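
    For a single frequency, the Rayleigh integral for a baffled piston reduces to a sum of e^{ikR}/R contributions over surface elements, p(r) = (iρck/2π) Σ v·e^{ikR}/R·dA. The sketch below evaluates it for a uniformly vibrating circular piston; the piston radius, surface velocity, and frequency are invented, not SPE values.

        import numpy as np

        rho, c = 1.2, 340.0            # air density (kg/m^3), sound speed (m/s)
        f = 5.0                        # infrasound frequency (Hz)
        k = 2 * np.pi * f / c
        a, v0 = 50.0, 0.01             # piston radius (m), surface velocity (m/s)

        # Discretize the piston face (z = 0 plane) into small area elements.
        xs = np.linspace(-a, a, 201)
        X, Y = np.meshgrid(xs, xs)
        mask = X**2 + Y**2 <= a**2
        dA = (xs[1] - xs[0]) ** 2

        def pressure(obs):
            R = np.sqrt((obs[0] - X[mask])**2 + (obs[1] - Y[mask])**2 + obs[2]**2)
            return 1j * rho * c * k / (2 * np.pi) * np.sum(v0 * np.exp(1j * k * R) / R) * dA

        for d in (500.0, 1000.0, 2000.0, 4000.0):
            p = pressure((d, 0.0, 50.0))        # observer d metres away, 50 m up
            print(f"{d:6.0f} m: |p| = {abs(p):.4f} Pa")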

  20. Comparison of dew point temperature estimation methods in Southwestern Georgia

    Treesearch

    Marcus D. Williams; Scott L. Goodrick; Andrew Grundstein; Marshall Shepherd

    2015-01-01

    Recent upward trends in acres irrigated have been linked to increasing near-surface moisture. Unfortunately, stations with dew point data for monitoring near-surface moisture are sparse. Thus, models that estimate dew points from more readily observed data sources are useful. Daily average dew point temperatures were estimated and evaluated at 14 stations in...

  1. Point source sulphur dioxide peaks and hospital presentations for asthma.

    PubMed

    Donoghue, A M; Thomas, M

    1999-04-01

    To examine the effect on hospital presentations for asthma of brief exposures to sulphur dioxide (SO2) (within the range 0-8700 micrograms/m3) emanating from two point sources in a remote rural city of 25,000 people. A time series analysis of SO2 concentrations and hospital presentations for asthma was undertaken at Mount Isa where SO2 is released into the atmosphere by a copper smelter and a lead smelter. The study examined 5 minute block mean SO2 concentrations and daily hospital presentations for asthma, wheeze, or shortness of breath. Generalised linear models and generalised additive models based on a Poisson distribution were applied. There was no evidence of any positive relation between peak SO2 concentrations and hospital presentations or admissions for asthma, wheeze, or shortness of breath. Brief exposures to high concentrations of SO2 emanating from point sources at Mount Isa do not cause sufficiently serious symptoms in asthmatic people to require presentation to hospital.
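
    The analysis described is essentially a Poisson regression of daily counts on peak SO2. A minimal sketch with synthetic data (no built-in effect, so the fitted slope should be consistent with zero) might look like this, assuming statsmodels is available.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n_days = 730
        peak_so2 = rng.gamma(shape=1.5, scale=600.0, size=n_days)   # daily peak, ug/m^3
        counts = rng.poisson(1.2, size=n_days)                      # null: no SO2 effect

        X = sm.add_constant(peak_so2)
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        print(fit.summary().tables[1])    # slope expected to be consistent with zero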

  2. URBAN/SUBURBAN WATERSHED CHARACTERIZATION

    EPA Science Inventory

    The ability to characterize the land surface and related pollutant source loadings is critical for reliable watershed modeling. Urban/suburban land uses are the most rapidly growing land use class, generating non-point source pollutant loadings likely to seriously impair streams...

  3. Location identification for indoor instantaneous point contaminant source by probability-based inverse Computational Fluid Dynamics modeling.

    PubMed

    Liu, X; Zhai, Z

    2008-02-01

    Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-based inverse modeling method that can identify the source location of an instantaneous point source placed in an enclosed environment with known source release time. The study presents mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating good capability of the method in finding indoor pollutant sources. The research lays a solid foundation for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source locations with limited sensor outputs. This will ensure effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.
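
    In the same spirit, a toy posterior over candidate source cells can be computed from a precomputed forward response matrix (conceptually, one CFD run per candidate cell) and a Gaussian measurement likelihood. Everything below, including the response matrix, is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        n_cells, n_sensors = 50, 4
        # response[i, j]: concentration at sensor j for a unit release in cell i.
        response = rng.uniform(0.0, 1.0, (n_cells, n_sensors))

        true_cell, sigma = 17, 0.05
        obs = response[true_cell] + rng.normal(0.0, sigma, n_sensors)

        # Gaussian likelihood per candidate cell, uniform prior over cells.
        log_like = -0.5 * np.sum((obs - response) ** 2, axis=1) / sigma**2
        posterior = np.exp(log_like - log_like.max())
        posterior /= posterior.sum()
        print("most probable source cell:", int(np.argmax(posterior)),
              f"(p = {posterior.max():.2f})")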

  4. Air source integrated heat pump simulation model for EnergyPlus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; New, Joshua; Baxter, Van

    An Air Source Integrated Heat Pump (AS-IHP) is an air-source, multi-functional space-conditioning unit with a water heating (WH) function, which can lead to great energy savings by recovering condensing waste heat for domestic water heating. This paper summarizes the development of the EnergyPlus AS-IHP model, introducing the physics, sub-models, working modes, and control logic. Based on the model, building energy simulations were conducted to demonstrate greater than 50% annual energy savings, in comparison to a baseline heat pump with an electric water heater, over 10 US cities, using the EnergyPlus quick-service restaurant template building. We assessed the water heating energy saving potentials of the AS-IHP versus both gas and electric baseline systems, and pointed out climate zones where AS-IHPs are promising. In addition, a grid integration strategy was investigated to reveal further energy saving and electricity cost reduction potentials, via increasing the water heating set point temperature during off-peak hours and using larger water tanks.

  5. Deterministic seismic hazard macrozonation of India

    NASA Astrophysics Data System (ADS)

    Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.

    2012-10-01

    Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach, using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed for the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments, and shear zones associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s were calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. A hazard evaluation without the logic-tree approach was also carried out for comparison. Contour maps showing the spatial variation of the hazard values are presented in the paper.
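
    The deterministic kernel of such a calculation is small: for each grid point, evaluate an attenuation relation for every source within range and keep the maximum. The sketch below uses a generic ln(PHA) = c1 + c2·M - c3·ln(R + c4) form with placeholder coefficients, not any of the 12 relations used in the paper.

        import numpy as np

        c1, c2, c3, c4 = -3.5, 1.0, 1.2, 15.0      # placeholder coefficients

        def pha_g(mag, dist_km):
            return np.exp(c1 + c2 * mag - c3 * np.log(dist_km + c4))

        # Hypothetical sources: (lat, lon, Mw); one grid point at (lat0, lon0).
        sources = np.array([[13.0, 77.5, 6.2], [12.5, 78.4, 5.4], [14.1, 77.0, 7.0]])
        lat0, lon0 = 13.0, 77.6

        # Approximate epicentral distance on a small-angle flat-Earth basis.
        d = 111.2 * np.hypot(sources[:, 0] - lat0,
                             (sources[:, 1] - lon0) * np.cos(np.radians(lat0)))
        in_range = d <= 300.0
        print(f"deterministic PHA: {pha_g(sources[in_range, 2], d[in_range]).max():.3f} g")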

  6. Small catchments DEM creation using Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Gafurov, A. M.

    2018-01-01

    Digital elevation models (DEMs) are an important source of information on terrain, allowing researchers to evaluate various exogenous processes. The higher the accuracy of the DEM, the more detailed the analyses it can support. An important source of data for the construction of DEMs is point clouds obtained by terrestrial laser scanning (TLS) and from unmanned aerial vehicles (UAVs). In this paper, we present the results of constructing a DEM of small catchments using UAVs. Assessment of the UAV DEM showed accuracy comparable to TLS when real-time kinematic GPS (RTK-GPS) ground control points (GCPs) and check points (CPs) were used. In this case, the main source of error in DEM construction is the georeferencing of the survey results.

  7. A deeper look at the X-ray point source population of NGC 4472

    NASA Astrophysics Data System (ADS)

    Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.

    2017-10-01

    In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux F_X(0.5-2 keV) = 6.3 × 10⁻¹⁶ erg s⁻¹ cm⁻². Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy which is both complete and has a large number of objects (~100) below 10³⁸ erg s⁻¹, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find that red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.

  8. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of the source-time functions between adjacent sources, and select the strength of the smoothing constraint that minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signatures of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source, for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack-wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.
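
    In schematic form, the frequency-domain inversion with smoothing described above is a damped least-squares problem (the notation here is ours, not the authors'): with data vector d, Green's function matrix G, stacked source-time functions s, and a difference operator D penalizing variation between adjacent grid sources,

      \hat{\mathbf{s}}(\alpha) = \arg\min_{\mathbf{s}} \left( \| \mathbf{d} - \mathbf{G}\mathbf{s} \|^{2} + \alpha^{2} \| \mathbf{D}\mathbf{s} \|^{2} \right),

    where the damping weight α is the value that minimizes ABIC.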

  9. Simulation of in-stream water quality on global scale under changing climate and anthropogenic conditions

    NASA Astrophysics Data System (ADS)

    Voss, Anja; Bärlund, Ilona; Punzet, Manuel; Williams, Richard; Teichert, Ellen; Malve, Olli; Voß, Frank

    2010-05-01

    Although catchment-scale modelling of water and solute transport and transformations is a widely used technique to study pollution pathways and the effects of natural changes, policies and mitigation measures, there are only a few examples of global water quality modelling. This work provides a description of the new continental-scale water quality model WorldQual and an analysis of model simulations under changed climate and anthropogenic conditions, with respect to changes in diffuse and point loading as well as surface water quality. BOD is used as an indicator of the level of organic pollution and its oxygen-depleting potential, and of the overall health of aquatic ecosystems. The first application of this new water quality model is to the river systems of Europe. The model itself is being developed as part of the EU-funded SCENES Project, which has the principal goal of developing new scenarios of the future of freshwater resources in Europe. The aim of the model is to determine chemical fluxes in different pathways, combining analysis of water quantity with water quality. Simple equations, consistent with the availability of data on the continental scale, are used to simulate the response of in-stream BOD concentrations to diffuse and anthropogenic point loadings as well as flow dilution. Point sources are divided into manufacturing, domestic and urban loadings, whereas diffuse loadings come from scattered settlements, agricultural inputs (for instance livestock farming), and natural background sources. The model is tested against measured longitudinal gradients and time-series data at specific river locations with different loading characteristics, such as the Thames, which is dominated by domestic loading, and the Ebro, which has a relatively high share of diffuse loading. Scenario studies are used to investigate the influence of climate and anthropogenic changes on European water resources, addressing the following questions: 1. What percentage of river systems will have degraded water quality due to different driving forces? 2. How will climate change and changes in wastewater discharges affect water quality? The analysis includes the following scenario aspects: 1. Climate, with changed runoff (affecting diffuse pollution and loading from sealed areas), river discharge (causing dilution or concentration of point-source pollution) and water temperature (affecting BOD degradation). 2. Point sources, with changed population (affecting domestic pollution) and connectivity to treatment plants (influencing domestic and manufacturing pollution as well as input from sealed areas and scattered settlements).

  10. Point-source inversion techniques

    NASA Astrophysics Data System (ADS)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
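
    To make the generalized-inversion step concrete, here is a minimal Python sketch of a linear moment-tensor inversion; the kernel matrix G and all numbers are synthetic placeholders, since in practice G would be assembled from Green's functions for the assumed earth model.

      import numpy as np

      # Hypothetical setup: a seismogram of n_samples points and the 6
      # independent components of a symmetric moment tensor.
      rng = np.random.default_rng(0)
      n_samples, n_mt = 500, 6

      # G: excitation kernels mapping each moment-tensor component to the
      # synthetic seismogram (built from Green's functions in practice).
      G = rng.standard_normal((n_samples, n_mt))

      m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])     # illustrative tensor
      d = G @ m_true + 0.05 * rng.standard_normal(n_samples)  # noisy "data"

      # Generalized (least-squares) inversion for the moment tensor.
      m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
      print(np.round(m_est, 3))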

  11. Heterogeneity of direct aftershock productivity of the main shock rupture

    NASA Astrophysics Data System (ADS)

    Guo, Yicun; Zhuang, Jiancang; Hirata, Naoshi; Zhou, Shiyong

    2017-07-01

    The epidemic type aftershock sequence (ETAS) model is widely used to describe and analyze the clustering behavior of seismicity. Instead of regarding large earthquakes as point sources, the finite-source ETAS model treats them as ruptures that extend in space. Each earthquake rupture consists of many patches, and each patch triggers its own aftershocks isotropically. We design an iterative algorithm to invert the unobserved fault geometry based on the stochastic reconstruction method. This model is applied to analyze the Japan Meteorological Agency (JMA) catalog during 1964-2014. We take six great earthquakes with magnitudes >7.5 after 1980 as finite sources and reconstruct the aftershock productivity patterns on each rupture surface. Compared with the point-source ETAS model, we find the following: (1) the finite-source model improves the data fitting; (2) direct aftershock productivity is heterogeneous on the rupture plane; (3) the triggering abilities of M5.4+ events are enhanced; (4) the background rate is higher in the off-fault region and lower in the on-fault region for the Tohoku earthquake, while high probabilities of direct aftershocks are distributed all over the source region in the modified model; (5) the triggering abilities of five main shocks become 2-6 times higher after taking the rupture geometries into consideration; and (6) the trends of the cumulative background rate are similar in both models, indicating the same levels of detection ability for seismicity anomalies. Moreover, correlations between aftershock productivity and slip distributions imply that aftershocks within rupture faults are adjustments to coseismic stress changes due to slip heterogeneity.
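
    For reference, a common parameterization of the point-source ETAS conditional intensity that the finite-source variant generalizes patch-by-patch is (this is the standard form, not necessarily the exact parameterization used in the paper):

      \lambda(t, x, y \mid \mathcal{H}_t) = \mu(x, y) + \sum_{i:\, t_i < t} \kappa(m_i)\, g(t - t_i)\, f(x - x_i,\, y - y_i;\, m_i),

    where κ(m) is the magnitude-dependent productivity, g a normalized Omori-type time kernel, and f a spatial kernel; the finite-source model replaces the single epicenter (x_i, y_i) of a large event by a sum over patches on its reconstructed rupture surface.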

  12. Outdoor air pollution in close proximity to a continuous point source

    NASA Astrophysics Data System (ADS)

    Klepeis, Neil E.; Gabel, Etienne B.; Ott, Wayne R.; Switzer, Paul

    Data are lacking on human exposure to air pollutants occurring in ground-level outdoor environments within a few meters of point sources. To better understand outdoor exposure to tobacco smoke from cigarettes or cigars, and exposure to other types of outdoor point sources, we performed more than 100 controlled outdoor monitoring experiments on a backyard residential patio in which we released pure carbon monoxide (CO) as a tracer gas for continuous time periods lasting 0.5-2 h. The CO was emitted from a single outlet at a fixed per-experiment rate of 120-400 cc min^-1 (~140-450 mg min^-1). We measured CO concentrations every 15 s at up to 36 points around the source along orthogonal axes. The CO sensors were positioned at standing or sitting breathing heights of 2-5 ft (up to 1.5 ft above and below the source) and at horizontal distances of 0.25-2 m. We simultaneously measured real-time air speed, wind direction, relative humidity, and temperature at single points on the patio. The ground-level air speeds on the patio were similar to those we measured during a survey of 26 outdoor patio locations in 5 nearby towns. The CO data exhibited a well-defined proximity effect similar to the indoor proximity effect reported in the literature. Average concentrations were approximately inversely proportional to distance. Average CO levels were approximately proportional to source strength, supporting generalization of our results to different source strengths. For example, we predict a cigarette smoker would cause average fine particle levels of approximately 70-110 μg m^-3 at horizontal distances of 0.25-0.5 m. We also found that average CO concentrations rose significantly as average air speed decreased. We fit a multiplicative regression model to the empirical data that predicts outdoor concentrations as a function of source emission rate, source-receptor distance, air speed and wind direction. The model described the data reasonably well, accounting for ~50% of the log-CO variability in 5-min CO concentrations.
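
    A multiplicative model of this kind can be fitted as a log-linear regression. The Python sketch below uses synthetic data and assumed exponents consistent with the qualitative findings above (concentration roughly proportional to source strength and inversely proportional to distance); the paper's actual coefficients and its wind-direction term are not reproduced here.

      import numpy as np

      # Illustrative multiplicative model: C = b0 * g**b1 * d**b2 * u**b3,
      # fitted by ordinary least squares after taking logs. Data are synthetic.
      rng = np.random.default_rng(1)
      n = 200
      g = rng.uniform(140, 450, n)    # source strength, mg/min
      d = rng.uniform(0.25, 2.0, n)   # source-receptor distance, m
      u = rng.uniform(0.05, 1.0, n)   # air speed, m/s

      # Assumed "true" exponents: +1 on strength, -1 on distance, -0.5 on speed.
      logC = np.log(g) - np.log(d) - 0.5 * np.log(u) + 0.3 * rng.standard_normal(n)

      X = np.column_stack([np.ones(n), np.log(g), np.log(d), np.log(u)])
      beta, *_ = np.linalg.lstsq(X, logC, rcond=None)
      print(np.round(beta, 2))   # approximately [0, 1, -1, -0.5]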

  13. Trail Creek II: Modeling Flow and E. Coli Concentrations in a Small Urban Stream using SWAT

    NASA Astrophysics Data System (ADS)

    Radcliffe, D. E.; Saintil, T.

    2017-12-01

    Pathogens are one of the leading causes of stream and river impairment in the State of Georgia. The common presence of fecal bacteria is driven by several factors, including rapid population growth stressing pre-existing and ageing infrastructure, urbanization and poor planning, increased percent imperviousness, urban runoff, municipal discharges, sewage, pet/wildlife waste and leaky septic tanks. The Trail Creek watershed, located in Athens-Clarke County, Georgia, covers about 33 km2. Stream segments within Trail Creek violate the GA standard due to high levels of fecal coliform bacteria. In this study, the Soil and Water Assessment Tool (SWAT) modeling software was used to predict E. coli bacteria concentrations during baseflow and stormflow. Census data from the county were used for human and animal population estimates, and the Fecal Indicator Tool was used to generate the number of colony-forming units of E. coli for each source. The model was calibrated at a daily time step with one year of monitored streamflow and E. coli bacteria data using SWAT-CUP and the SUFI2 algorithm. To simulate leaking sewer lines, we added point sources in the five subbasins in the SWAT model with the greatest length of sewer line within 50 m of the stream. The flow in the point sources was set to 5% of the stream flow and the bacteria count to that of raw sewage (30,000 cfu/100 mL). The calibrated model showed that the average load during 2003-2013 at the watershed outlet was 13 million cfu per month. Using the calibrated model, we simulated scenarios that assumed leaking sewers were repaired in one of the five subbasins with point sources. The reduction ranged from 10 to 46%, with the largest reduction in the subbasin in the downtown area. Future modeling work will focus on the use of green infrastructure to address sources of bacteria.

  14. The efficient model to define a single light source position by use of high dynamic range image of 3D scene

    NASA Astrophysics Data System (ADS)

    Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han

    2016-10-01

    One of the challenges of augmented reality is the seamless combination of objects of the real and virtual worlds, for example light sources. We suggest measurement and computation models for the reconstruction of a light source position. The model is based on the luminance of a small diffuse surface directly illuminated by a point-like source placed at a short distance from the observer or camera. The advantage of the computational model is the ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.

  15. Integration of Geodata in Documenting Castle Ruins

    NASA Astrophysics Data System (ADS)

    Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.

    2016-06-01

    Textured three-dimensional models are currently one of the standard methods of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources; the most commonly used tools for documentation purposes are the digital camera and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows modelling and visualization of 3D models of historical structures. An additional benefit of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery, together with an analysis of the accuracy of the presented methods. The paper describes results obtained from raw data consisting of a point cloud measured using terrestrial laser scanning acquired with a Leica ScanStation2 and digital imagery taken with a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.

  16. Clinton River Sediment Transport Modeling Study

    EPA Pesticide Factsheets

    The U.S. ACE develops sediment transport models for tributaries to the Great Lakes that discharge to AOCs. The models developed help State and local agencies to evaluate better ways for soil conservation and non-point source pollution prevention.

  17. To Grid or Not to Grid… Precipitation Data and Hydrological Modeling in the Khangai Mountain Region of Mongolia

    NASA Astrophysics Data System (ADS)

    Venable, N. B. H.; Fassnacht, S. R.; Adyabadam, G.

    2014-12-01

    Precipitation data in semi-arid and mountainous regions are often spatially and temporally sparse, yet precipitation is a key variable needed to drive hydrological models. Gridded precipitation datasets provide a spatially and temporally coherent alternative to the use of point-based station data but, in the case of Mongolia, may not be constructed from all data available from government sources, or may only be available at coarse resolutions. To examine the uncertainty associated with the use of gridded and/or point precipitation data, monthly water balance models of three river basins across the forest steppe (the Khoid Tamir River at Ikhtamir), steppe (the Baidrag River at Bayanburd), and desert steppe (the Tuin River at Bogd) ecozones in the Khangai Mountain Region of Mongolia were compared. The models were forced over a 10-year period from 2001-2010 with gridded temperature and precipitation data at a 0.5 x 0.5 degree resolution. These results were compared to modeling using an interpolated hybrid of the gridded data and additional point data recently gathered from government sources, and to modeling with point data from the meteorological station nearest the streamflow gage of choice. Goodness-of-fit measures including the Nash-Sutcliffe efficiency statistic, the percent bias, and the RMSE-observations standard deviation ratio were used to assess model performance. The results were mixed, with smaller differences between the two gridded products than between gridded products and station data. The largest differences in precipitation inputs and modeled runoff amounts occurred between the two gridded datasets and station data in the desert steppe (Tuin), and the smallest differences occurred in the forest steppe (Khoid Tamir) and steppe (Baidrag). Mean differences between water balance model results are generally smaller than mean differences in the initial input data over the period of record. Seasonally, larger differences in gridded versus station-based precipitation products and modeled outputs occur in summer in the desert steppe, and in spring in the forest steppe. The choice of precipitation data source, gridded or point-based, directly affects model outcomes, with greater uncertainty noted on a seasonal basis across ecozones of the Khangai.

  18. Distributed and dynamic modelling of hydrology, phosphorus and ecology in the Hampshire Avon and Blashford Lakes: evaluating alternative strategies to meet WFD standards.

    PubMed

    Whitehead, P G; Jin, L; Crossman, J; Comber, S; Johnes, P J; Daldorph, P; Flynn, N; Collins, A L; Butterfield, D; Mistry, R; Bardon, R; Pope, L; Willows, R

    2014-05-15

    The issues of diffuse and point source phosphorus (P) pollution in the Hampshire Avon and Blashford Lakes are explored using a catchment model of the river system. A multibranch, process based, dynamic water quality model (INCA-P) has been applied to the whole river system to simulate water fluxes, total phosphorus (TP) and soluble reactive phosphorus (SRP) concentrations and ecology. The model has been used to assess impacts of both agricultural runoff and point sources from waste water treatment plants (WWTPs) on water quality. The results show that agriculture contributes approximately 40% of the phosphorus load and point sources the other 60% of the load in this catchment. A set of scenarios have been investigated to assess the impacts of alternative phosphorus reduction strategies and it is shown that a combined strategy of agricultural phosphorus reduction through either fertiliser reductions or better phosphorus management together with improved treatment at WWTPs would reduce the SRP concentrations in the river to acceptable levels to meet the EU Water Framework Directive (WFD) requirements. A seasonal strategy for WWTP phosphorus reductions would achieve significant benefits at reduced cost. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nitao, J J

    The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input. The methodology is sufficiently general to allow various forms of data. In this report, the authors focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these. They refer the reader to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (location and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation. MCMC and SMC will require this computation to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report they show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they can compute plumes from unit impulse sources only. By using linear superposition, they can obtain the response for any strength history. This response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x_m due to a potential (unit impulse) source located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field; the wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are many fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
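
    The superposition idea is simple to state in code: once the unit-impulse response at a monitor (the forward Green's function) has been tabulated, the concentration for any proposed strength history is a discrete convolution, with no further transport runs. A minimal Python sketch, with a made-up Green's function:

      import numpy as np

      n_t = 100
      dt = 1.0
      t = np.arange(n_t) * dt

      # g[k]: concentration at the monitor k steps after a unit impulse
      # release at the source (illustrative decaying-pulse shape).
      g = np.exp(-0.5 * ((t - 20.0) / 5.0) ** 2)

      q = np.zeros(n_t)
      q[10:40] = 2.5                        # proposed source strength history

      c = np.convolve(q, g)[:n_t] * dt      # linear superposition
      print(c.max())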

  20. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    PubMed

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of the intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, considering a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses; and (iii) local in situ observations. Governmental inventory data were used as the reference for all applications. The strengths and weaknesses of the different approaches, and how they affect emission estimation uncertainty, were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, particularly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.

  1. Investigation of Finite Sources through Time Reversal

    NASA Astrophysics Data System (ADS)

    Kremers, Simon; Brietzke, Gilbert; Igel, Heiner; Larmat, Carene; Fichtner, Andreas; Johnson, Paul A.; Huang, Lianjie

    2010-05-01

    Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a-priori information (except the earth model and the data). It consists of injecting time-flipped records from seismic stations into the model to create an approximate reverse movie of wave propagation, from which the location of the hypocenter and other information might be inferred. In this study, the backward propagation is performed numerically using a parallel cartesian spectral element code. Initial tests using point-source moment tensors serve as a control on the suitability of the wave propagation algorithm used. We then investigate the potential of time reversal to recover finite source characteristics (e.g., size of ruptured area, rupture velocity, etc.). We used synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence of various assumptions made about the source (e.g., origin time, hypocenter, fault location, etc.), adjoint source weighting (e.g., correcting for epicentral distance) and structure (uncertainty in the velocity model) on the results of the time reversal process. We give an overview of the quality of focusing of the different wavefield properties (i.e., displacements, strains, rotations, energies). Additionally, the potential to recover source properties of multiple point sources at the same time is discussed.

  2. Study and comparison of different sensitivity models for a two-plane Compton camera.

    PubMed

    Muñoz, Enrique; Barrio, John; Bernabéu, José; Etxebeste, Ane; Lacasta, Carlos; Llosá, Gabriela; Ros, Ana; Roser, Jorge; Oliver, Josep F

    2018-06-25

    Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage of the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model have been compared with those from other models. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with 22Na sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage of the scatterer.

  3. Measurement of fluorophore concentrations and fluorescence quantum yield in tissue-simulating phantoms using three diffusion models of steady-state spatially resolved fluorescence.

    PubMed

    Diamond, Kevin R; Farrell, Thomas J; Patterson, Michael S

    2003-12-21

    Steady-state diffusion theory models of fluorescence in tissue have been investigated for recovering fluorophore concentrations and fluorescence quantum yield. Spatially resolved fluorescence, excitation and emission reflectance were computed using Monte Carlo simulations, and measured using a multi-fibre probe on tissue-simulating phantoms containing aluminium phthalocyanine tetrasulfonate (AlPcS4), Photofrin, or meso-tetra-(4-sulfonatophenyl)-porphine dihydrochloride (TPPS4). The accuracy of the fluorophore concentration and fluorescence quantum yield recovered by three different models of spatially resolved fluorescence was compared. The models were based on: (a) a weighted difference of the excitation and emission reflectance, (b) fluorescence due to a point excitation source, or (c) fluorescence due to a pencil beam excitation source. When literature values for the fluorescence quantum yield were used for each of the fluorophores, the fluorophore absorption coefficient (and hence concentration) at the excitation wavelength (μ_a,x,f) was recovered with a root-mean-square accuracy of 11.4% using the point source model of fluorescence and 8.0% using the more complicated pencil beam excitation model. The accuracy was calculated over a broad range of optical properties and fluorophore concentrations. The weighted difference of reflectance model performed poorly, with a root-mean-square error in concentration of about 50%. Monte Carlo simulations suggest that there are some situations where the weighted difference of reflectance is as accurate as the other two models, although this was not confirmed experimentally. Estimates of the fluorescence quantum yield in multiple-scattering media were also made by determining μ_a,x,f independently from the fitted absorption spectrum and applying the various diffusion theory models. The fluorescence quantum yields for AlPcS4 and TPPS4 were calculated to be 0.59 +/- 0.03 and 0.121 +/- 0.001 respectively using the point source model, and 0.63 +/- 0.03 and 0.129 +/- 0.002 using the pencil beam excitation model. These results are consistent with published values.

  4. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to the mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be spent on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
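
    For orientation, the numerically expensive scheme the paper seeks to replace can be sketched in a few lines of Python: the area source is split into crosswind line sources, each contributing the analytical ground-level Gaussian line-source concentration C = sqrt(2/π)·q_L/(u·σ_z). The σ_z parameterization and all numbers below are illustrative, not taken from the paper.

      import numpy as np

      def sigma_z(x, a=0.08, b=0.9):
          # Illustrative power-law vertical dispersion parameterization.
          return a * x ** b

      def area_source_conc(x_receptor, x0, x1, q_area, u, n_lines=200):
          """Ground-level concentration downwind of a ground-level area source
          spanning [x0, x1] along the wind, emission flux q_area (mass/m2/s)."""
          xs = np.linspace(x0, x1, n_lines)        # crosswind line positions
          dx = xs[1] - xs[0]
          q_line = q_area * dx                     # strength per line (mass/m/s)
          dist = x_receptor - xs
          dist = dist[dist > 0]                    # only upwind lines contribute
          return np.sum(np.sqrt(2 / np.pi) * q_line / (u * sigma_z(dist)))

      print(area_source_conc(300.0, 0.0, 100.0, q_area=1e-6, u=3.0))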

  5. Proceedings from the Workshop on Research Needs for Assessment and Management of Non-Point Air Emissions from Department of Defense Activities held in Research Triangle Park, North Carolina on 19-21 February 2008

    DTIC Science & Technology

    2008-10-01

    Chow, J.C. (2006). Feasibility of soil dust source apportionment by the pyrolysis-gas chromatography/mass spectrometry method. J. Air Waste Manage... • Develop monitoring methods to determine source and fence-line amounts of fugitive dust emissions for... offsite impact, including evaluation with receptor-oriented source apportionment models.

  6. Sources and transport of phosphorus to rivers in California and adjacent states, U.S., as determined by SPARROW modeling

    USGS Publications Warehouse

    Domagalski, Joseph L.; Saleh, Dina

    2015-01-01

    The SPARROW (SPAtially Referenced Regression on Watershed attributes) model was used to simulate annual phosphorus loads and concentrations in unmonitored stream reaches in California, U.S., and portions of Nevada and Oregon. The model was calibrated using de-trended streamflow and phosphorus concentration data at 80 locations. The model explained 91% of the variability in loads and 51% of the variability in yields for a base year of 2002. Point sources, geological background, and cultivated land were significant sources. Variables used to explain delivery of phosphorus from land to water were precipitation and soil clay content. Aquatic loss of phosphorus was significant in streams of all sizes, with the greatest decay predicted in small- and intermediate-sized streams. Geological sources, including volcanic rocks and shales, were the principal control on concentrations and loads in many regions. Some localized formations such as the Monterey shale of southern California are important sources of phosphorus and may contribute to elevated stream concentrations. Many of the larger point source facilities were located in downstream areas, near the ocean, and do not affect inland streams except for a few locations. Large areas of cultivated land result in phosphorus load increases, but do not necessarily increase the loads above those of geological background in some cases because of local hydrology, which limits the potential of phosphorus transport from land to streams.

  7. Point-source stochastic-method simulations of ground motions for the PEER NGA-East Project

    USGS Publications Warehouse

    Boore, David

    2015-01-01

    Ground motions for the PEER NGA-East project were simulated using a point-source stochastic method. The simulated motions are provided for distances between 0 and 1200 km, M from 4 to 8, and 25 ground-motion intensity measures: peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped pseudo-absolute response spectral acceleration (PSA) for 23 periods ranging from 0.01 s to 10.0 s. Tables of motions are provided for each of six attenuation models. The attenuation-model-dependent stress parameters used in the stochastic-method simulations were derived from inversion of PSA data from eight earthquakes in eastern North America.

  8. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high-multiplication uranium objects.

  9. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high-multiplication uranium objects.

  10. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions with additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained. (auth)

  11. Integration of Heterogenous Digital Surface Models

    NASA Astrophysics Data System (ADS)

    Boesch, R.; Ginzler, C.

    2011-08-01

    The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high-resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images at 25 cm and 50 cm resolution is used to generate a surface model (ADS-DSM) with 1 m resolution covering the whole of Switzerland (approx. 41,000 km2). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m2 and is mainly used in applications as an interpolated grid with 2 m resolution (LiDAR-DSM). Although both surface models seem to offer comparable accuracy from a global view, local analysis shows significant differences. Both datasets were acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in significantly varying point density. The image acquisition for the ADS-DSM is also stretched over several years, and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature extraction applications requiring high-resolution data depend on the local accuracy of the surface model used; therefore, precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM map contains matching codes such as high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data were available; therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point-distribution measure, a constrained triangulation of local points (within an area of 100 m2) has been implemented using the Open Source project CGAL. The area of each triangle is a measure of the spatial distribution of raw points in this local area. Combining the FOM map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis that the ADS-DSM is superior due to its better global accuracy of 1 m. If the local analysis of the FOM map within the 100 m2 area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution are sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as a NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using the FOM map and the local triangulation to derive a quality weight for each of the interpolation points.
The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).

  12. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source.

    PubMed

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K

    2015-10-15

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.

  13. EPA Office of Water (OW): 2002 SPARROW Total NP (Catchments)

    EPA Pesticide Factsheets

    SPARROW (SPAtially Referenced Regressions On Watershed attributes) is a watershed modeling tool with output that allows the user to interpret water quality monitoring data at the regional and sub-regional scale. The model relates in-stream water-quality measurements to spatially referenced characteristics of watersheds, including pollutant sources and environmental factors that affect rates of pollutant delivery to streams from the land, as well as aquatic in-stream processing. The core of the model consists of a nonlinear regression equation describing the non-conservative transport of contaminants from point and non-point (or "diffuse") sources on land to rivers and through the stream and river network. SPARROW estimates contaminant concentrations, loads (or "mass," the product of concentration and streamflow), and yields in streams (mass of nitrogen and of phosphorus entering a stream per acre of land). It empirically estimates the origin and fate of contaminants in streams and receiving bodies, and quantifies uncertainties in model predictions. The model predictions are illustrated through detailed maps that provide information about contaminant loadings and source contributions at multiple scales for specific stream reaches, basins, or other geographic areas.
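
    Schematically, the regression at the core of SPARROW expresses the load leaving a reach as attenuated upstream load plus locally generated source contributions (simplified notation from the general SPARROW literature; the operational model carries additional terms and error structure):

      F_i \;=\; A_i \left( \sum_{j \in \mathcal{U}(i)} F_j \;+\; \sum_{n=1}^{N} \alpha_n\, S_{n,i}\, D_n(\mathbf{Z}_i) \right),

    where S_{n,i} is the magnitude of source n in the incremental watershed of reach i, D_n is a land-to-water delivery factor driven by watershed characteristics Z_i, α_n is the fitted source coefficient, and A_i is the in-stream attenuation over the reach.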

  14. Common radiation analysis model for 75,000 pound thrust NERVA engine (1137400E)

    NASA Technical Reports Server (NTRS)

    Warman, E. A.; Lindsey, B. A.

    1972-01-01

    The mathematical model and sources of radiation used for the radiation analysis and shielding activities in support of the design of the 1137400E version of the 75,000 lb thrust NERVA engine are presented. The nuclear subsystem (NSS) and non-nuclear components are discussed. The geometrical model for the NSS is two-dimensional, as required for the DOT discrete ordinates computer code, or for an azimuthally symmetrical three-dimensional Point Kernel or Monte Carlo code. The geometrical model for the non-nuclear components is three-dimensional in the FASTER geometry format. This geometry routine is inherent in the ANSC versions of the QAD and GGG Point Kernel programs and the COHORT Monte Carlo program. Data are included pertaining to a pressure-vessel surface radiation source data tape which has been used as the basis for starting ANSC analyses with the DASH code to bridge into the COHORT Monte Carlo code using the WANL-supplied DOT angular flux leakage data. In addition to the model descriptions and sources of radiation, the methods of analysis are briefly described.

  15. X-ray reflection from cold white dwarfs in magnetic cataclysmic variables

    NASA Astrophysics Data System (ADS)

    Hayashi, Takayuki; Kitaguchi, Takao; Ishida, Manabu

    2018-02-01

    We model X-ray reflection from white dwarfs (WDs) in magnetic cataclysmic variables (mCVs) using a Monte Carlo simulation. A point source with a power-law spectrum or a realistic post-shock accretion column (PSAC) source irradiates a cool and spherical WD. The PSAC source emits thermal spectra of various temperatures stratified along the column according to the PSAC model. In the point-source simulation, we confirm the following: a source harder and nearer to the WD enhances the reflection; higher iron abundance enhances the equivalent widths (EWs) of fluorescent iron Kα1, 2 lines and their Compton shoulder, and increases the cut-off energy of a Compton hump; significant reflection appears from an area that is more than 90° apart from the position right under the point X-ray source because of the WD curvature. The PSAC simulation reveals the following: a more massive WD basically enhances the intensities of the fluorescent iron Kα1, 2 lines and the Compton hump, except for some specific accretion rate, because the more massive WD makes a hotter PSAC from which higher-energy X-rays are preferentially emitted; a larger specific accretion rate monotonically enhances the reflection because it makes a hotter and shorter PSAC; the intrinsic thermal component hardens by occultation of the cool base of the PSAC by the WD. We quantitatively estimate the influences of the parameters on the EWs and the Compton hump with both types of source. We also calculate X-ray modulation profiles brought about by the WD spin. These depend on the angles of the spin axis from the line of sight and from the PSAC, and on whether the two PSACs can be seen. The reflection spectral model and the modulation model involve the fluorescent lines and the Compton hump and can directly be compared to the data, which allows us to estimate these geometrical parameters with unprecedented accuracy.

  16. Applying the Manning equation to determine the critical distance in non-point source pollution using remotely sensed data and cartographic modelling

    NASA Astrophysics Data System (ADS)

    de Oliveira, Lília M.; Santos, Nádia A. P.; Maillard, Philippe

    2013-10-01

    Non-point source pollution (NPSP) is perhaps the leading cause of water quality problems and one of the most challenging environmental issues, given the difficulty of modeling and controlling it. In this article, we applied the Manning equation, a hydraulic concept, to improve models of non-point source pollution and determine its influence as a function of slope and land-cover roughness on the ability of runoff to reach the stream. In our study the equation is somewhat taken out of its usual context and applied to the flow of an entire watershed. Here a digital elevation model (DEM) from the SRTM satellite was used to compute the slope, and data from the RapidEye satellite constellation were used to produce a land cover map later transformed into a roughness surface. The methodology is applied to a 1433 km2 watershed in Southeast Brazil mostly covered by forest, pasture, urban areas and wetlands. The model was used to create slope buffers of varying width in which the proportions of land cover and the roughness coefficient were obtained. Next we correlated these data, through regression, with four water quality parameters measured in situ: nitrate, phosphorus, faecal coliform and turbidity. We compared our results with those obtained with fixed buffers. It was found that the slope buffers outperformed the fixed buffers, with coefficients of determination higher by up to 15%.
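
    For readers unfamiliar with the hydraulic relation being borrowed, the Manning equation gives mean flow velocity as V = (1/n)·R^(2/3)·S^(1/2) in SI units, with roughness n, hydraulic radius R and slope S. The Python sketch below shows the qualitative behaviour the buffer weighting exploits, with overland-flow depth, cell size and roughness values that are illustrative rather than taken from the article:

      # Manning equation for thin overland flow, hydraulic radius ~ flow depth.
      def manning_velocity(n_rough, depth_m, slope):
          return (1.0 / n_rough) * depth_m ** (2.0 / 3.0) * slope ** 0.5

      cell = 30.0   # m, illustrative grid cell size
      for cover, n_rough in [("urban", 0.015), ("pasture", 0.13), ("forest", 0.40)]:
          v = manning_velocity(n_rough, depth_m=0.005, slope=0.05)
          print(f"{cover:8s} v = {v:.4f} m/s, cell travel time = {cell / v:6.0f} s")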

  17. Mathematical design of a novel input/instruction device using a moving acoustic emitter

    NASA Astrophysics Data System (ADS)

    Wang, Xianchao; Guo, Yukun; Li, Jingzhi; Liu, Hongyu

    2017-10-01

    This paper is concerned with the mathematical design of a novel input/instruction device using a moving emitter. The emitter acts as a point source and can be installed on a digital pen or worn on the finger of the human being who desires to interact/communicate with the computer. The input/instruction can be recognized by identifying the moving trajectory of the emitter performed by the human being from the collected wave field data. The identification process is modelled as an inverse source problem where one intends to identify the trajectory of a moving point source. There are several salient features of our study which distinguish our result from the existing ones in the literature. First, the point source is moving in an inhomogeneous background medium, which models the human body. Second, the dynamical wave field data are collected in a limited aperture. Third, the reconstruction method is independent of the background medium, and it is totally direct without any matrix inversion. Hence, it is efficient and robust with respect to the measurement noise. Both theoretical justifications and computational experiments are presented to verify our novel findings.

  18. Four pi calibration and modeling of a bare germanium detector in a cylindrical field source

    NASA Astrophysics Data System (ADS)

    Dewberry, R. A.; Young, J. E.

    2012-05-01

    In this paper we describe a 4π cylindrical field acquisition configuration surrounding a bare (unshielded, uncollimated) high purity germanium detector. We perform an efficiency calibration with a flexible planar source and model the configuration in the 4π cylindrical field. We then use exact calculus to model the flux on the cylindrical sides and end faces of the detector. We demonstrate that the model accurately represents the experimental detection efficiency compared to that of a point source and to Monte Carlo N-particle (MCNP) calculations of the flux. The model sums over the entire source surface area and the entire detector surface area including both faces and the detector's cylindrical sides. Agreement between the model and both experiment and the MCNP calculation is within 8%.
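
    A quick numerical cross-check of the point-source geometry such calibrations are compared against can be done by Monte Carlo: sample isotropic directions from the source position and count the fraction that strike the bare cylinder (the solid-angle fraction). Dimensions and source position below are illustrative, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      R, H = 3.0, 5.0                          # crystal radius and length, cm
      src = np.array([10.0, 0.0, 0.0])         # point source beside the crystal

      n = 500_000
      u = rng.standard_normal((n, 3))
      u /= np.linalg.norm(u, axis=1, keepdims=True)   # isotropic directions

      hit = np.zeros(n, dtype=bool)

      # Curved side: |(src + t*u)_xy| = R with t > 0 and |z| <= H/2.
      a = u[:, 0]**2 + u[:, 1]**2
      b = 2 * (src[0] * u[:, 0] + src[1] * u[:, 1])
      c = src[0]**2 + src[1]**2 - R**2
      disc = b**2 - 4 * a * c
      ok = disc > 0
      t = (-b[ok] - np.sqrt(disc[ok])) / (2 * a[ok])   # nearer intersection
      z = src[2] + t * u[ok, 2]
      hit[np.where(ok)[0][(t > 0) & (np.abs(z) <= H / 2)]] = True

      # End caps at z = +/- H/2: t > 0 and radial distance <= R.
      with np.errstate(divide="ignore", invalid="ignore"):
          for zc in (H / 2, -H / 2):
              t = (zc - src[2]) / u[:, 2]
              x = src[0] + t * u[:, 0]
              y = src[1] + t * u[:, 1]
              hit |= (t > 0) & (x**2 + y**2 <= R**2)

      print("geometric efficiency ~", hit.mean())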

  19. An alternative screening model for the estimation of outdoor air concentration at large contaminated sites

    NASA Astrophysics Data System (ADS)

    Verginelli, Iason; Nocentini, Massimo; Baciocchi, Renato

    2017-09-01

    Simplified analytical solutions of fate and transport models are often used to carry out risk assessment on contaminated sites, to evaluate the long-term air quality in relation to volatile organic compounds in either soil or groundwater. Among the different assumptions employed to develop these solutions, in this work we focus on those used in the ASTM-RBCA "box model" for the evaluation of contaminant dispersion in the atmosphere. In this simple model, it is assumed that the contaminant volatilized from the subsurface is dispersed in the atmosphere within a mixing height equal to two meters, i.e. the height of the breathing zone. In certain cases, this simplification can lead to an overestimation of the outdoor air concentration at the point of exposure. In this paper we first discuss the maximum source lengths (in the wind direction) for which the application of the "box model" can be considered acceptable. Specifically, by comparing the results of the "box model" with the SCREEN3 model of U.S. EPA, we found that under very stable atmospheric conditions (class F) the ASTM-RBCA approach provides acceptable results for source lengths up to 200 m, while for very unstable atmospheric conditions (classes A and B) overestimation of the concentrations at the point of exposure can already be observed for source lengths of only 10 m. In the latter case, the overestimation of the "box model" can be more than one order of magnitude for source lengths above 500 m. To overcome this limitation, in this paper we introduce a simple analytical solution that can be used for the calculation of the concentration at the point of exposure for large contaminated sites. The method consists in the introduction of an equivalent mixing zone height that accounts for the dispersion of the contaminants along the source length while keeping the simple "box model" approach that is implemented in most risk assessment tools based on the ASTM-RBCA standard (e.g. RBCA toolkit). Based on our testing, we found that the developed model replicates the results of the more sophisticated SCREEN3 dispersion model very well, with deviations always below 10%. The key advantage of this approach is that it can easily be incorporated into current risk assessment screening tools based on the ASTM standards while ensuring a more accurate evaluation of the concentration at the point of exposure.
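
    A minimal sketch of the ASTM-RBCA "box model" logic discussed above, in Python: the volatilized flux J over a source of along-wind length L is mixed into a box of height δ swept by wind at speed U, so C = J·L/(U·δ) with δ fixed at 2 m. The numbers are illustrative, and the paper's equivalent-mixing-height correction is only indicated, not reproduced.

      def box_model_conc(J, L, U, delta=2.0):
          # C = J * L / (U * delta); delta = 2 m is the breathing-zone height.
          return J * L / (U * delta)

      J = 1e-5   # volatilization flux, g/(m2 s)  (illustrative)
      U = 2.0    # wind speed, m/s
      for L in (10.0, 200.0, 500.0):
          print(f"L = {L:5.0f} m -> C = {box_model_conc(J, L, U):.2e} g/m3")
      # The paper's fix amounts to replacing delta with an equivalent mixing
      # height that grows with L, tempering the linear growth of C seen here.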

  20. Simulation on Change Law of Runoff, Sediment and Non-point Source Nitrogen and Phosphorus Discharge under Different Land Uses Based on SWAT Model: A Case Study of Er hai Lake Small Watershed

    NASA Astrophysics Data System (ADS)

    Tong, Xiao Xia; Lai Cui, Yuan; Chen, Man Yu; Hu, Bo; Xu, Wen Sheng

    2018-05-01

    The Er yuan watershed of the Er hai district is chosen as the research area, and the runoff, sediment and non-point source nitrogen and phosphorus discharges under different land uses from 2001 to 2014 are simulated based on the SWAT model. The simulation results indicate that the order of total runoff yield for the different land use types, from high to low, is grassland, paddy fields, dry land. Specifically, the order of surface runoff yield from high to low is paddy fields, dry land, grassland; the order of lateral runoff yield from high to low is paddy fields, dry land, grassland; and the order of groundwater runoff yield from high to low is grassland, paddy fields, dry land. The orders of sediment, nitrogen and phosphorus yield per unit area for the different land use types are the same: grassland > paddy fields > dry land. It can be seen that nitrogen and phosphorus discharges from paddy fields and dry land are the main sources of agricultural non-point source pollution in the irrigated area. Therefore, reasonable field management measures that decrease the discharge of nitrogen and phosphorus from paddy fields and dry land are the key to agricultural non-point source pollution prevention and control.

  1. Studies of acoustic emission from point and extended sources

    NASA Technical Reports Server (NTRS)

    Sachse, W.; Kim, K. Y.; Chen, C. P.

    1986-01-01

    The use of simulated and controlled acoustic emission signals forms the basis of a powerful tool for the detailed study of various deformation and wave interaction processes in materials. The results of experiments and signal analyses of acoustic emission resulting from point sources such as various types of indentation-produced cracks in brittle materials and the growth of fatigue cracks in 7075-T6 aluminum panels are discussed. Recent work dealing with the modeling and subsequent signal processing of an extended source of emission in a material is reviewed. Results of the forward problem and the inverse problem are presented with the example of a source distributed through the interior of a specimen.

  2. Isolating intrinsic noise sources in a stochastic genetic switch.

    PubMed

    Newby, Jay M

    2012-01-01

    The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promoter states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promoter of the other gene. When the repressor is bound to a promoter, the corresponding gene is not transcribed and no protein is produced. Either one of the promoters can be repressed at any given time or both can be unrepressed, leaving three possible promoter states. This model is analysed in its bistable regime, in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small timescales, the stochastic process fluctuates near one of the stable fixed points, and on large timescales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promoter state as the only source of noise in the system. The process without protein noise is then compared to the process with weak protein noise using perturbation methods and Monte Carlo simulations. It is found that some significant differences in the random process emerge when the intrinsic noise source is removed.
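
    For readers who want to experiment with promoter-state noise in isolation, here is a minimal Gillespie sketch of a one-gene telegraph circuit, a deliberate simplification of the paper's two-gene mutual repressor, with arbitrary illustrative rates:

        import numpy as np

        rng = np.random.default_rng(0)

        # illustrative rates (not the paper's parameters)
        k_on, k_off = 0.1, 0.05        # promoter switching
        k_prod, k_deg = 5.0, 0.02      # protein production / degradation

        def gillespie(t_end):
            t, on, n = 0.0, 1, 0
            traj = [(t, n)]
            while t < t_end:
                rates = [k_on * (1 - on), k_off * on, k_prod * on, k_deg * n]
                total = sum(rates)
                t += rng.exponential(1.0 / total)
                u = rng.random() * total
                if u < rates[0]:
                    on = 1                         # promoter switches on
                elif u < rates[0] + rates[1]:
                    on = 0                         # promoter switches off
                elif u < rates[0] + rates[1] + rates[2]:
                    n += 1                         # protein produced
                else:
                    n -= 1                         # protein degraded
                traj.append((t, n))
            return traj

        traj = gillespie(2000.0)

    Eliminating protein noise, as in the paper, amounts to replacing the last two reactions with a deterministic equation for n between promoter switching events.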

  3. Toxic metals in Venice lagoon sediments: Model, observation, and possible removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basu, A.; Molinaroli, E.

    1994-11-01

    We have modeled the distribution of nine toxic metals in the surface sediments from 163 stations in the Venice lagoon using published data. Three entrances from the Adriatic Sea control the circulation in the lagoon and divide it into three basins. We assume, for purposes of modeling, that Porto Marghera at the head of the Industrial Zone area is the single source of toxic metals in the Venice lagoon. In a standing body of lagoon water, the concentration C of a pollutant at distance x from the source may be given by C = C0*e^(-kx), where C0 is the concentration at the source and k is the rate constant of dispersal. We calculated k empirically using concentrations at the source and those farthest from it, that is, the end points of the lagoon. Average k values (ppm/km) in the lagoon are: Zn 0.165, Cd 0.116, Hg 0.110, Cu 0.105, Co 0.072, Pb 0.058, Ni 0.008, Cr (0.011) and Fe (0.018 percent/km), and they have complex distributions. Given the k values, the concentration at the source C0, and the distance x of any point in the lagoon from the source, we have calculated the model concentrations of the nine metals at each sampling station. Tides, currents, floor morphology, additional sources, and continued dumping perturb model distributions, causing anomalies (observed minus model concentrations). Positive anomalies are found near the source, where continued dumping perturbs initial boundary conditions, and in areas of sluggish circulation. Negative anomalies are found in areas with strong currents that may flush sediments out of the lagoon. We have thus identified areas in the lagoon where a higher rate of sediment removal and exchange may lessen pollution. 41 refs., 4 figs., 3 tabs.
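
    Taking the abstract's dispersal law at face value, the model concentrations and anomalies are straightforward to reproduce; the sketch below uses the quoted k values and leaves C0 and the source distance x as inputs.

        import numpy as np

        # empirical dispersal-rate constants quoted in the abstract (per km)
        k = {"Zn": 0.165, "Cd": 0.116, "Hg": 0.110, "Cu": 0.105,
             "Co": 0.072, "Pb": 0.058, "Ni": 0.008}

        def model_conc(C0, k_metal, x_km):
            # C = C0 * exp(-k x): dispersal away from the Porto Marghera source
            return C0 * np.exp(-k_metal * x_km)

        def anomaly(observed, C0, k_metal, x_km):
            # positive -> continued dumping or sluggish circulation;
            # negative -> flushing of sediments by strong currents
            return observed - model_conc(C0, k_metal, x_km)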

  4. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) plays an important role in the global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of an anthropogenic emission source is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we picked two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. We propose that the next generation of instruments for accurate anthropogenic CO2 and CH4 flux estimation should have improved spatial resolution (~1 km2) to further enhance column density changes, and we also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
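
    The flux estimate described above can be sketched as a cross-sectional mass balance. The conversion below assumes steady wind, a single point source with no influx, and a uniform enhancement across an assumed plume width; the numbers are hypothetical and only illustrate the order of magnitude.

        M_CH4, M_AIR = 16.04, 28.97    # g/mol
        P_SURF, G = 101325.0, 9.81     # surface pressure (Pa), gravity (m/s^2)

        def flux_estimate(dX_ppb, wind_ms, width_m):
            # column-averaged CH4 enhancement (ppb) -> emission rate (kg/s)
            dX = dX_ppb * 1e-9                                # mole fraction
            col_kg_m2 = dX * (M_CH4 / M_AIR) * P_SURF / G     # CH4 column mass
            return col_kg_m2 * wind_ms * width_m

        # e.g. a 30 ppb enhancement, 3 m/s wind, ~9.3 km plume width
        print(flux_estimate(30.0, 3.0, 9.3e3) * 3600.0, "kg/h")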

  5. A source-attractor approach to network detection of radiation sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Barry, M. L.; Grieme, M.

    Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations that scale with the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
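
    A schematic sketch of the source-attractor idea follows; this is not the authors' exact update rule, only an illustration under assumed conventions: each detector contributes a virtual point that is pulled toward a candidate source location in proportion to its measured count rate, and detection is declared when the pulled points cluster significantly more tightly than the originals.

        import numpy as np

        def srd_detect(det_xy, counts, cand_xy, pull=0.5, thresh=0.3):
            # shift each detector's virtual point toward the candidate source,
            # scaled by its normalized count rate (schematic, hypothetical rule)
            c = counts / counts.max()
            shifted = det_xy + pull * c[:, None] * (cand_xy - det_xy)
            spread = lambda p: np.mean(np.linalg.norm(p - p.mean(axis=0), axis=1))
            gain = 1.0 - spread(shifted) / spread(det_xy)   # clustering increase
            return gain > thresh, gain

    Only simple arithmetic in the number of detectors is involved, which is the computational advantage claimed over grid-based likelihood methods.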

  6. Primary Beam Air Kerma Dependence on Distance from Cargo and People Scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strom, Daniel J.; Cerra, Frank

    The distance dependence of air kerma or dose rate of the primary radiation beam is not obvious for security scanners of cargo and people in which there is relative motion between a collimated source and the person or object being imaged. To study this problem, one fixed line source and three moving-source scan-geometry cases are considered, each characterized by radiation emanating perpendicular to an axis. The cases are 1) a stationary line source of radioactive material, e.g., contaminated solution in a pipe; 2) a moving, uncollimated point source of radiation that is shuttered or off when it is stationary; 3) a moving, collimated point source of radiation that is shuttered or off when it is stationary; and 4) a translating, narrow “pencil” beam emanating in a flying-spot, raster pattern. Each case is considered for short and long distances compared to the line source length or path traversed by a moving source. The short distance model pertains mostly to dose to objects being scanned and personnel associated with the screening operation. The long distance model pertains mostly to potential dose to bystanders. For radionuclide sources, the number of nuclear transitions that occur a) per unit length of a line source, or b) during the traversal of a point source, is a unifying concept. The “universal source strength” of air kerma rate at a meter from the source can be used to describe x-ray machine or radionuclide sources. For many cargo and people scanners with highly collimated fan or pencil beams, dose varies as the inverse of the distance from the source in the near field and with the inverse square of the distance beyond a critical radius. Ignoring the inverse square dependence and using inverse distance dependence is conservative in the sense of tending to overestimate dose.

  7. Primary Beam Air Kerma Dependence on Distance from Cargo and People Scanners.

    PubMed

    Strom, Daniel J; Cerra, Frank

    2016-06-01

    The distance dependence of air kerma or dose rate of the primary radiation beam is not obvious for security scanners of cargo and people in which there is relative motion between a collimated source and the person or object being imaged. To study this problem, one fixed line source and three moving-source scan-geometry cases are considered, each characterized by radiation emanating perpendicular to an axis. The cases are 1) a stationary line source of radioactive material, e.g., contaminated solution in a pipe; 2) a moving, uncollimated point source of radiation that is shuttered or off when it is stationary; 3) a moving, collimated point source of radiation that is shuttered or off when it is stationary; and 4) a translating, narrow "pencil" beam emanating in a flying-spot, raster pattern. Each case is considered for short and long distances compared to the line source length or path traversed by a moving source. The short distance model pertains mostly to dose to objects being scanned and personnel associated with the screening operation. The long distance model pertains mostly to potential dose to bystanders. For radionuclide sources, the number of nuclear transitions that occur a) per unit length of a line source or b) during the traversal of a point source is a unifying concept. The "universal source strength" of air kerma rate at 1 m from the source can be used to describe x-ray machine or radionuclide sources. For many cargo and people scanners with highly collimated fan or pencil beams, dose varies as the inverse of the distance from the source in the near field and with the inverse square of the distance beyond a critical radius. Ignoring the inverse square dependence and using inverse distance dependence is conservative in the sense of tending to overestimate dose.
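
    The distance dependence described in the last paragraph can be written as a piecewise model matched at the critical radius. A small sketch, with k1 the air kerma rate at 1 m (assuming 1 m lies in the near field):

        import numpy as np

        def air_kerma_rate(r_m, k1, r_crit):
            # K ~ 1/r inside the near field of a collimated fan/pencil beam,
            # K ~ 1/r^2 beyond the critical radius; branches match at r_crit
            r = np.asarray(r_m, dtype=float)
            return np.where(r <= r_crit, k1 / r, k1 * r_crit / r**2)

    Using the 1/r branch at all distances, as the abstract notes, errs on the side of overestimating dose to bystanders.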

  8. Using Lunar Observations to Validate Pointing Accuracy and Geolocation, Detector Sensitivity Stability and Static Point Response of the CERES Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    Validation of in-orbit instrument performance is a function of stability in both the instrument and the calibration source. This paper describes a method using lunar observations scanning near full moon by the Clouds and Earth Radiant Energy System (CERES) instruments. The Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, these in-orbit observations have become standardized and compiled for Flight Models -1 and -2 aboard the Terra satellite, for Flight Models -3 and -4 aboard the Aqua satellite, and, beginning 2012, for Flight Model -5 aboard Suomi-NPP. The instrument performance measures studied are detector sensitivity stability, pointing accuracy and the static detector point response function. This validation method shows sensitivity trends per CERES data channel of 0.8% per decade or less for Flight Models 1-4. Using instrument gimbal data and the computed lunar position, the pointing error of each detector telescope and the accuracy and consistency of the alignment between the detectors can be determined. The maximum pointing error was 0.2 Deg. in azimuth and 0.17 Deg. in elevation, which corresponds to an error in geolocation near nadir of 2.09 km. With the exception of one detector, all instruments were found to have consistent detector alignment from 2006 to present. All alignment error was within 0.1 Deg., with most detector telescopes showing a consistent alignment offset of less than 0.02 Deg.
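
    As a consistency check on the quoted numbers, the near-nadir geolocation error implied by a pointing error is the orbit altitude times the tangent of the angle. The 705-km altitude below is the nominal Terra orbit, an assumption not stated in the abstract:

        import numpy as np

        def geolocation_error_km(pointing_err_deg, altitude_km=705.0):
            # small-angle mapping of a pointing error to ground distance at nadir
            return altitude_km * np.tan(np.radians(pointing_err_deg))

        print(geolocation_error_km(0.17))   # ~2.09 km, matching the abstract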

  9. Simulating the evolution of non-point source pollutants in a shallow water environment.

    PubMed

    Yan, Min; Kahawita, Rene

    2007-03-01

    Non-point source pollution originating from surface-applied chemicals in either liquid or solid form, as part of agricultural activities, appears in the surface runoff caused by rainfall. The infiltration and transport of these pollutants have a significant impact on subsurface and riverine water quality. The present paper describes the development of a unified 2-D mathematical model incorporating individual models for infiltration, adsorption, solubility rate, advection and diffusion, which significantly improves current practice in the mathematical modeling of pollutant evolution in shallow water. The governing equations have been solved numerically using cubic spline integration. Experiments were conducted at the Hydrodynamics Laboratory of the Ecole Polytechnique de Montreal to validate the mathematical model. Good correspondence between the computed results and experimental data has been obtained. The model may be used to predict the ultimate fate of surface-applied chemicals by evaluating the proportions that are dissolved, infiltrated into the subsurface or washed off.

  10. Simulation of agricultural non-point source pollution in Xichuan by using SWAT model

    NASA Astrophysics Data System (ADS)

    Xing, Linan; Zuo, Jiane; Liu, Fenglin; Zhang, Xiaohui; Cao, Qiguang

    2018-02-01

    This paper evaluated the applicability of SWAT for assessing agricultural non-point source pollution in the Xichuan area. To build the model, a DEM, soil and land use maps, and climate monitoring data were collected as the basic database. The SWAT model was calibrated and validated using streamflow, suspended solids, total phosphorus and total nitrogen records from 2009 to 2011. Errors, the coefficient of determination and the Nash-Sutcliffe coefficient were used to evaluate the applicability. The coefficients of determination were 0.96, 0.66, 0.55 and 0.66 for streamflow, SS, TN, and TP, respectively, and the Nash-Sutcliffe coefficients were 0.93, 0.5, 0.52 and 0.63, respectively. These results all meet the requirements, suggesting that the SWAT model can simulate the study area.
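
    The two fitness criteria used here are standard; for reference, a minimal implementation is:

        import numpy as np

        def nash_sutcliffe(obs, sim):
            # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

        def r_squared(obs, sim):
            # squared Pearson correlation between observed and simulated series
            return np.corrcoef(obs, sim)[0, 1]**2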

  11. Investigating Galactic Structure with COBE/DIRBE and Simulation

    NASA Technical Reports Server (NTRS)

    Cohen, Martin

    1999-01-01

    In this work I applied the current version of the SKY model of the point-source sky to the interpretation of the diffuse all-sky emission observed by COBE/DIRBE (Cosmic Background Explorer satellite/Diffuse Infrared Background Experiment). The goal was to refine the SKY model using the all-sky DIRBE maps of the Galaxy, in order that a search could be made for an isotropic cosmic background. A "Faint Source Model" [FSM] was constructed to remove Galactic foreground stars from the ZSMA products. The FSM mimics SKY version 1, but it was inadequate for seeking cosmic background emission because of the sizeable residual emission in the ZSMA products after this starlight subtraction. At this point I can only conclude that such models are currently inadequate to reveal a cosmic background. Even SKY5 yields the same disappointing result.

  12. Effectiveness of SWAT in characterizing the watershed hydrology in the snowy-mountainous Lower Bear Malad River (LBMR) watershed in Box Elder County, Utah

    NASA Astrophysics Data System (ADS)

    Salha, A. A.; Stevens, D. K.

    2015-12-01

    Distributed watershed models are essential for quantifying sediment and nutrient loads that originate from point and nonpoint sources. Such models are a primary means of generating pollutant estimates in ungaged watersheds and respond well at watershed scales by capturing the variability in soils, climatic conditions, land uses/covers and management conditions over extended periods of time. This effort evaluates the performance of the Soil and Water Assessment Tool (SWAT) model as a watershed-level tool to investigate, manage, and characterize the transport and fate of nutrients in the Lower Bear Malad River (LBMR) watershed (Subbasin HUC 16010204) in Utah. Water quality concerns have been documented and are primarily attributed to high phosphorus and total suspended sediment concentrations caused by agricultural and farming practices, along with identified point sources (WWTPs). Input data such as a Digital Elevation Model (DEM), land use/land cover (LULC), soils, and climate data for 10 years (2000-2010) are utilized to quantify the LBMR streamflow. Such modeling is useful in developing the required water quality regulations, such as Total Maximum Daily Loads (TMDLs). Measured nutrient concentrations were closely captured by simulated monthly nutrient concentrations based on the R2 and Nash-Sutcliffe fitness criteria. The model is expected to be able to identify contaminant non-point sources, identify areas of high pollution risk, locate optimal monitoring sites, and evaluate best management practices to cost-effectively reduce pollution and improve water quality as required by the LBMR watershed's TMDL.

  13. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.

  14. Calculated and measured brachytherapy dosimetry parameters in water for the Xoft Axxent X-Ray Source: an electronic brachytherapy source.

    PubMed

    Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve

    2006-11-01

    A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, < 1 mm, use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy. Consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated point-source model radial dose function values at gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively.

  15. Cross spectra between pressure and temperature in a constant-area duct downstream of a hydrogen-fueled combustor

    NASA Technical Reports Server (NTRS)

    Miles, J. H.; Wasserbauer, C. A.; Krejsa, E. A.

    1983-01-01

    Pressure-temperature cross spectra are necessary for predicting noise propagation in regions of velocity gradients downstream of combustors if the effect of convective entropy disturbances is included. Pressure-temperature cross spectra and coherences were measured at spatially separated points in a combustion rig fueled with hydrogen. Temperature-temperature and pressure-pressure cross spectra and coherences between the spatially separated points, as well as temperature and pressure autospectra, were also measured. These test results were compared with previous results obtained in the same combustion rig using Jet A fuel in order to investigate their dependence on the type of combustion process. The phase relationships are not consistent with a simple source model that assumes that pressure and temperature are in phase at a point in the combustor and at all other points downstream are related to one another only by a time delay due to convection of temperature disturbances. These test results thus indicate that a more complex model of the source is required.
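
    The simple source model being tested predicts a cross-spectral phase that grows linearly with frequency through the convection delay. A one-line sketch of that prediction (sign convention is an assumption here):

        import numpy as np

        def convection_phase_deg(f_hz, separation_m, u_conv_ms):
            # pure time-delay model: tau = separation / convection speed,
            # so the pressure-temperature cross spectrum has phase 2*pi*f*tau
            return np.degrees(2.0 * np.pi * f_hz * separation_m / u_conv_ms)

    The measured phases did not follow this linear law, which is what rules the simple model out.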

  16. A framework for emissions source apportionment in industrial areas: MM5/CALPUFF in a near-field application.

    PubMed

    Ghannam, K; El-Fadel, M

    2013-02-01

    This paper examines the relative source contributions to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 microm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NO(x)), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (MM5/CALPUFF) dispersion modeling system to simulate individual source contributions under several spatial and temporal scales. As the contribution of a particular source to ground-level concentrations can be evaluated by simulating this single source's emissions or, alternatively, total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., the highway and quarries) extending over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from point sources are higher. Sensitivity analysis indicated that chemical transformations of NO(x) are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The current paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially since the incremental improvement in air quality associated with this common abatement strategy may not accomplish the desired benefit in terms of lower exposure, despite costly emissions capping. The application of atmospheric dispersion models for source apportionment helps in identifying major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extending over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at a significant investment in control equipment or process change, may result in minimal return on investment in terms of improvement in air quality at sensitive receptors.

  17. FARSITE: Fire Area Simulator-model development and evaluation

    Treesearch

    Mark A. Finney

    1998-01-01

    A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.

  18. A Targeted Search for Point Sources of EeV Photons with the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barreira Luz, R. J.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Di Giulio, C.; Di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; Dorosti, Q.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Gorham, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. 
A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wirtz, M.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.

    2017-03-01

    Simultaneous measurements of air showers with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for EeV photon point sources. Several Galactic and extragalactic candidate objects are grouped in classes to reduce the statistical penalty of many trials from that of a blind search and are analyzed for a significant excess above the background expectation. The presented search does not find any evidence for photon emission at candidate sources, and combined p-values for every class are reported. Particle and energy flux upper limits are given for selected candidate sources. These limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region.

  19. Illusion induced overlapped optics.

    PubMed

    Zang, XiaoFei; Shi, Cheng; Li, Zhou; Chen, Lin; Cai, Bin; Zhu, YiMing; Zhu, HaiBin

    2014-01-13

    The traditional transformation-based cloak can only hide objects, by bending the incident electromagnetic waves around the hidden region. In this paper, we prove that invisibility cloaks can also be applied to realize overlapped optics. No matter how many in-phase point sources are located in the hidden region, all of them can overlap each other (which can be considered an illusion effect), leading to a perfect optical interference effect. In addition, a singular-parameter-independent cloak is also designed to obtain quasi-overlapped optics. Even more striking, if N identical, separated, in-phase point sources are covered with the illusion media, the total power outside the transformation region is N^2*I0 rather than N*I0 (where I0 is the power of a single point source and N is the number of point sources), which seems to violate the law of conservation of energy. A theoretical model based on the interference effect is proposed to interpret the total power of these two kinds of overlapped-optics effects. Our investigation may have wide applications in high-power coherent laser beams, multiple laser diodes, and so on.
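
    The N^2 scaling follows directly from coherent superposition: in-phase amplitudes add before squaring. A short numerical check:

        import numpy as np

        N, a = 4, 1.0 + 0.0j            # N identical in-phase sources
        I0 = abs(a)**2                  # power of a single source
        print(abs(N * a)**2 / I0)       # -> 16.0 = N**2, not N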

  20. Strategy for Texture Management in Metals Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.

    Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and with improved function. Lagging behind the design complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, the standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.

  1. Strategy for Texture Management in Metals Additive Manufacturing

    DOE PAGES

    Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.; ...

    2017-01-31

    Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and with improved function. Lagging behind the design complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, the standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabriele, Fatuzzo; Michele, Mangiameli, E-mail: amichele.mangiameli@dica.unict.it; Giuseppe, Mussumeci

    Laser scanning is a technology that allows the geometric surveying of objects to be carried out in a short time with a high level of detail and completeness, based on the signal emitted by the laser and the corresponding return signal. When the incident laser radiation hits the object to be detected, the radiation is reflected. The purpose is to build a three-dimensional digital model that allows the reality of the object to be reconstructed and studies regarding its design, restoration and/or conservation to be conducted. When the laser scanner is equipped with a digital camera, the result of the measurement process is a set of points in XYZ coordinates of high density and accuracy, with radiometric RGB tones. In this case, the set of measured points is called a "point cloud" and allows the reconstruction of the Digital Surface Model. Although post-processing is usually performed with closed-source software, whose copyright restricts free use, free and open source software can perform at least as well. Indeed, the latter can be freely used and offers the possibility to inspect and even customize the source code. The experience begun at the Faculty of Engineering in Catania is aimed at evaluating a free and open source tool, MeshLab (an Italian software for data processing), against a reference closed-source data-processing package, RapidForm. In this work, we compare the results obtained with MeshLab and RapidForm through the planning of the survey and the acquisition of the point cloud of a morphologically complex statue.

  3. A program to calculate pulse transmission responses through transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei

    2018-05-01

    We provide a program (AOTI2D) to model the responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built with the distributed point source method, which treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program offers basic wave parameters, including phase and group velocities, polarization, anisotropic reflection coefficients and directivity patterns, and models the wave fields, static wave beam, and observed signals for pulse transmission measurements, taking into account the material's elastic stiffnesses and orientation, the sample dimensions, and the size and positions of the transmitters and receivers. The program can be applied to exhibit ultrasonic beam behavior in anisotropic media, such as the skew and diffraction of ultrasonic beams, and to analyze their effect on pulse transmission measurements. The program should be a useful tool for designing experimental configurations and interpreting the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
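
    The core distributed-point-source superposition can be sketched in a few lines for the scalar case (the elastic, ray-traced implementation in AOTI2D is considerably richer): the field at each observation point is a sum of spherical waves from the point sources representing the transducer face.

        import numpy as np

        def dpsm_field(src_pts, amps, obs_pts, k):
            # src_pts: (M, 3) point sources modelling the transducer face
            # amps:    (M,) complex source strengths
            # obs_pts: (P, 3) observation points; k: wavenumber
            d = obs_pts[:, None, :] - src_pts[None, :, :]
            r = np.linalg.norm(d, axis=2)                  # (P, M) distances
            return (amps[None, :] * np.exp(1j * k * r) / r).sum(axis=1)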

  4. nSTAT: Open-Source Neural Spike Train Analysis Toolbox for Matlab

    PubMed Central

    Cajigas, I.; Malik, W.Q.; Brown, E.N.

    2012-01-01

    Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point-process Generalized Linear Model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limits wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT – an open source neural spike train analysis toolbox for Matlab®. By adopting an Object-Oriented Programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
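
    nSTAT itself is a Matlab toolbox; as a language-neutral illustration of the PP-GLM idea it implements (binned spike counts modelled as Poisson with a log-linear conditional intensity), here is a minimal Python sketch with simulated, hypothetical data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        n_bins = 5000
        stim = rng.normal(size=n_bins)            # hypothetical stimulus covariate
        lam = np.exp(-2.0 + 0.8 * stim)           # true conditional intensity
        counts = rng.poisson(lam)                 # spike counts per time bin

        X = sm.add_constant(stim)
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        print(fit.params)                         # recovers approx [-2.0, 0.8]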

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, William Scott

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  6. A Clustered Extragalactic Foreground Model for the EoR

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2018-05-01

    We review an improved statistical model of extra-galactic point-source foregrounds first introduced in Murray et al. (2017), in the context of the Epoch of Reionization. This model extends the instrumentally-convolved foreground covariance used in inverse-covariance foreground mitigation schemes by considering the cosmological clustering of the sources. In this short work, we show that over scales of k ~ 0.6-40 h Mpc^-1, ignoring source clustering is a valid approximation. This is in contrast to Murray et al. (2017), who found a possibility of false detection if the clustering was ignored. The dominant cause for this change is the introduction of a Galactic synchrotron component which shadows the clustering of sources.

  7. UTILIZATION OF LANDSCAPE INDICATORS TO MODEL WATER QUALITY

    EPA Science Inventory



    Many water-bodies within the United States are contaminated by non-point source (NPS) pollution, which is defined as those materials posing a threat to water quality arising from a number of individual sources and diffused through hydrologic processes. One such NPS pollu...

  8. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategies. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer or Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that allows one to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which argues for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
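
    As context for what these approximations replace, the brute-force finite-source magnification of a uniform disk is an average of the point-source (Paczynski) magnification over the source disk, which costs thousands of evaluations per light-curve point; the quadrupole and hexadecapole formulas reduce this to a handful. A sketch of the brute-force reference (regularization for u -> 0 is omitted):

        import numpy as np

        def A_point(u):
            # Paczynski point-source, point-lens magnification
            return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

        def A_disk(u0, rho, n=200):
            # uniform-disk finite-source magnification by direct averaging
            r = rho * (np.arange(n) + 0.5) / n
            th = np.linspace(0.0, np.pi, n)     # symmetry about the source axis
            rr, tt = np.meshgrid(r, th)
            u = np.sqrt(u0**2 + rr**2 + 2.0 * u0 * rr * np.cos(tt))
            return np.sum(rr * A_point(u)) / np.sum(rr)

        print(A_disk(0.1, 0.05), A_point(0.1))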

  9. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.

  10. Fiscal year 1988 program report: Pennsylvania Center for Water Resources Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, A.J.

    1989-08-01

    Three projects and a program of technology transfer were conducted under the Pennsylvania Fiscal Year 1988 State Water Resources Research Grants Program (PL 98-242, Sect. 104). In a completed study focused on the protection of water supplies, mature slow sand filters were found to remove 100 percent of Cryptosporidium and Giardia cysts. A site-specific study examined the behavior of sedimentary iron and manganese in an acid mine drainage wetland system. A study was initiated to link a comprehensive non-point source model, AGNPS, with current GIS technology to enhance the model's utility for evaluating regional water quality problems related to non-point source agricultural pollution.

  11. A very deep IRAS survey at l(II) = 97 deg, b(II) = +30 deg

    NASA Technical Reports Server (NTRS)

    Hacking, Perry; Houck, James R.

    1987-01-01

    A deep far-infrared survey is presented using over 1000 scans made of a 4 to 6 sq. deg. field at the north ecliptic pole by IRAS. Point sources from this survey are up to 100 times fainter than those in the IRAS point source catalog at 12 and 25 micrometers, and up to 10 times fainter at 60 and 100 micrometers. The 12 and 25 micrometer maps are instrumental-noise-limited, and the 60 and 100 micrometer maps are confusion-noise-limited. The majority of the 12 micrometer point sources are stars within the Milky Way. The 25 micrometer sources are composed almost equally of stars and galaxies. About 80% of the 60 micrometer sources correspond to galaxies on Palomar Observatory Sky Survey (POSS) enlargements. The remaining 20% are probably galaxies below the POSS detection limit. The differential source counts are presented and compared with those predicted by the Bahcall and Soneira Standard Galaxy Model using the B-V-12 micrometer colors of stars without circumstellar dust shells given by Waters, Cote and Aumann. The 60 micrometer source counts are inconsistent with those predicted for a uniformly distributed, nonevolving universe. The implications are briefly discussed.

  12. Convex Hull Aided Registration Method (CHARM).

    PubMed

    Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian

    2017-09-01

    Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target point sets, respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as control points to achieve non-rigid deformation, in the form of a thin-plate spline, of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm shows higher computational efficiency compared to these methods.
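
    A condensed sketch of the rigid-estimation stage (a Kabsch least-squares fit inside a RANSAC loop over already-matched feature pairs; the thresholds and iteration counts here are arbitrary illustrative choices, not the paper's settings):

        import numpy as np

        def kabsch(P, Q):
            # least-squares rotation R and translation t mapping P onto Q
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, cQ - R @ cP

        def ransac_rigid(P, Q, iters=500, tol=0.05, seed=0):
            rng = np.random.default_rng(seed)
            best, best_count = None, 0
            for _ in range(iters):
                idx = rng.choice(len(P), 3, replace=False)  # minimal sample
                R, t = kabsch(P[idx], Q[idx])
                inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < tol
                if inliers.sum() > best_count:
                    best, best_count = kabsch(P[inliers], Q[inliers]), inliers.sum()
            return best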

  13. X-ray Modeling of η Carinae & WR140 from SPH Simulations

    NASA Astrophysics Data System (ADS)

    Russell, Christopher M. P.; Corcoran, Michael F.; Okazaki, Atsuo T.; Madura, Thomas I.; Owocki, Stanley P.

    2011-01-01

    The colliding wind binary (CWB) systems η Carinae and WR140 provide unique laboratories for X-ray astrophysics. Their wind-wind collisions produce hard X-rays that have been monitored extensively by several X-ray telescopes, including RXTE. To interpret these RXTE X-ray light curves, we model the wind-wind collision using 3D smoothed particle hydrodynamics (SPH) simulations. Adiabatic simulations that account for the emission and absorption of X-rays from an assumed point source at the apex of the wind-collision shock cone by the distorted winds can closely match the observed 2-10 keV RXTE light curves of both η Car and WR140. This point-source model can also explain the early recovery of η Car's X-ray light curve from the 2009.0 minimum by a factor of 2-4 reduction in the mass loss rate of η Car. Our more recent models relax the point-source approximation and account for the spatially extended emission along the wind-wind interaction shock front. For WR140, the computed X-ray light curve again matches the RXTE observations quite well. But for η Car, a hot, post-periastron bubble leads to an emission level that does not match the extended X-ray minimum observed by RXTE. Initial results from incorporating radiative cooling and radiatively-driven wind acceleration via a new anti-gravity approach into the SPH code are also discussed.

  14. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  15. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM), which predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL content in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance were only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; determining the minimum amount of such data required, however, remains for future work. Finally, the stream tube model was shown to be a less ambiguous predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.
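
    The stream-tube idea itself is compact enough to sketch: represent the source zone as an ensemble of tubes whose travel times and NAPL contents follow assumed lognormal statistics, let each tube elute at the equilibrium solubility until its NAPL is exhausted, and average across tubes. This is only a conceptual toy, not the ESM as parameterized in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000                                    # stream tubes in the ensemble
        arrive = rng.lognormal(mean=1.0, sigma=0.5, size=n)   # travel times (assumed)
        mass = rng.lognormal(mean=0.0, sigma=1.0, size=n)     # NAPL mass per tube
        Cs, q = 1.0, 1.0                            # equilibrium solubility, tube flow

        deplete = arrive + mass / (q * Cs)          # time each tube's NAPL runs out
        t_grid = np.linspace(0.0, 30.0, 301)
        C = np.array([Cs * np.mean((t >= arrive) & (t < deplete)) for t in t_grid])
        # C(t) is the flux-averaged effluent concentration: it ramps up as dissolved
        # fronts arrive and tails off as individual tubes deplete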

  16. Heat Flow Contours and Well Data Around the Milford FORGE Site

    DOE Data Explorer

    Joe Moore

    2016-03-09

    This submission contains a shapefile of heat flow contour lines around the FORGE site located in Milford, Utah. The heat flow model was interpolated from 66 data points in the Milford_wells shapefile using the kriging method in the Geostatistical Analyst tool of ArcGIS, and the resulting surface was smoothed 100%. The well dataset contains 59 wells from various sources, with lat/long coordinates, temperature, quality, basement depth, and heat flow. These data were used to model the site's specific characteristics.
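
    The interpolation itself was done in ArcGIS, but an equivalent open-source sketch using PyKrige looks like the following; the coordinates, heat-flow values and variogram choice are placeholders, not the submission's data.

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        # hypothetical well coordinates (lon, lat) and heat-flow values (mW/m^2)
        lon = np.array([-112.90, -112.80, -112.95, -112.85, -112.75])
        lat = np.array([38.40, 38.45, 38.50, 38.55, 38.42])
        hf  = np.array([180.0, 210.0, 160.0, 240.0, 195.0])

        ok = OrdinaryKriging(lon, lat, hf, variogram_model="spherical")
        grid_lon = np.linspace(-113.0, -112.7, 60)
        grid_lat = np.linspace(38.35, 38.60, 60)
        z, ss = ok.execute("grid", grid_lon, grid_lat)   # kriged surface + variance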

  17. Teleseismic Body Wave Analysis for the 27 September 2003 Altai, Earthquake (Mw7.4) and Large Aftershocks

    NASA Astrophysics Data System (ADS)

    Gomez-Gonzalez, J. M.; Mellors, R.

    2007-05-01

    We investigate the kinematics of the rupture process of the September 27, 2003, Mw 7.3 Altai earthquake and its associated large aftershocks. This is the largest earthquake to strike the Altai mountains in the last 50 years, and it provides important constraints on the ongoing tectonics. The fault plane solution obtained by teleseismic body waveform modeling indicates a predominantly strike-slip event (strike=130, dip=75, rake=170). The scalar moment for the main shock ranges from 0.688 to 1.196×10^20 N m, with a source duration of about 20 to 42 s and an average centroid depth of 10 km. The source duration would indicate a fault length of about 130-270 km. The main shock was followed closely by two aftershocks (Mw 5.7, Mw 6.4) that occurred the same day; another aftershock (Mw 6.7) occurred on 1 October 2003. We also modeled the second aftershock (Mw 6.4) to assess geometric similarities in their respective rupture processes. This aftershock occurred spatially very close to the mainshock and has a similar fault plane solution (strike=128, dip=71, rake=154) and centroid depth (13 km). Several local conditions, such as the crustal model and fault geometry, affect the correct estimation of some source parameters. We perform a sensitivity evaluation of several parameters, including centroid depth, scalar moment and source duration, based on point- and finite-source modeling. The point-source approximation results serve as starting parameters for the finite-source exploration. We evaluate the different reported parameters to discard poorly constrained models. In addition, deformation data acquired by InSAR are also included in the analysis.
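
    For reference, the moment magnitudes implied by the quoted scalar moment range follow from the standard Hanks-Kanamori relation (a routine check, not part of the paper):

        import math

        def moment_magnitude(m0):
            """Hanks & Kanamori (1979): Mw from scalar moment in N m."""
            return (2.0 / 3.0) * (math.log10(m0) - 9.1)

        print(moment_magnitude(0.688e20), moment_magnitude(1.196e20))  # ~7.16, ~7.32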

  18. A prototype of the procedure of strong ground motion prediction for intraslab earthquake based on characterized source model

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Sekiguchi, H.

    2011-12-01

    We propose a prototype procedure for constructing source models for strong motion prediction during intraslab earthquakes based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on empirical scaling relationships for intraslab earthquakes and on the correspondence between the SMGA (strong motion generation area; Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependency of S and Sa on Mo:

        S (km^2) = 6.57 × 10^-11 × Mo^(2/3)  (Mo in N m)   (1)
        Sa (km^2) = 1.04 × 10^-11 × Mo^(2/3)  (Mo in N m)   (2)

    Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on these empirical relationships, we give the following procedure for constructing source models of intraslab earthquakes for strong motion prediction. [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area from the empirical scaling relationships (1) and (2). [3] Assume a square rupture area and square asperities. [4] Assume the source mechanism to be the same as that of small events in the source region. [5] Prepare plural scenarios covering a variety of asperity numbers and rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
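
    Steps [1]-[3] translate directly into code. A sketch for a hypothetical Mw 7.0 intraslab event, using the two empirical relationships quoted above:

        import math

        def intraslab_source_dimensions(mw):
            m0 = 10.0 ** (1.5 * mw + 9.1)        # scalar moment, N m (Hanks-Kanamori)
            S  = 6.57e-11 * m0 ** (2.0 / 3.0)    # rupture area, km^2 (eq. 1)
            Sa = 1.04e-11 * m0 ** (2.0 / 3.0)    # total asperity area, km^2 (eq. 2)
            side = math.sqrt(S)                  # side of the assumed square rupture
            return m0, S, Sa, side

        print(intraslab_source_dimensions(7.0))  # ~4.0e19 N m, ~770 km^2, ~120 km^2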

  19. A systematic analysis of the Braitenberg vehicle 2b for point-like stimulus sources.

    PubMed

    Rañó, Iñaki

    2012-09-01

    Braitenberg vehicles have been used experimentally in robotics for decades with limited empirical understanding. This paper presents the first mathematical model of vehicle 2b, which displays so-called aggression behaviour, and analyses the possible trajectories for point-like smooth stimulus sources. This sensory-motor steering control mechanism is used to implement biologically grounded target approach, target-seeking or obstacle-avoidance behaviour. However, the analysis of the resulting model reveals that complex and unexpected trajectories can result even for point-like stimuli. We also prove how the implementation of the controller and the vehicle morphology interact to affect the behaviour of the vehicle. This work provides a better understanding of Braitenberg vehicle 2b, explains experimental results and paves the way for a formally grounded application in robotics as well as for a new way of understanding target seeking in biology.
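
    A minimal kinematic sketch of vehicle 2b (crossed excitatory sensor-motor connections steering toward a point stimulus) is easy to write down; the sensor geometry, gains and stimulus law below are illustrative assumptions, not the paper's exact model.

        import numpy as np

        def stimulus(p):                       # point-like smooth stimulus at origin
            return 1.0 / (1.0 + p @ p)

        def step(x, y, th, dt=0.01, d=0.1, w=0.2, k=2.0):
            """d: sensor offset ahead, w: half wheel base, k: motor gain."""
            left  = np.array([x + d*np.cos(th) - w*np.sin(th),
                              y + d*np.sin(th) + w*np.cos(th)])
            right = np.array([x + d*np.cos(th) + w*np.sin(th),
                              y + d*np.sin(th) - w*np.cos(th)])
            vl = k * stimulus(right)           # crossed connections -> "aggression"
            vr = k * stimulus(left)
            v, omega = (vl + vr) / 2.0, (vr - vl) / (2.0 * w)
            return x + v*np.cos(th)*dt, y + v*np.sin(th)*dt, th + omega*dt

        x, y, th = -2.0, 1.0, 0.0
        for _ in range(5000):
            x, y, th = step(x, y, th)          # the trajectory is drawn toward the
        print(round(x, 3), round(y, 3))        # source, speeding up as it closes in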

  20. StreamVOC - A deterministic source-apportionment model to estimate volatile organic compound concentrations in rivers and streams

    USGS Publications Warehouse

    Asher, William E.; Bender, David A.; Zogorski, John S.; Bartholomay, Roy C.

    2006-01-01

    This report documents the construction and verification of the model StreamVOC, which estimates (1) the time- and position-dependent concentrations of volatile organic compounds (VOCs) in rivers and streams and (2) the source apportionment (SA) of those concentrations. The model considers how different types of sources and loss processes can act together to yield a given observed VOC concentration. Reasons for interest in the relative and absolute contributions of different sources to contaminant concentrations include the need to apportion (1) the origins of observed contamination and (2) the associated human and ecosystem risks. For VOCs, sources of interest include the atmosphere (by absorption) as well as point and nonpoint inflows of VOC-containing water. Loss processes of interest include volatilization to the atmosphere, degradation, and outflows of VOC-containing water from the stream to local ground water. This report presents the details of StreamVOC and compares model output with measured concentrations for eight VOCs found in the Aberjona River at Winchester, Massachusetts. Input data for the model were obtained during a synoptic study of the stream system conducted July 11-13, 2001, as part of the National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey. The input data included a variety of basic stream characteristics (for example, flows, temperature, and VOC concentrations). The StreamVOC concentration results agreed moderately well with the measured concentration data for several VOCs and provided compound-dependent SA estimates as a function of longitudinal distance down the river. For many VOCs, the quality of the agreement between the model-simulated and measured concentrations could be improved by simple adjustments of the model input parameters. In general, this study illustrated (1) the considerable difficulty of correctly quantifying the locations and magnitudes of ground-water-related sources of contamination in streams, and (2) that model-based estimates of stream VOC concentrations are likely to be most accurate when the major sources are point sources or tributaries whose spatial extent and magnitude are tightly constrained and easily determined.
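
    The bookkeeping behind such source apportionment can be sketched with a toy steady-state, plug-flow reach in which each point source's contribution decays by a single first-order loss rate; all rates, flows and loads below are invented, and StreamVOC itself is far more detailed.

        import numpy as np

        n, dx, u = 50, 100.0, 0.2       # segments, segment length (m), velocity (m/s)
        k = 2e-4                        # combined volatilization+degradation rate (1/s)
        decay = np.exp(-k * dx / u)     # per-segment survival factor

        inflows = {0: 4.0, 25: 2.0}     # segment index -> concentration added (ug/L)
        share = {src: np.zeros(n) for src in inflows}
        for src, c in share.items():    # track each source's contribution separately
            for i in range(n):
                upstream = c[i - 1] * decay if i > 0 else 0.0
                c[i] = upstream + (inflows[src] if i == src else 0.0)

        total = sum(share.values())     # modeled concentration profile down the reach
        print({src: round(c[-1] / total[-1], 3) for src, c in share.items()})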

  1. Theoretical evaluation of accuracy in position and size of brain activity obtained by near-infrared topography

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji

    2004-06-01

    Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate how the image reconstruction algorithm and the interval of measurement points affect the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals between the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profiles of the source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional reconstruction of the topographic image obtained with a larger interval of measurement points. Near-infrared topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.
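
    The mapping-versus-reconstruction distinction can be illustrated in one dimension: given overlapping sensitivity profiles (rows of A, one per source-detector pair), mapping smears a small absorption change, while a regularized inversion (plain Tikhonov ridge here, as an assumed stand-in for the paper's reconstruction method) localizes it more sharply.

        import numpy as np

        pos = np.linspace(0, 60, 121)                 # mm along the head surface
        centers = np.arange(6, 60, 12)                # measurement points, 12 mm apart
        A = np.exp(-0.5 * ((pos - centers[:, None]) / 8.0) ** 2)  # sensitivity profiles

        truth = np.exp(-0.5 * ((pos - 30.0) / 4.0) ** 2)          # small activation
        y = A @ truth                                             # measured changes

        mapped = A.T @ y                              # mapping: back-project on profiles
        lam = 1e-1                                    # regularization strength (assumed)
        recon = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(len(centers)), y)
        # 'recon' concentrates the change near 30 mm; 'mapped' is much broader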

  2. A CMB foreground study in WMAP data: Extragalactic point sources and zodiacal light emission

    NASA Astrophysics Data System (ADS)

    Chen, Xi

    The Cosmic Microwave Background (CMB) radiation is the remnant heat from the Big Bang. It serves as a primary tool for understanding the global properties, content and evolution of the universe. Since 2001, NASA's Wilkinson Microwave Anisotropy Probe (WMAP) satellite has been mapping the full-sky anisotropy with unprecedented accuracy, precision and reliability. The CMB angular power spectrum calculated from the WMAP full-sky maps not only enables accurate testing of cosmological models, but also places significant constraints on model parameters. The CMB signal in the WMAP sky maps is contaminated by microwave emission from the Milky Way and from extragalactic sources. Therefore, in order to use the maps reliably for cosmological studies, the foreground signals must be well understood and removed from the maps. This thesis focuses on the separation of two foreground contaminants from the WMAP maps: extragalactic point sources and zodiacal light emission. Extragalactic point sources constitute the most important foreground on small angular scales. Various methods have been applied to the WMAP single-frequency maps to extract sources. However, due to the limited angular resolution of WMAP, it is possible to confuse positive CMB excursions with point sources or to miss sources that are embedded in negative CMB fluctuations. We present a novel CMB-free source-finding technique that utilizes the spectrum difference between point sources and the CMB to form internal linear combinations of multifrequency maps that suppress the CMB and better reveal sources. When applied to the WMAP 41, 61 and 94 GHz maps, this technique has not only enabled detection of sources previously cataloged by independent methods, but has also revealed new sources. Without the noise contribution from the CMB, the method responds rapidly to integration time: the number of detections grows as t^0.72 in the two-band search and t^0.70 in the three-band search from one to five years of data, compared with t^0.40 for the WMAP catalogs. Our source catalogs are a good supplement to the existing WMAP source catalogs, and the method itself is proven to be both complementary to and competitive with current source-finding techniques in WMAP maps. Scattered light and thermal emission from the interplanetary dust (IPD) within our Solar System are major contributors to the diffuse sky brightness at most infrared wavelengths. For wavelengths longer than 3.5 μm, the thermal emission of the IPD dominates over scattering, and this emission is often referred to as the Zodiacal Light Emission (ZLE). To set a limit on the ZLE contribution to the WMAP data, we have performed a simultaneous fit of the yearly WMAP time-ordered data to the time variation of the ZLE predicted by the DIRBE IPD model (Kelsall et al. 1998) evaluated at 240 μm, plus ℓ = 1-4 CMB components. It is found that although this fitting procedure successfully recovers the CMB dipole to 0.5% accuracy, it is not sensitive enough to determine the ZLE signal or the other multipole moments very accurately.
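
    The core of the CMB-free trick is that the CMB is achromatic in thermodynamic temperature units, so any linear combination of frequency maps whose weights sum to zero cancels it exactly, while a source with a different spectrum survives. A toy two-band illustration with an assumed source spectral ratio:

        import numpy as np

        rng = np.random.default_rng(2)
        npix = 10000
        cmb = rng.standard_normal(npix) * 70.0      # identical in both bands (uK, thermo)
        src = np.zeros(npix); src[1234] = 500.0     # one point source, band-1 amplitude
        alpha = 1.6                                 # assumed band-2/band-1 source ratio
        band1 = cmb + src + 20.0 * rng.standard_normal(npix)
        band2 = cmb + alpha * src + 20.0 * rng.standard_normal(npix)

        clean = band2 - band1                       # weights (+1, -1) sum to zero
        # the CMB cancels exactly; the source survives with amplitude (alpha-1)*500,
        # so the detection noise no longer includes CMB fluctuations
        print(clean[1234], clean.std())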

  3. Systematic Review: Impact of point sources on antibiotic-resistant bacteria in the natural environment.

    PubMed

    Bueno, I; Williams-Nguyen, J; Hwang, H; Sargeant, J M; Nault, A J; Singer, R S

    2018-02-01

    Point sources such as wastewater treatment plants and agricultural facilities may have a role in the dissemination of antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARG). To analyse the evidence for increases in ARB in the natural environment associated with these point sources of ARB and ARG, we conducted a systematic review. We evaluated 5,247 records retrieved through database searches, including studies that ascertained either ARG or ARB outcomes. All studies were screened for relevance and for the suitability of their methodology to address our review question. A risk-of-bias assessment was conducted on the final pool of studies included in the review. This article summarizes the evidence only for those studies with ARB outcomes (n = 47). Thirty-five studies were at high (n = 11) or unclear (n = 24) risk of bias in the estimation of source effects due to lack of information and/or failure to control for confounders. Statistical analysis was used in ten studies, of which one assessed the effect of multiple sources using modelling approaches; none reported effect measures. Most studies reported higher ARB prevalence or concentration downstream of or near the source. However, this evidence was primarily descriptive, and it could not be concluded that there is a clear impact of point sources on increases in ARB in the environment. To quantify increases in ARB in the environment due to specific point sources, there is a need for studies that emphasize study design, control of biases and analytical tools that provide effect measure estimates.

  4. Industrial pollution and the management of river water quality: a model of Kelani River, Sri Lanka.

    PubMed

    Gunawardena, Asha; Wijeratne, E M S; White, Ben; Hailu, Atakelty; Pandit, Ram

    2017-08-19

    Water quality of the Kelani River has become a critical issue in Sri Lanka due to the high cost of maintaining drinking water standards and the market and non-market costs of deteriorating river ecosystem services. By integrating a catchment model with a river model of water quality, we developed a method to estimate the effect of pollution sources on ambient water quality. Using integrated model simulations, we estimate (1) the relative contribution from point (industrial and domestic) and non-point sources (river catchment) to river water quality and (2) pollutant transfer coefficients for zones along the lower section of the river. Transfer coefficients provide the basis for policy analyses in relation to the location of new industries and the setting of priorities for industrial pollution control. They also offer valuable information to design socially optimal economic policy to manage industrialized river catchments.

  5. Near-field transport of 129I from a point source in an in-room disposal vault

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolar, M.; Leneveu, D.M.; Johnson, L.H.

    1995-12-31

    A very small number of disposal containers of heat-generating nuclear waste may have initial manufacturing defects that would lead to pin-hole type failures at the time of, or shortly after, emplacement. For sufficiently long-lived containers, only the initial defects need to be considered in modeling release rates from the disposal vault. Two approaches to modeling near-field mass transport from a single point source within a disposal room have been compared: the finite-element code MOTIF (A Model Of Transport In Fractured/porous media) and a boundary integral method (BIM). These two approaches were found to give identical results for a simplified model of the disposal room without groundwater flow. MOTIF has then been used to study the effects of groundwater flow on the mass transport out of the emplacement room.
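
    For orientation, a textbook special case (not the MOTIF/BIM setup): with no groundwater flow, the concentration from an instantaneous point release of mass M in an infinite porous medium is the classical 3-D diffusion kernel, exactly the kind of closed-form solution such codes can be benchmarked against. Parameter values below are illustrative.

        import numpy as np

        def point_source_conc(r, t, M=1.0, D=1e-10, phi=0.3):
            """Instantaneous point release in porous media: C(r, t).
            D: effective diffusion coefficient (m^2/s); phi: porosity (assumed)."""
            return (M / phi) / (8.0 * (np.pi * D * t) ** 1.5) * np.exp(-r**2 / (4*D*t))

        # concentration 0.5 m from the defect after 1000 years (illustrative numbers)
        t = 1000 * 365.25 * 24 * 3600.0
        print(point_source_conc(0.5, t))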

  6. Multi-Wavelength Study of W40 HII Region

    NASA Astrophysics Data System (ADS)

    Shenoy, Sachindev S.; Shuping, R.; Vacca, W. D.

    2013-01-01

    W40 is an HII region (Sh2-64) within the Serpens molecular cloud in the Aquila Rift region. Recent near-infrared spectroscopic observations of the brightest members of the central cluster of W40 reveal that the region is powered by at least three early B-type stars and one late O-type star. Near- and mid-infrared spectroscopy and photometry, combined with SED modeling of these sources, suggest that the distance to the cluster is between 455 and 535 pc, with about 10 mag of visual extinction. Velocity and extinction measurements of all the nearby regions (Serpens Main, the Aquila Rift, and MWC 297) suggest that the entire system (including the W40 extended emission) is associated with the extinction wall at 260 pc. Here we present some preliminary results of a multi-wavelength study of the central cluster and the extended emission of W40. We used Spitzer IRAC data to measure accurate photometry of all the point sources within 4.32 pc of W40 via PRF fitting. This will provide us with a complete census of YSOs in the W40 region. The Spitzer data are combined with publicly available data in the 2MASS, WISE and Herschel archives and used to model YSOs in the region. The SEDs and near-IR colors of all the point sources should allow us to determine the age of the central cluster of W40. The results from this work will put W40 in a proper stellar evolutionary context. After subtracting the point sources from the IRAC images, we are able to study the extended emission free from point-source contamination. We choose a few morphologically interesting regions in W40 and use the data to model the dust emission. The results from this effort will allow us to study the correlation between dust properties and the large-scale physical properties of W40.

  7. Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns

    NASA Astrophysics Data System (ADS)

    Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar

    2014-05-01

    We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source built on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in a 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), and rupture velocity (1). A node-based second-order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP; Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: the Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best combines the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., a smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
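
    A minimal construction of such a slip function (a sketch only: a Gaussian along the patch axes, skewed along strike via a skew-normal factor and tapered to zero at the elliptical border; all parameter values are arbitrary):

        import numpy as np
        from scipy.stats import norm

        def dsm_slip(nx=201, nz=101, ax=10e3, az=5e3, peak=2.0, skew=3.0):
            """Skewed-Gaussian slip over an elliptical rupture patch (meters)."""
            x = np.linspace(-ax, ax, nx)[None, :]          # along strike
            z = np.linspace(-az, az, nz)[:, None]          # down dip
            rho2 = (x / ax) ** 2 + (z / az) ** 2           # elliptical radius^2
            gauss = np.exp(-0.5 * ((x / (0.4*ax))**2 + (z / (0.4*az))**2))
            slip = 2.0 * gauss * norm.cdf(skew * x / (0.4 * ax))  # skew along strike
            slip *= np.clip(1.0 - rho2, 0.0, None)         # smooth taper to the border
            return peak * slip / slip.max()

        slip = dsm_slip()
        print(slip.shape, round(float(slip.max()), 2))     # (101, 201) 2.0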

  8. Identifying and characterizing major emission point sources as a basis for geospatial distribution of mercury emissions inventories

    NASA Astrophysics Data System (ADS)

    Steenhuisen, Frits; Wilson, Simon J.

    2015-07-01

    Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emission areas, including the polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emission is required. This paper describes the methodology applied, and the results of work conducted, to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point-source locations, including significantly more point-source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public-domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emission estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geospatially distribute emissions, such as the use of 'proxy' datasets to represent emission patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point sources are discussed. These include improvements to both national (geo-referenced) emission inventories and to other resources that can be employed when such national inventories are lacking.

  9. Lidar method to estimate emission rates from extended sources

    USDA-ARS?s Scientific Manuscript database

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  10. UTILIZATION OF LANDSCAPE INDICATORS TO MODEL WATERSHED IMPAIRMENT

    EPA Science Inventory



    Many water bodies within the United States are contaminated by non-point source (NPS) pollution, which is defined as those materials posing a threat to water quality arising from a number of individual sources and diffused through hydrologic processes. One such NPS
    pol...

  11. AIR QUALITY SIMULATION MODEL PERFORMANCE FOR ONE-HOUR AVERAGES

    EPA Science Inventory

    If a one-hour standard for sulfur dioxide were promulgated, air quality dispersion modeling in the vicinity of major point sources would be an important air quality management tool. Would currently available dispersion models be suitable for use in demonstrating attainment of suc...

  12. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...

  13. Geometrical analysis of an optical fiber bundle displacement sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Atsushi; Tanaka, Kohichi

    1996-12-01

    The performance of a multifiber optical lever was geometrically analyzed by extending the Cook and Hamm model [Appl. Opt. 34, 5854-5860 (1995)] for a basic seven-fiber optical lever. Generalized relationships between the sensitivity and displacement detection limit and the fiber core radius, illumination irradiance, and coupling angle were obtained by analyzing three types of light source: a parallel beam, an infinite plane source, and a point source. The point-source analysis was confirmed by a measurement using a light-emitting diode as the source. The sensitivity of the fiber-optic lever is inversely proportional to the fiber core radius, whereas the received light power is proportional to the number of illuminating and receiving fibers. Thus, bundling finer fibers in larger numbers of illuminating and receiving fibers is more effective for improving the sensitivity and the displacement detection limit.

  14. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    NASA Astrophysics Data System (ADS)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method compares a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of a maximum are found by interpolating a second-order polynomial through each trio of points in which the magnitude of the middle point is greater than those of its two nearest neighbors in one direction. In theoretical models with multiple sources, however, this condition does not recover all maximum horizontal gradient locations, and it can be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition that identifies more maximum horizontal locations within the calculation grid. This additional condition improves the algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient can be helpful for connecting the edges of complicated source bodies.
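
    A compact version of the original Blakely-Simpson test (before the authors' additional condition) over a gridded horizontal-gradient magnitude g might read as follows; the parabola vertex gives the sub-grid location and magnitude of each maximum.

        import numpy as np

        def horizontal_gradient_maxima(g, dx=1.0):
            """Simplified Blakely & Simpson (1986) scan: test the four directions
            through each 3x3 neighborhood of the gradient-magnitude grid g."""
            maxima = []
            dirs = ((0, 1), (1, 0), (1, 1), (1, -1))   # E-W, N-S and two diagonals
            for i in range(1, g.shape[0] - 1):
                for j in range(1, g.shape[1] - 1):
                    for di, dj in dirs:
                        a, b, c = g[i-di, j-dj], g[i, j], g[i+di, j+dj]
                        if b > a and b > c:            # middle point exceeds its trio
                            denom = a - 2.0*b + c      # parabola curvature (< 0 here)
                            s = 0.5 * (a - c) / denom  # vertex offset, |s| <= 0.5
                            gmax = b - 0.5 * denom * s * s
                            maxima.append(((i + s*di) * dx, (j + s*dj) * dx, gmax))
                            break                      # record one strike per node
            return maxima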

  15. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantages of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.

  16. City model enrichment

    NASA Astrophysics Data System (ADS)

    Smart, Philip D.; Quinn, Jonathan A.; Jones, Christopher B.

    The combination of mobile communication technology with location and orientation aware digital cameras has introduced increasing interest in the exploitation of 3D city models for applications such as augmented reality and automated image captioning. The effectiveness of such applications is, at present, severely limited by the often poor quality of semantic annotation of the 3D models. In this paper, we show how freely available sources of georeferenced Web 2.0 information can be used for automated enrichment of 3D city models. Point referenced names of prominent buildings and landmarks mined from Wikipedia articles and from the OpenStreetMaps digital map and Geonames gazetteer have been matched to the 2D ground plan geometry of a 3D city model. In order to address the ambiguities that arise in the associations between these sources and the city model, we present procedures to merge potentially related buildings and implement fuzzy matching between reference points and building polygons. An experimental evaluation demonstrates the effectiveness of the presented methods.

  17. Scaled SFS method for Lambertian surface 3D measurement under point source lighting.

    PubMed

    Ma, Long; Lyu, Yi; Pei, Xin; Hu, Yan Min; Sun, Feng Ming

    2018-05-28

    A Lambertian surface is a very important assumption in shape from shading (SFS) and is widely used in many measurement cases. In this paper, a novel scaled SFS method is developed to measure the shape of a Lambertian surface with dimensions. A more accurate light source model is investigated for illumination by a simple point light source, and the relationship between the surface depth map and the recorded image grayscale is established by introducing the camera matrix into the model. Together with the constraints of brightness, smoothness and integrability, the surface shape with dimensions can be obtained by analyzing only one image using the scaled SFS method. Simulations show a close match between the simulated structures and the results; the reconstruction root mean square error (RMSE) is below 0.6 mm. A further experiment measuring the internal surface of a PVC tube gives an overall measurement error below 2%.

  18. The displacement of the sun from the galactic plane using IRAS and FAUST source counts

    NASA Technical Reports Server (NTRS)

    Cohen, Martin

    1995-01-01

    I determine the displacement of the Sun from the Galactic plane by interpreting IRAS point-source counts at 12 and 25 microns in the Galactic polar caps using the latest version of the SKY model for the point-source sky (Cohen 1994). A value of z_Sun = 15.5 +/- 0.7 pc north of the plane provides the best match to the ensemble of useful IRAS data. Shallow K-band counts at the north Galactic pole are also best fitted by this offset, while limited FAUST far-ultraviolet counts at 1660 A near the same pole favor a value near 14 pc. Combining the many IRAS determinations with the few FAUST values suggests that a value of z_Sun = 15.0 +/- 0.5 pc (internal error only) would satisfy these high-latitude data sets in both wavelength regimes, within the context of the SKY model.

  19. The acoustic field of a point source in a uniform boundary layer over an impedance plane

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.; Willshire, W. L., Jr.

    1986-01-01

    The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.

  20. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple-source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time, which constitutes a first step toward automating mapping processes such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, and the resulting optimized values are corroborated on separate validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
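
    With OpenCV, the approach can be sketched as below; cv2.SIFT_create exposes five tunable parameters broadly corresponding to those the study optimizes, but the values and file names shown are illustrative placeholders, not the paper's optima.

        import cv2

        img1 = cv2.imread("epoch_2005.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
        img2 = cv2.imread("epoch_2015.png", cv2.IMREAD_GRAYSCALE)

        # more permissive, larger-scale settings favour big, stable features
        sift = cv2.SIFT_create(nfeatures=0, nOctaveLayers=5,
                               contrastThreshold=0.02, edgeThreshold=15, sigma=2.0)
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)

        # Lowe ratio test to keep only confident tie-point matches
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                   if m.distance < 0.75 * n.distance]
        print(len(matches), "tie-point candidates")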

  1. Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique

    NASA Technical Reports Server (NTRS)

    Tiampo, Kristy F.

    1999-01-01

    In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-coded genetic algorithm (GA) inversion similar to that found in Michalewicz (1992). The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source, its depth, horizontal location and volume, using the known surface deformations. It also included the capability of inverting for multiple sources.
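
    A self-contained sketch of the scheme: surface displacements of a Mogi point source define the fitness, and a small real-coded GA searches for depth, horizontal location and volume change. The GA operators and constants below are generic choices, not the project's C implementation.

        import numpy as np

        rng = np.random.default_rng(3)
        nu = 0.25                                     # Poisson's ratio

        def mogi_uz(xs, ys, x0, y0, d, dV):
            """Vertical surface displacement of a Mogi source at depth d."""
            R = np.sqrt((xs - x0)**2 + (ys - y0)**2 + d**2)
            return (1 - nu) / np.pi * dV * d / R**3

        xs, ys = np.meshgrid(np.linspace(-10e3, 10e3, 15), np.linspace(-10e3, 10e3, 15))
        obs = mogi_uz(xs, ys, 1500.0, -800.0, 5000.0, 1e6)   # synthetic "truth"

        lo = np.array([-10e3, -10e3, 1e3, 1e4])       # bounds on x0, y0, depth, dV
        hi = np.array([10e3, 10e3, 10e3, 1e7])

        def fitness(p):
            return -np.sum((mogi_uz(xs, ys, *p) - obs)**2)

        pop = rng.uniform(lo, hi, size=(60, 4))
        for gen in range(200):
            f = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(f)[::-1][:30]]   # elitist selection
            kids = []
            for _ in range(30):
                a, b = parents[rng.integers(30, size=2)]
                w = rng.random(4)
                child = w * a + (1 - w) * b                    # blend crossover
                child += rng.normal(0, 0.02, 4) * (hi - lo)    # Gaussian mutation
                kids.append(np.clip(child, lo, hi))
            pop = np.vstack([parents, kids])
        print(pop[np.argmax([fitness(p) for p in pop])])       # best source found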

  2. Point to point multispectral light projection applied to cultural heritage

    NASA Astrophysics Data System (ADS)

    Vázquez, D.; Alvarez, A.; Canabal, H.; Garcia, A.; Mayorga, S.; Muro, C.; Galan, T.

    2017-09-01

    Use of new light sources based on LED technology should allow the development of systems that combine conservation and exhibition requirements and make these works of art available to the next generations according to sustainability principles. The goal of this work is to develop light systems and sources with an optimized spectral distribution for each specific point of the art piece. This optimization maximizes color fidelity while at the same time minimizing photochemical damage. Perceived color under these sources will be similar (metameric) to the technical requirements given by the restoration team in charge of the conservation and exhibition of the works of art. Depending on the fragility of the exposed art objects (i.e., the spectral responsivity of the material), the irradiance must be kept under a critical level. Therefore, it is necessary to develop a mathematical model that simulates with enough accuracy both the visual effect of the illumination and the photochemical impact of the radiation. The mathematical model is based on a merit function that optimizes the individual intensities of the LED light sources, taking into account the damage function of the material and color space coordinates. Moreover, the algorithm uses weights for damage and color fidelity in order to adapt the model to a specific museum application. In this work we show a sample of this technology applied to a picture by Sorolla (1863-1923), an important Spanish painter, titled "woman walking at the beach".
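
    The merit-function idea can be sketched as a non-negative least-squares problem: choose LED drive levels so the combined spectrum approximates a target while penalizing power at damaging wavelengths. The LED spectra, target and damage function below are invented placeholders.

        import numpy as np
        from scipy.optimize import nnls

        wl = np.arange(380, 781, 5.0)                     # wavelength grid (nm)
        peaks = [420, 450, 520, 590, 630, 660]            # hypothetical LED channels
        S = np.stack([np.exp(-0.5*((wl - p)/15.0)**2) for p in peaks], axis=1)

        target = np.exp(-0.5*((wl - 560)/80.0)**2)        # desired spectral shape
        damage = np.exp(-(wl - 380) / 80.0)               # short wavelengths harm more

        w_fid, w_dam = 1.0, 0.5                           # fidelity vs. damage weights
        A = np.vstack([w_fid * S, w_dam * np.diag(damage) @ S])
        b = np.concatenate([w_fid * target, np.zeros_like(wl)])
        intensities, _ = nnls(A, b)                       # non-negative LED drive levels
        print(np.round(intensities, 3))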

  3. Angular displacement measuring device

    NASA Technical Reports Server (NTRS)

    Seegmiller, H. Lee B. (Inventor)

    1992-01-01

    A system for measuring the angular displacement of a point of interest on a structure, such as aircraft model within a wind tunnel, includes a source of polarized light located at the point of interest. A remote detector arrangement detects the orientation of the plane of the polarized light received from the source and compares this orientation with the initial orientation to determine the amount or rate of angular displacement of the point of interest. The detector arrangement comprises a rotating polarizing filter and a dual filter and light detector unit. The latter unit comprises an inner aligned filter and photodetector assembly which is disposed relative to the periphery of the polarizer so as to receive polarized light passing the polarizing filter and an outer aligned filter and photodetector assembly which receives the polarized light directly, i.e., without passing through the polarizing filter. The purpose of the unit is to compensate for the effects of dust, fog and the like. A polarization preserving optical fiber conducts polarized light from a remote laser source to the point of interest.

  4. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.
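
    The speed of the approach comes from reciprocity: once strain Green tensor traces from a station to a grid of candidate sources are precomputed, a synthetic for any moment tensor is a weighted sum of six stored traces rather than a new 3D simulation. Schematically (array shapes and the random stand-in for the database are invented):

        import numpy as np

        nsrc, ncomp, nt = 1000, 6, 2048        # candidate sources, MT components, samples
        rng = np.random.default_rng(0)
        sgt = rng.standard_normal((nsrc, ncomp, nt))   # stand-in for a precomputed
                                                       # per-station SGT database

        m = np.array([1.0, -0.3, -0.7, 0.2, 0.1, 0.4]) * 1e18  # Mxx..Mxy (N m)

        def synthetics(sgt, m):
            """Station seismograms for all candidate source points at once:
            each trace is a moment-tensor-weighted sum of six stored SGT traces."""
            return np.einsum("sct,c->st", sgt, m)

        syn = synthetics(sgt, m)               # (nsrc, nt): no wave simulation rerun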

  5. Improving the seismic small-scale modelling by comparison with numerical methods

    NASA Astrophysics Data System (ADS)

    Pageot, Damien; Leparoux, Donatienne; Le Feuvre, Mathieu; Durand, Olivier; Côte, Philippe; Capdeville, Yann

    2017-10-01

    Experimental seismic modelling at reduced scale provides an intermediate step between numerical tests and geophysical campaigns on field sites. Recent technologies such as laser interferometers offer the opportunity to acquire data without any coupling effects. This kind of device is used in the Mesures Ultrasonores Sans Contact (MUSC) measurement bench, whose automated support system makes it possible to generate multisource, multireceiver seismic data at laboratory scale. Experimental seismic modelling becomes a valuable stage in validating imaging processes if (1) the experimental measurement chain is perfectly mastered, so that the experimental data can be reproduced exactly with a numerical tool, and (2) the effective source is reproducible across the measurement setup. These two aspects of quantitative validation, for devices with piezoelectric sources and a laser interferometer, have not yet been studied quantitatively in the published literature. As a new stage for the experimental modelling approach, this paper therefore tackles both issues in order to define precisely the quality of the experimental small-scale data provided by the MUSC bench, which are available to the scientific community. The two validation steps are treated independently of any imaging technique, so that geophysicists who wish to use such data (delivered as free data) can know their quality precisely before testing any imaging method. First, to avoid the 2-D/3-D correction usually applied in seismic processing when comparing 2-D numerical data with 3-D experimental measurements, we refined the comparison between numerical and experimental data by generating accurate experimental line sources, removing the need for geometrical spreading corrections of 3-D point-source data. The comparison with 2-D and 3-D numerical modelling is based on the spectral element method. The approach shows the relevance of building a line source by sampling several source points, apart from boundary effects at later arrival times. The experimental results reproduce the amplitude behaviour and the π/4 phase delay of a line source in the same manner as the numerical data. In contrast, the 2-D corrections applied to 3-D data show discrepancies that are larger for experimental data than for numerical data, owing to the source wavelet shape and interference between different arrivals. The experimental results from the proposed approach avoid these discrepancies, especially for the reflected echoes. Concerning the second point, the experimental reproducibility of the source, correlation coefficients are calculated for recordings from repeated source impacts on a homogeneous model. The quality of the results, with coefficients higher than 0.98, allows a mean source wavelet to be calculated by inversion of a mean data set. Results obtained on a more realistic model, simulating clays over limestones, confirm the reproducibility of the source impact.
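
    The line-source construction can be checked numerically: summing arrivals from many point sources distributed along a line reproduces the characteristic 1/sqrt tail (and π/4 phase shift) of the 2-D Green's function, as in this idealized acoustic sketch with arbitrary units.

        import numpy as np

        c, r = 1500.0, 0.2                    # wavespeed (m/s), receiver offset (m)
        ys = np.linspace(-1.0, 1.0, 2001)     # point sources sampling the line
        t = np.linspace(0.0, 1.2e-3, 4000)
        dt = t[1] - t[0]

        trace = np.zeros_like(t)
        for y in ys:                          # superpose 3-D point-source arrivals
            R = np.hypot(r, y)
            k = int(R / c / dt)
            if k < trace.size:
                trace[k] += 1.0 / (4.0 * np.pi * R)

        # after the first arrival at t = r/c the sum decays like
        # 1/sqrt(t**2 - (r/c)**2), the hallmark of a 2-D line source; the finite
        # line length shows up as boundary effects at late times, as in the paper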

  7. A Targeted Search for Point Sources of EeV Photons with the Pierre Auger Observatory

    DOE PAGES

    Aab, A.; Abreu, P.; Aglietta, M.; ...

    2017-03-09

    Simultaneous measurements of air showers with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for EeV photon point sources. Several Galactic and extragalactic candidate objects are grouped in classes to reduce the statistical penalty of many trials from that of a blind search and are analyzed for a significant excess above the background expectation. The presented search does not find any evidence for photon emission at candidate sources, and combined p-values for every class are reported. Particle and energy flux upper limits are given for selected candidate sources. Lastly, these limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region.

  9. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    NASA Astrophysics Data System (ADS)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles and exhibiting intricate surface deformation patterns. The complexity of this event has motivated multidisciplinary geophysical studies of the underlying source physics to better inform earthquake hazard models in the future. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real time while still satisfying the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first compute simple point-source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as on simple finite faults determined from source scaling studies, and validate them against recordings of peak ground acceleration and velocity. Second, we perform a slip inversion based upon the CMT fault orientations and forward model near-field maximum expected tsunami wave heights for comparison against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of the disagreement attributable to local site effects rather than earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT-driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
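
    The point-source magnitude step can be illustrated with a generic PGD scaling law of the form log10(PGD) = A + B*Mw + C*Mw*log10(R); the coefficients below are placeholders rather than the calibrated values used by G-FAST, so only the inversion mechanics are meaningful here.

        import numpy as np

        A, B, C = -5.0, 1.2, -0.2           # placeholder scaling coefficients

        def mw_from_pgd(pgd_cm, R_km):
            """Invert log10(PGD) = A + B*Mw + C*Mw*log10(R) for Mw per station."""
            return (np.log10(pgd_cm) - A) / (B + C * np.log10(R_km))

        pgd = np.array([35.0, 12.0, 6.0])   # hypothetical peak ground displacements
        R = np.array([60.0, 140.0, 250.0])  # hypocentral distances (km)
        print(mw_from_pgd(pgd, R).mean())   # station-averaged magnitude estimate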

  10. Lenstronomy: Multi-purpose gravitational lens modeling software package

    NASA Astrophysics Data System (ADS)

    Birrer, Simon; Amara, Adam

    2018-04-01

    Lenstronomy is a multi-purpose open-source gravitational lens modeling Python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that can be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.

  11. A Test of Maxwell's Z Model Using Inverse Modeling

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Schultz, P. H.; Heineck, T.

    2003-01-01

    In modeling impact craters, a small region of energy and momentum deposition, commonly called a "point source", is often assumed. This assumption implies that an impact is the same as an explosion at some depth below the surface. Maxwell's Z Model, an empirical point-source model derived from explosion cratering, has previously been compared with numerically modeled impact craters at vertical incidence, leading to two main inferences. First, the flow-field center of the Z Model must be placed below the target surface in order to replicate numerical impact craters. Second, for vertical impacts, the flow-field center cannot be stationary if the value of Z is held constant; rather, the flow-field center migrates downward as the crater grows. The work presented here evaluates the utility of the Z Model for reproducing both vertical and oblique experimental impact data obtained at the NASA Ames Vertical Gun Range (AVGR). Specifically, ejection angle data obtained through Three-Dimensional Particle Image Velocimetry (3D PIV) are used to constrain the parameters of Maxwell's Z Model, including the value of Z and the depth and position of the flow-field center, via inverse modeling.
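
    One property that makes the Z Model attractive for such inverse modeling is that, in the idealized case of a flow-field center at the surface, the steady ejection angle depends on Z alone (tan(theta) = Z - 2), so measured 3D PIV ejection angles map directly onto Z estimates. A sketch of that relation only; in the real problem the buried flow-field depth is an additional free parameter left to the optimizer.

        import numpy as np

        def ejection_angle_deg(Z):
            """Ejection angle above the target surface for Maxwell's Z Model
            with a surface flow-field center: tan(theta) = Z - 2."""
            return np.degrees(np.arctan(np.asarray(Z) - 2.0))

        def z_from_angles(theta_deg):
            """Invert measured (e.g., 3D PIV) ejection angles to Z."""
            return 2.0 + np.tan(np.radians(np.asarray(theta_deg)))

        print(ejection_angle_deg([2.5, 3.0, 3.5]))   # ~27, 45, 56 degrees
        print(z_from_angles([40.0, 45.0, 50.0]))     # Z ~ 2.84, 3.00, 3.19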

  12. Body and Surface Wave Modeling of Observed Seismic Events. Part 2.

    DTIC Science & Technology

    1987-05-12

    A method for generating synthetic point-source seismograms for shear dislocation sources using line source (2-D) theory. It is based on expanding the complete three-dimensional solution of the wave equation, expressed in cylindrical coordinates, in an asymptotic form.

  13. Propagation of Solar Energetic Particles in Three-dimensional Interplanetary Magnetic Fields: Radial Dependence of Peak Intensities

    NASA Astrophysics Data System (ADS)

    He, H.-Q.; Zhou, G.; Wan, W.

    2017-06-01

    A functional form I_max(R) = k*R^(-α), where R is the radial distance of a spacecraft, has usually been used to model the radial dependence of peak intensities I_max(R) of solar energetic particles (SEPs). In this work, the five-dimensional Fokker-Planck transport equation incorporating perpendicular diffusion is numerically solved to investigate the radial dependence of SEP peak intensities. We consider two different scenarios for the distribution of a spacecraft fleet: (1) along the radial direction and (2) along the Parker magnetic field line. We find that the index α in the above expression varies over a wide range, depending primarily on the properties (e.g., location and coverage) of the SEP sources and on the longitudinal and latitudinal separations between the sources and the magnetic foot points of the observers. In particular, whether the magnetic foot point of the observer is located inside or outside the SEP source is a crucial factor determining the value of the index α. A two-phase phenomenon is found in the radial dependence of peak intensities, with the position of the break point (transition point/critical point) determined by the magnetic connection status of the observers. This finding suggests that the magnetic connection between the SEP source and each spacecraft should be examined very carefully in observational studies. We obtain a lower limit of R^(-1.7±0.1) for empirically modeling the radial dependence of SEP peak intensities. Our findings can be used to explain the majority of previous multispacecraft survey results, and especially to reconcile the different or conflicting empirical values of the index α in the literature.
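
    Fitting the index α of I_max(R) = k*R^(-α) to peak intensities is a one-line log-log regression; a sketch with made-up fleet data:

        import numpy as np

        R = np.array([0.3, 0.5, 1.0, 1.5, 2.5, 4.0])           # distances (au)
        Imax = np.array([380.0, 150.0, 40.0, 18.0, 6.5, 2.6])  # invented peak fluxes

        slope, intercept = np.polyfit(np.log10(R), np.log10(Imax), 1)
        alpha, k = -slope, 10**intercept
        print(f"I_max(R) = {k:.1f} * R^-{alpha:.2f}")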

  14. INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE

    EPA Science Inventory

    INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
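
    The abstract does not reproduce INPUFF's internals, but the textbook Gaussian puff kernel that such models integrate has the following shape; a minimal sketch assuming equal along-wind and crosswind spreads (sigma_x = sigma_y) and hypothetical dispersion-coefficient values:

    ```python
    import numpy as np

    def puff_concentration(x, y, z, t, Q, u, sig_y, sig_z, H=0.0):
        """Ground-reflected Gaussian puff from an instantaneous point
        release of mass Q (g), advected downwind at speed u (m/s).
        sig_y, sig_z are dispersion coefficients (m) at travel time t (s)."""
        norm = Q / ((2 * np.pi)**1.5 * sig_y**2 * sig_z)
        along = np.exp(-((x - u*t)**2 + y**2) / (2 * sig_y**2))
        vert = np.exp(-(z - H)**2 / (2*sig_z**2)) + np.exp(-(z + H)**2 / (2*sig_z**2))
        return norm * along * vert  # g/m^3

    # 100 g puff, 5 m/s wind, receptor 500 m downwind at ground level
    print(puff_concentration(x=500, y=0, z=0, t=100, Q=100, u=5,
                             sig_y=30.0, sig_z=15.0, H=20.0))
    ```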

  15. Evaluation of a stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Hydrologic models are essential tools for environmental assessment of agricultural non-point source pollution. The automatic calibration of hydrologic models, though efficient, demands significant computational power, which can limit its application. The study objective was to investigate a cost e...

  16. Gravitational lensing of quasars as seen by the Hubble Space Telescope Snapshot Survey

    NASA Technical Reports Server (NTRS)

    Maoz, D.; Bahcall, J. N.; Doxsey, R.; Schneider, D. P.; Bahcall, N. A.; Lahav, O.; Yanny, B.

    1992-01-01

    Results from the ongoing HST Snapshot Survey are presented, with emphasis on 152 high-luminosity, z greater than 1 quasars. One quasar among those observed, 1208 + 1011, is a candidate lens system with subarcsecond image separation. Six other quasars have point sources within 6 arcsec. Ground-based observations of five of these cases show that the companion point sources are foreground Galactic stars. The predicted lensing frequency of the sample is calculated for a variety of cosmological models. The effect of uncertainties in some of the observational parameters upon the predictions is discussed. No correlation of the drift rate with time, right ascension, declination, or point error is found.

  17. Atmospheric scattering of middle uv radiation from an internal source.

    PubMed

    Meier, R R; Lee, J S; Anderson, D E

    1978-10-15

    A Monte Carlo model has been developed which simulates the multiple scattering of middle-UV radiation in the lower atmosphere. The source of radiation is assumed to be monochromatic and located at a point. The physical effects taken into account in the model are Rayleigh and Mie scattering, pure absorption by particulates and trace atmospheric gases, and ground albedo. The model output consists of the multiply scattered radiance as a function of the look angle of a detector located within the atmosphere. Several examples are discussed, and comparisons are made with the direct-source and single-scattered contributions to the signal received by the detector.
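
    A toy analogue of this kind of calculation is sketched below: photons random-walk through a plane-parallel slab with exponential free paths and a single-scatter albedo, and the escaping fraction is tallied. Isotropic scattering replaces the paper's Rayleigh and Mie phase functions, so this is only a structural illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_photon(tau_top, omega=0.9):
        """One photon from a point source at optical depth tau_top/2 in a
        plane-parallel slab; isotropic scattering, single-scatter albedo omega.
        Returns ('escaped' | 'absorbed', number of scatterings)."""
        tau = tau_top / 2.0           # vertical optical depth of the source
        mu = 2 * rng.random() - 1     # cosine of propagation zenith angle
        n_scat = 0
        while True:
            tau += -np.log(rng.random()) * mu   # exponential free path
            if tau < 0 or tau > tau_top:
                return "escaped", n_scat
            if rng.random() > omega:            # interaction: absorbed
                return "absorbed", n_scat
            mu = 2 * rng.random() - 1           # isotropic re-emission
            n_scat += 1

    results = [simulate_photon(tau_top=2.0) for _ in range(20000)]
    esc = sum(1 for fate, _ in results if fate == "escaped")
    print(f"escape fraction: {esc / len(results):.3f}")
    ```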

  18. New constraints on neutron star models of gamma-ray bursts. II - X-ray observations of three gamma-ray burst error boxes

    NASA Technical Reports Server (NTRS)

    Boer, M.; Hurley, K.; Pizzichini, G.; Gottardi, M.

    1991-01-01

    Exosat observations are presented for 3 gamma-ray-burst error boxes, one of which may be associated with an optical flash. No point sources were detected at the 3-sigma level. A comparison with Einstein data (Pizzichini et al., 1986) is made for the March 5b, 1979 source. The data are interpreted in the framework of neutron star models and derive upper limits for the neutron star surface temperatures, accretion rates, and surface densities of an accretion disk. Apart from the March 5b, 1979 source, consistency is found with each model.

  19. Evaluation of Rock Surface Characterization by Means of Temperature Distribution

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.

    2017-12-01

    Rocks come in many different types formed over many years. Close-range photogrammetry is a technique widely used and often preferred over other conventional methods. In this method, overlapping photographs are the basic data source for the point cloud, which in turn is the main data source for a 3D model, offering analysts the possibility of automation. Because of the irregular and complex structure of rocks, representing their surfaces with a large number of points is more effective. Color differences on the rock surfaces, whether caused by weathering or naturally occurring, make it possible to produce a sufficient number of points from the photographs; objects such as small trees, shrubs, and weeds on and around the surface also contribute. These differences and properties are important for the efficient operation of the pixel-matching algorithms that generate an adequate point cloud from photographs. In this study, the possibility of using temperature distribution to interpret the roughness of a rock surface, one of the parameters representing the surface, was investigated. A small rock measuring 3 m x 1 m, located at the ITU Ayazaga Campus, was selected as the study object, and two different methods were used. The first is the production of a choropleth map by interpolation using temperature values at control points marked on the object, which were also used in the 3D model. The 3D object model was created with the help of terrestrial photographs and 12 control points marked on the object and coordinated. Temperature values at the control points were measured with an infrared thermometer and used as the basic data source for creating the choropleth map by interpolation; measured values range from 32 to 37.2 degrees. In the second method, a 3D object model was produced from terrestrial thermal photographs: several photographs were taken with a thermal camera, and a 3D object model showing the temperature distribution was created. The temperature distributions from the two methods are almost identical in position. Areas on the rock surface with roughness values higher than their surroundings can be clearly identified. Evaluating the temperature distributions produced by both methods shows that as the roughness of the surface increases, the temperature increases.
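
    The choropleth step amounts to interpolating a handful of control-point temperatures across the face; the study does not name its interpolator, so inverse-distance weighting is assumed in this minimal sketch with randomly placed hypothetical control points:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # 12 control points on a 3 m x 1 m face: (x, y) in metres, temperature in deg C
    pts = rng.uniform([0, 0], [3, 1], size=(12, 2))
    temps = rng.uniform(32.0, 37.2, size=12)

    def idw(query, pts, vals, power=2.0, eps=1e-12):
        """Inverse-distance-weighted estimate at one query point."""
        d = np.linalg.norm(pts - query, axis=1)
        w = 1.0 / (d**power + eps)
        return np.sum(w * vals) / np.sum(w)

    # Evaluate on a coarse grid to build the temperature map
    grid = [(x, y) for x in np.linspace(0, 3, 7) for y in np.linspace(0, 1, 3)]
    tmap = np.array([idw(np.array(g), pts, temps) for g in grid])
    print(np.round(tmap.reshape(7, 3), 2))
    ```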

  20. Uncertainty analysis of the simulations of effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota

    USGS Publications Warehouse

    Wesolowski, Edwin A.

    1996-01-01

    Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to the headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
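
    For readers unfamiliar with first-order error analysis, the idea is to propagate input variances through local sensitivity coefficients: Var(f) ≈ Σ (∂f/∂x_i)² σ_i². A generic sketch with finite-difference sensitivities and a deliberately simplified, hypothetical dissolved-oxygen expression (not the Enhanced Stream Water Quality Model-Uncertainty Analysis formulation used in the study):

    ```python
    import numpy as np

    def first_order_variance(f, x0, sigmas, eps=1e-6):
        """First-order (delta-method) variance of f(x) at x0 given independent
        input standard deviations; sensitivities by central differences."""
        x0 = np.asarray(x0, float)
        var = 0.0
        for i, s in enumerate(sigmas):
            dx = np.zeros_like(x0)
            dx[i] = eps
            dfdx = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
            var += (dfdx * s)**2
        return var

    # Hypothetical model: downstream DO from headwater DO, deficit and decay
    model = lambda x: x[0] - x[1] * (1 - np.exp(-x[2]))
    var = first_order_variance(model, x0=[8.0, 2.0, 0.5], sigmas=[0.3, 0.2, 0.05])
    print(f"output std dev ~= {np.sqrt(var):.3f} mg/L")
    ```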

  1. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source, multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, the large amount of data required for air quality modeling, including emission sources, air quality monitoring data, meteorological data, and spatial location information, is brought into an integrated modeling environment, allowing more detail of the spatial variation in source distribution and meteorological conditions to be analyzed quantitatively. The developed modeling approach was applied to predict the spatial concentration distributions of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with monitoring data. Good agreement is achieved, demonstrating that the developed approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.

  2. Numerical Simulation of Pollutants' Transport and Fate in AN Unsteady Flow in Lower Bear River, Box Elder County, Utah

    NASA Astrophysics Data System (ADS)

    Salha, A. A.; Stevens, D. K.

    2013-12-01

    This study presents the numerical application and statistical development of Stream Water Quality Modeling (SWQM) as a tool to investigate, manage, and research the transport and fate of water pollutants in the Lower Bear River, Box Elder County, Utah. The segment under study is the Bear River from Cutler Dam to its confluence with the Malad River (Subbasin HUC 16010204). Water quality problems arise primarily from high phosphorus and total suspended sediment concentrations caused by five permitted point source discharges and a complex network of canals and ducts of varying sizes and carrying capacities that transport water (for farming and agricultural uses) from the Bear River and then back to it. The Utah Department of Environmental Quality (DEQ) has designated the entire reach of the Bear River between Cutler Reservoir and the Great Salt Lake as impaired. Stream water quality modeling requires specification of an appropriate model structure and process formulation according to the nature of the study area and the purpose of the investigation. The current model is (i) one-dimensional (1D), (ii) numerical, (iii) unsteady, (iv) mechanistic, (v) dynamic, and (vi) spatially distributed. The basic principle of the study is the use of mass balance equations and numerical methods (a Fickian advection-dispersion approach) to solve the related partial differential equations. Because model error decreases and sensitivity increases as a model becomes more complex, both (i) uncertainty (in parameters, data input, and model structure) and (ii) model complexity are under investigation. Watershed data (water quality parameters together with stream flow, seasonal variations, surrounding landscape, stream temperature, and point/nonpoint sources) were obtained mainly using HydroDesktop, a free and open-source GIS-enabled desktop application to find, download, visualize, and analyze time series of water and climate data registered with the CUAHSI Hydrologic Information System. Processing, validity assessment, and distribution of the time-series data were explored using the GNU R language (a statistical computing and graphics environment). The equations for physical, chemical, and biological processes were written in FORTRAN (High Performance Fortran) to solve the associated hyperbolic and parabolic partial differential equations, and post-analysis of the results was conducted in R. High-performance computing (HPC) will be introduced to expedite complex computations using parallel programming. It is expected that the model will assess nonpoint-source and specific point-source data to understand the causes, transfer, dispersion, and concentration of pollutants at different locations along the Bear River. The impact of reducing or removing non-point nutrient loading on Bear River water quality management could also be investigated. Keywords: computer modeling; numerical solutions; sensitivity analysis; uncertainty analysis; ecosystem processes; high performance computing; water quality.
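
    The Fickian advection-dispersion approach mentioned above reduces, in one dimension, to dC/dt = -u dC/dx + D d²C/dx²; a minimal explicit finite-difference sketch (hypothetical grid and parameter values, not the study's FORTRAN implementation):

    ```python
    import numpy as np

    # Grid and physical parameters (hypothetical values)
    nx, dx, dt = 200, 10.0, 1.0          # cells, m, s
    u, D = 0.5, 5.0                      # advection (m/s), dispersion (m^2/s)
    assert u*dt/dx <= 1 and D*dt/dx**2 <= 0.5   # CFL / diffusion stability

    C = np.zeros(nx)
    C[10] = 100.0                        # instantaneous point release

    for _ in range(500):
        adv = -u * (C[1:-1] - C[:-2]) / dx               # upwind advection (u > 0)
        disp = D * (C[2:] - 2*C[1:-1] + C[:-2]) / dx**2  # central dispersion
        C[1:-1] += dt * (adv + disp)

    print(f"peak concentration {C.max():.3f} at x = {C.argmax()*dx:.0f} m")
    ```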

  3. RESULTS OF PHOTOCHEMICAL SIMULATIONS OF SUBGRID SCALE POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ MODELING SYSTEM

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) / Plume-in-Grid (PinG) model was applied on a domain encompassing the greater Nashville, Tennessee region. Model simulations were performed for selected days in July 1995 during the Southern Oxidant Study (SOS) field study program wh...

  4. Polarization from Thomson scattering of the light of a spherical, limb-darkened star

    NASA Technical Reports Server (NTRS)

    Rudy, R. J.

    1979-01-01

    The polarized flux produced by the Thomson scattering of the light of a spherical, limb-darkened star by optically thin, extrastellar regions of electrons is calculated and contrasted to previous models which treated the star as a point source. The point-source approximation is found to be valid for scattering by particles more than a stellar radius from the surface of the star but is inappropriate for those lying closer. The specific effect of limb darkening on the fractional polarization of the total light of a system is explored. If the principal source of light is the unpolarized flux of the star, the polarization is nearly independent of limb darkening.

  5. Development of Load Duration Curve System in Data Scarce Watersheds Based on a Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    WANG, J.

    2017-12-01

    In stream water quality control, the total maximum daily load (TMDL) program is very effective. However, the load duration curves (LDC) used in TMDL are difficult to establish in data-scarce watersheds, where no hydrological stations or long-term continuous hydrological records are available. Moreover, although point and non-point sources of pollutants can be distinguished easily with the aid of LDC, the curves cannot trace where a pollutant comes from or where it will be transported within the watershed. To identify best management practices (BMPs) for pollutants in a watershed, and to overcome these limitations of LDC, we propose developing LDC based on the distributed hydrological model SWAT for water quality management in data-scarce river basins. In this study, the SWAT model was first established with the scarce hydrological data. Long-term daily flows were then generated with the established SWAT model and rainfall data from the adjacent weather station, and a flow duration curve (FDC) was developed from the generated daily flows. Given the goals of water quality management, LDC for different pollutants can be obtained from the FDC. With the monitored water quality data and the LDC, water quality problems caused by point or non-point source pollutants in different seasons can be ascertained. Finally, the SWAT model was employed again to trace the spatial distribution and origin of the pollutants, that is, the agricultural practices and/or other human activities from which they come. A case study was conducted in the Jianjiang River, a tributary of the Yangtze River, in Duyun City, Guizhou Province. Results indicate that this method can realize water quality management based on TMDL and identify suitable BMPs for reducing pollutants in a watershed.
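
    The FDC-to-LDC step is mechanical once daily flows exist: rank the flows against exceedance probability, then scale by the water-quality criterion. A minimal sketch with synthetic flows standing in for the SWAT output, and a hypothetical criterion value:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    flows = rng.lognormal(mean=2.0, sigma=1.0, size=3650)  # synthetic daily flow, m3/s

    # Flow duration curve: sorted flows vs. exceedance probability
    q_sorted = np.sort(flows)[::-1]
    exceed = np.arange(1, q_sorted.size + 1) / (q_sorted.size + 1)

    # Load duration curve: allowable load = flow * criterion * unit conversion
    criterion = 2.0           # hypothetical water-quality standard, mg/L (= g/m3)
    to_kg_day = 86400 * 1e-3  # (m3/s)*(g/m3) -> kg/day
    ldc = q_sorted * criterion * to_kg_day

    print(f"allowable load at 50% exceedance: {np.interp(0.5, exceed, ldc):.1f} kg/day")
    ```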

  6. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    NASA Astrophysics Data System (ADS)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load balancing, real-time monitoring, and instance cloning. We also briefly describe the progress achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop an API interface to our Enhanced Magnetic Model (EMM).

  7. Coupling Hydrodynamic and Wave Propagation Codes for Modeling of Seismic Waves recorded at the SPE Test.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.

    2016-12-01

    The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. To this end, the SPE program includes a strong modeling effort based on first-principles calculations, with the challenge of capturing both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near-source region, we employ a fully coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of the factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformation around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with mathematical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We present validation tests with Sharpe's model and comparisons of modeled waveforms with Rg waves (2-8 Hz) recorded up to 2 km away for SPE. We show, in particular, the effects of local topography, velocity structure and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to purely elastic models such as Denny & Johnson (1991).

  8. Theory of two-point correlations of jet noise

    NASA Technical Reports Server (NTRS)

    Ribner, H. S.

    1976-01-01

    A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.

  9. A Workflow to Model Microbial Loadings in Watersheds

    EPA Science Inventory

    Many watershed models simulate overland and instream microbial fate and transport, but few actually provide loading rates on land surfaces and point sources to the water body network. This paper describes the underlying general equations for microbial loading rates associated wit...

  10. Source attribution of human campylobacteriosis at the point of exposure by combining comparative exposure assessment and subtype comparison based on comparative genomic fingerprinting.

    PubMed

    Ravel, André; Hurst, Matt; Petrica, Nicoleta; David, Julie; Mutschall, Steven K; Pintar, Katarina; Taboada, Eduardo N; Pollari, Frank

    2017-01-01

    Human campylobacteriosis is a common zoonosis with a significant burden in many countries. Its prevention is difficult because humans can be exposed to Campylobacter through various exposures: foodborne, waterborne or by contact with animals. This study aimed at attributing campylobacteriosis to sources at the point of exposure. It combined comparative exposure assessment and microbial subtype comparison with subtypes defined by comparative genomic fingerprinting (CGF). It used isolates from clinical cases and from eight potential exposure sources (chicken, cattle and pig manure, retail chicken, beef, pork and turkey meat, and surface water) collected within a single sentinel site of an integrated surveillance system for enteric pathogens in Canada. Overall, 1518 non-human isolates and 250 isolates from domestically-acquired human cases were subtyped and their subtype profiles analyzed for source attribution using two attribution models modified to include exposure. Exposure values were obtained from a concurrent comparative exposure assessment study undertaken in the same area. Based on CGF profiles, attribution was possible for 198 (79%) human cases. Both models provide comparable figures: chicken meat was the most important source (65-69% of attributable cases) whereas exposure to cattle (manure) ranked second (14-19% of attributable cases), the other sources being minor (including beef meat). In comparison with other attributions conducted at the point of production, the study highlights the fact that Campylobacter transmission from cattle to humans is rarely meat borne, calling for a closer look at local transmission from cattle to prevent campylobacteriosis, in addition to increasing safety along the chicken supply chain.

  11. Modelling the Arrival of Invasive Organisms via the International Marine Shipping Network: A Khapra Beetle Study

    PubMed Central

    Paini, Dean R.; Yemshanov, Denys

    2012-01-01

    Species can sometimes spread significant distances beyond their natural dispersal ability by anthropogenic means. International shipping routes, and the transport of shipping containers in particular, are a commonly recognised pathway for the introduction of invasive species. Species can gain access to a shipping container and remain inside, hidden and undetected, for long periods. Currently, government biosecurity agencies charged with intercepting and removing these invasive species when they arrive at a country's border assess only the most immediate point of loading in evaluating a shipping container's risk profile. However, an invasive species could have infested a container before this point and travelled undetected before arriving at the border. Assessing arrival risk for an invasive species requires analysing the international shipping network in order to identify the most likely source countries and the domestic ports of entry where the species is likely to arrive. We analysed an international shipping network and generated pathway simulations using a first-order Markov chain model to identify possible source ports and countries for the arrival of Khapra beetle (Trogoderma granarium) to Australia. We found Kaohsiung (Taiwan) and Busan (Republic of Korea) to be the most likely sources for Khapra beetle arrival, while the port of Melbourne was the most likely point of entry to Australia. Sensitivity analysis revealed significant stability in the rankings of foreign and Australian ports. This methodology provides a reliable modelling tool to identify and rank possible sources for an invasive species that could arrive at some time in the future. Such model outputs can be used by biosecurity agencies concerned with inspecting incoming shipping containers and wishing to optimise their inspection protocols. PMID:22970258
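
    A first-order Markov chain over a shipping network is straightforward to simulate from a port-to-port transition matrix; the sketch below uses a hypothetical four-port network and invented probabilities, purely to show the mechanics:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    ports = ["Kaohsiung", "Busan", "Singapore", "Melbourne"]
    # Hypothetical row-stochastic transition matrix P[i, j] = Pr(next port j | at i)
    P = np.array([[0.1, 0.3, 0.3, 0.3],
                  [0.2, 0.1, 0.4, 0.3],
                  [0.3, 0.3, 0.2, 0.2],
                  [0.4, 0.3, 0.3, 0.0]])

    def simulate_path(start, n_steps):
        """One container itinerary as a first-order Markov chain."""
        state, path = start, [start]
        for _ in range(n_steps):
            state = rng.choice(len(ports), p=P[state])
            path.append(state)
        return path

    # Estimate how often containers starting in Kaohsiung reach Melbourne in 3 legs
    hits = sum(3 in simulate_path(0, 3) for _ in range(10000))
    print(f"arrival frequency: {hits / 10000:.3f}")
    ```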

  12. DEVELOPMENT AND VALIDATION OF AN AIR-TO-BEEF FOOD CHAIN MODEL FOR DIOXIN-LIKE COMPOUNDS

    EPA Science Inventory

    A model for predicting concentrations of dioxin-like compounds in beef is developed and tested. The key premise of the model is that concentrations of these compounds in air are the source term, or starting point, for estimating beef concentrations. Vapor-phase concentrations t...

  13. Quantifying the errors due to the superposition of analytical deformation sources

    NASA Astrophysics Data System (ADS)

    Neuberg, J. W.; Pascal, K.

    2012-04-01

    The displacement field due to magma movement in the subsurface is often modelled using a Mogi point source or an Okada dislocation source embedded in a homogeneous elastic half-space. When the magmatic system cannot be modelled by a single source, it is often represented by several sources whose respective deformation fields are superimposed. In such a case, however, the assumption of homogeneity in the half-space is violated and the interaction between sources in an elastic medium is neglected. In this investigation we quantify the effects on the surface deformation field of neglecting the interaction between sources. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or dislocation of the sources and their relative positions. We also investigated three numerical methods for modelling a dike as a tensile dislocation source or as a pressurized tabular crack. We found that the discrepancies between a simple superposition of the displacement fields and a fully interacting numerical solution depend mostly on the source types and on their spacing. The errors induced when neglecting the source interaction are expected to vary greatly with the physical and geometrical parameters of the model. We demonstrated that for certain scenarios these discrepancies can be neglected (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources and by at least 3 radii for juxtaposed Mogi and Okada sources.
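
    The superposition being tested can be written directly from the Mogi point-source solution; a minimal sketch using the common volume-change form of the vertical displacement (hypothetical source parameters, and no attempt at the authors' finite element benchmark):

    ```python
    import numpy as np

    def mogi_uz(x, y, xs, ys, depth, dV, nu=0.25):
        """Vertical surface displacement of a Mogi point source at (xs, ys, depth)
        with volume change dV (m^3), in an elastic half-space (Poisson ratio nu)."""
        r2 = (x - xs)**2 + (y - ys)**2
        return (1 - nu) / np.pi * dV * depth / (r2 + depth**2)**1.5

    x = np.linspace(-5000, 5000, 11)   # surface profile, m
    y = np.zeros_like(x)
    # Superpose two sources, ignoring their mechanical interaction
    uz = mogi_uz(x, y, -1000, 0, 3000, 1e6) + mogi_uz(x, y, 1500, 0, 2000, 5e5)
    print(np.round(uz, 4))  # metres
    ```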

  14. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation (ADE). We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  15. Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.

    PubMed

    Huang, Hong; Zhang, Baifa; Lu, Jun

    2014-01-01

    We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff, and base flow sources by combining a recursive digital filter technique and statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter technique; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; finally, the TN loading from direct runoff and base flow sources could be inversely estimated. As a case study, this approach was adopted to identify the TN source contributions in the Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 tons, and the contributions of point, direct runoff and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The innovation of the approach is that nitrogen from direct runoff and base flow sources can be separately quantified. The approach is simple but detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
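
    The paper does not spell out its filter coefficients, but a widely used one-parameter recursive digital filter for this purpose is the Lyne-Hollick form, shown here as a representative single-pass sketch:

    ```python
    import numpy as np

    def lyne_hollick_baseflow(Q, alpha=0.925):
        """Single-pass Lyne-Hollick filter: separates quickflow f from
        streamflow Q; baseflow = Q - f, constrained to 0 <= f <= Q."""
        f = np.zeros_like(Q, dtype=float)
        for t in range(1, len(Q)):
            f[t] = alpha * f[t-1] + 0.5 * (1 + alpha) * (Q[t] - Q[t-1])
            f[t] = min(max(f[t], 0.0), Q[t])   # keep quickflow physical
        return Q - f

    Q = np.array([5, 5, 20, 45, 30, 18, 12, 9, 7, 6, 5.5, 5.2])  # daily flow
    print(np.round(lyne_hollick_baseflow(Q), 2))
    ```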

  16. DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.

    PubMed

    Chen, Zhuo; Luo, Yi; Mesgarani, Nima

    2017-03-01

    Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation that creates attractor points in a high-dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model differs from prior works in that it implements end-to-end training and does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
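
    The attractor step itself is just a membership-weighted centroid in embedding space, followed by similarity masks; a minimal numpy sketch with random stand-ins for the trained embeddings (training the network is out of scope, and the sigmoid mask is one of several possible choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T_F, K, C = 1000, 20, 2        # time-frequency bins, embedding dim, sources

    V = rng.normal(size=(T_F, K))          # embeddings from the (trained) network
    Y = rng.integers(0, 2, size=(T_F, C))  # ideal binary source membership
    Y[:, 1] = 1 - Y[:, 0]                  # each bin belongs to exactly one source

    # Attractors: centroids of each source's embeddings
    A = (Y.T @ V) / Y.sum(axis=0, keepdims=True).T      # shape (C, K)

    # Masks: similarity of every bin to every attractor
    masks = 1 / (1 + np.exp(-(V @ A.T)))                # sigmoid, shape (T_F, C)
    print(masks.sum(axis=0) / T_F)   # rough per-source mask occupancy
    ```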

  17. Estimation of diffuse and point source microbial pollution in the ribble catchment discharging to bathing waters in the north west of England.

    PubMed

    Wither, A; Greaves, J; Dunhill, I; Wyer, M; Stapleton, C; Kay, D; Humphrey, N; Watkins, J; Francis, C; McDonald, A; Crowther, J

    2005-01-01

    Achieving compliance with the mandatory standards of the 1976 Bathing Water Directive (76/160/EEC) is required at all U.K. identified bathing waters. In recent years, the Fylde coast has been an area of significant investment in 'point source' control, which has not proven, in isolation, to satisfactorily achieve compliance with the mandatory, let alone the guide, levels of water quality in the Directive. The potential impact of riverine sources of pollution was first confirmed by a study in 1997. The completion of sewerage system enhancements offered the potential for the study of faecal indicator delivery from upstream sources comprising both point sources and diffuse agricultural sources. A research project to define these elements commenced in 2001. Initially, a desk study, reported here, estimated the principal infrastructure contributions within the Ribble catchment. A second phase of this investigation has involved the acquisition of empirical water quality and hydrological data from the catchment during the 2002 bathing season. These data have been used further to calibrate the 'budgets' and 'delivery' modelling and are still being analysed. This paper reports the initial desk study approach to faecal indicator budget estimation using available data from the sewerage infrastructure and catchment sources of faecal indicators.

  18. Mono-static GPR without transmitting anything for pavement damage inspection: interferometry by auto-correlation applied to mobile phone signals

    NASA Astrophysics Data System (ADS)

    Feld, R.; Slob, E. C.; Thorbecke, J.

    2015-12-01

    Creating virtual sources at locations where physical receivers have measured a response is known as seismic interferometry. A much appreciated benefit of interferometry is its independence of the actual source locations. The use of ambient noise as the actual source is therefore not uncommon in this field. Ambient noise can be commercial noise, such as mobile phone signals. For GPR this can be useful in cases where it is not possible to place a source, for instance when it is prohibited by laws and regulations. A mono-static GPR antenna can measure ambient noise. Interferometry by auto-correlation (AC) places a virtual source at this antenna's position, without actually transmitting anything. This can be used for pavement damage inspection. Earlier work showed very promising results with 2D numerical models of damaged pavement, comparing 1D and 2D heterogeneities, both modelled in a 2D pavement world. In a 1D heterogeneous model, energy leaks away to the sides, whereas in a 2D heterogeneous model, rays can reflect and therefore still add to the signal reconstruction (see illustration). In the first case the number of stationary points is strictly limited, while in the other case the number of stationary points is very large. We extend these models to a 3D world and optimise an experimental configuration. The illustration originates from the journal article under submission 'Non-destructive pavement damage inspection by mono-static GPR without transmitting anything' by R. Feld, E.C. Slob, and J.W. Thorbecke. (a) 2D heterogeneous pavement model with three irregular-shaped misalignments between the base and subbase layer (marked by arrows). Mono-antenna B-scan positions are shown schematically. (b) Ideal output: a real source at the receiver's position. The difference w.r.t. the trace found in the middle is shown. (c) AC output: a virtual source at the receiver's position. There is a clear overlap with the ideal output.
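
    Interferometry by auto-correlation amounts to correlating the recorded noise trace with itself so that reflection lags emerge as if a source had fired at the receiver; a toy 1-D sketch with a hypothetical two-arrival impulse response:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 4000
    noise = rng.normal(size=n)                     # ambient source signature

    # Hypothetical earth response: direct arrival + one reflector at lag 200 samples
    impulse = np.zeros(300)
    impulse[0], impulse[200] = 1.0, 0.4
    trace = np.convolve(noise, impulse)[:n]        # what the receiver records

    # Auto-correlation: virtual zero-offset response (symmetric in lag)
    ac = np.correlate(trace, trace, mode="full")[n-1:n+299]
    ac /= ac[0]
    print(f"peak lag: {np.argmax(ac[50:]) + 50} samples (expect ~200)")
    ```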

  19. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
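
    The constraint Tikhonov regularization adds can be seen in the generic preferred-value formulation min ||Ax - b||² + λ²||x - x0||²; a minimal sketch (the PEST control variables themselves are not emulated here):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    m, n = 15, 40                      # fewer observations than pilot points
    A = rng.normal(size=(m, n))        # sensitivities of observations to parameters
    x_true = np.ones(n); x_true[10:20] = 3.0
    b = A @ x_true + 0.05 * rng.normal(size=m)

    x0 = np.ones(n)                    # preferred-value regularization target

    def tikhonov_solve(A, b, x0, lam):
        """min ||Ax - b||^2 + lam^2 ||x - x0||^2 via the augmented system."""
        A_aug = np.vstack([A, lam * np.eye(A.shape[1])])
        b_aug = np.concatenate([b, lam * x0])
        return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

    for lam in (0.0, 0.1, 1.0):
        x = tikhonov_solve(A, b, x0, lam)
        print(f"lam={lam}: misfit={np.linalg.norm(A@x-b):.3f}, "
              f"deviation={np.linalg.norm(x-x0):.3f}")
    ```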

  20. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in the ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.

  1. Modeling tidal exchange and dispersion in Boston Harbor

    USGS Publications Warehouse

    Signell, Richard P.; Butman, Bradford

    1992-01-01

    Tidal dispersion and the horizontal exchange of water between Boston Harbor and the surrounding ocean are examined with a high-resolution (200 m) depth-averaged numerical model. The strongly varying bathymetry and coastline geometry of the harbor generate complex spatial patterns in the modeled tidal currents which are verified by shipboard acoustic Doppler surveys. Lagrangian exchange experiments demonstrate that tidal currents rapidly exchange and mix material near the inlets of the harbor due to asymmetry in the ebb/flood response. This tidal mixing zone extends roughly a tidal excursion from the inlets and plays an important role in the overall flushing of the harbor. Because the tides can only efficiently mix material in this limited region, however, harbor flushing must be considered a two step process: rapid exchange in the tidal mixing zone, followed by flushing of the tidal mixing zone by nontidal residual currents. Estimates of embayment flushing based on tidal calculations alone therefore can significantly overestimate the flushing time that would be expected under typical environmental conditions. Particle-release simulations from point sources also demonstrate that while the tides efficiently exchange material in the vicinity of the inlets, the exact nature of dispersion from point sources is extremely sensitive to the timing and location of the release, and the distribution of particles is streaky and patchlike. This suggests that high-resolution modeling of dispersion from point sources in these regions must be performed explicitly and cannot be parameterized as a plume with Gaussian-spreading in a larger scale flow field.

  2. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important for realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated successive-iteration process over ion trajectories and Poisson electric fields. Second, space charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space charge constraints on ion emission from surfaces can be incorporated via Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. Finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
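
    The Child's-law constraint mentioned above caps the space-charge-limited current density that a planar gap can carry; a minimal sketch for Cs+ ions with hypothetical gap and voltage values:

    ```python
    import numpy as np

    EPS0 = 8.854e-12             # vacuum permittivity, F/m
    QE = 1.602e-19               # elementary charge, C
    M_CS = 132.905 * 1.661e-27   # Cs-133 ion mass, kg

    def child_langmuir_j(V, d, q=QE, m=M_CS):
        """Space-charge-limited current density (A/m^2) across a planar
        diode gap d (m) at extraction voltage V (V)."""
        return (4 * EPS0 / 9) * np.sqrt(2 * q / m) * V**1.5 / d**2

    # e.g. 5 kV across a 5 mm emitting-surface-to-extraction gap
    print(f"{child_langmuir_j(5e3, 5e-3):.1f} A/m^2")
    ```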

  3. Study on the quantitative relationship between Agricultural water and fertilization process and non-point source pollution based on field experiments

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, K.; Wu, Z.; Guan, X.

    2017-12-01

    In recent years, as water environment problems have become more prominent and point source pollution has been brought increasingly under control, agricultural non-point source pollution caused by the extensive use of fertilizers and pesticides has attracted growing concern and attention. In order to reveal the quantitative relationship between agricultural water and fertilizer use and non-point source pollution, the relationships between irrigation water, fertilization schemes, and non-point source pollution were analyzed and calculated with a field load intensity index, on the basis of the elm field experiment combined with an agricultural drainage-irrigation model. The results show that the variation of field displacement differs greatly under different irrigation conditions. When the irrigation water increased from 22 cm to 42 cm, an increase of 20 cm, the field displacement increased by 11.92 cm, about 66.22% of the added irrigation water. When the irrigation water then increased from 42 cm to 68 cm, an increase of 26 cm, the field displacement increased by 22.48 cm, accounting for 86.46% of the added irrigation water. There is thus an "inflection point" in the relationship between irrigation water amount and field displacement. The load intensity increases with the increase of irrigation water and shows a significant power-law correlation, though the magnitude of the increase differs across irrigation conditions: when the irrigation water is smaller, the load intensity increases relatively little, and when the irrigation water increases to about 42 cm, the load intensity increases considerably. In addition, there was a positive correlation between fertilization and load intensity, and the load intensity differed markedly among fertilization modes even at the same fertilization level; the per-unit field load intensity from fertilizer increased the most in July. The results provide a basis for the field control and management of agricultural non-point source pollution.

  4. Linear dependence between the wavefront gradient and the masked intensity for the point source with a CCD sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Ma, Liang; Wang, Bin

    2018-01-01

    In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied under complex conditions. The model-based WFSless system has great potential in real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system rests on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed from theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of MDS will deviate from the theoretical value because of detector noise, and this deviation will affect the correction performance. The theoretical results under noise are obtained through derivation, and the linear relation between MSG and MDS under noise is then examined with the imaging model. Results show that the linear relation between MSG and MDS is also maintained well under noise, which provides theoretical support for applications of the model-based WFSless system.

  5. The usability of the optical parametric amplification of light for high-angular-resolution imaging and fast astrometry

    NASA Astrophysics Data System (ADS)

    Kurek, A. R.; Stachowski, A.; Banaszek, K.; Pollo, A.

    2018-05-01

    High-angular-resolution imaging is crucial for many applications in modern astronomy and astrophysics. The fundamental diffraction limit constrains the resolving power of both ground-based and spaceborne telescopes. The recent idea of a quantum telescope based on the optical parametric amplification (OPA) of light aims to bypass this limit for the imaging of extended sources by an order of magnitude or more. We present an updated scheme of an OPA-based device and a more accurate model of the signal amplification by such a device. The semiclassical model that we present predicts that the noise in such a system will form so-called light speckles as a result of light interference in the optical path. Based on this model, we analysed the efficiency of OPA in increasing the angular resolution of the imaging of extended targets and the precise localization of a distant point source. According to our new model, OPA offers a gain in resolved imaging in comparison to classical optics. For a given time-span, we found that OPA can be more efficient in localizing a single distant point source than classical telescopes.

  6. Investigating the effects of methodological expertise and data randomness on the robustness of crowd-sourced SfM terrain models

    NASA Astrophysics Data System (ADS)

    Ratner, Jacqueline; Pyle, David; Mather, Tamsin

    2015-04-01

    Structure-from-motion (SfM) techniques are now widely available to quickly and cheaply generate digital terrain models (DTMs) from optical imagery. Topography can change rapidly during disaster scenarios and change the nature of local hazards, making ground-based SfM a particularly useful tool in hazard studies due to its low cost, accessibility, and potential for immediate deployment. Our study is designed to serve as an analogue to potential real-world use of the SfM method if employed for disaster risk reduction purposes. Experiments at a volcanic crater in Santorini, Greece, used crowd-sourced data collection to demonstrate the impact of user expertise and randomization of SfM data on the resultant DTM. Three groups of participants representing variable expertise levels utilized 16 different camera models, including four camera phones, to collect 1001 total photos in one hour of data collection. Datasets collected by each group were processed using the free and open source software VisualSFM. The point densities and overall quality of the resultant SfM point clouds were compared against each other and also against a LiDAR dataset for reference to the industry standard. Our results show that the point clouds are resilient to changes in user expertise and collection method and are comparable or even preferable in data density to LiDAR. We find that 'crowd-sourced' data collected by a moderately informed general public yields topography results comparable to those produced with data collected by experts. This means that in a real-world scenario involving participants with a diverse range of expertise levels, topography models could be produced from crowd-sourced data quite rapidly and to a very high standard. This could be beneficial to disaster risk reduction as a relatively quick, simple, and low-cost method to attain a rapidly updated knowledge of terrain attributes, useful for the prediction and mitigation of many natural hazards.

  7. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist between different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints, and two rules are defined to calculate the tie point coordinates from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  8. Non-Point Source Pollutant Load Variation in Rapid Urbanization Areas by Remote Sensing, GIS and the L-THIA Model: A Case in Bao'an District, Shenzhen, China.

    PubMed

    Li, Tianhong; Bai, Fengjiao; Han, Peng; Zhang, Yuanyan

    2016-11-01

    Urban sprawl is a major driving force that alters local and regional hydrology and increases non-point source pollution. Using the Bao'an District in Shenzhen, China, a typical rapid urbanization area, as the study area and land-use change maps from 1988 to 2014 that were obtained by remote sensing, the contributions of different land-use types to NPS pollutant production were assessed with a localized long-term hydrologic impact assessment (L-THIA) model. The results show that the non-point source pollution load changed significantly both in terms of magnitude and spatial distribution. The loads of chemical oxygen demand, total suspended substances, total nitrogen and total phosphorus were affected by the interactions between event mean concentration and the magnitude of changes in land-use acreages and the spatial distribution. From 1988 to 2014, the loads of chemical oxygen demand, suspended substances and total phosphorus showed clearly increasing trends with rates of 132.48 %, 32.52 % and 38.76 %, respectively, while the load of total nitrogen decreased by 71.52 %. The immigrant population ratio was selected as an indicator to represent the level of rapid urbanization and industrialization in the study area, and a comparison analysis of the indicator with the four non-point source loads demonstrated that the chemical oxygen demand, total phosphorus and total nitrogen loads are linearly related to the immigrant population ratio. The results provide useful information for environmental improvement and city management in the study area.
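
    L-THIA's rainfall-runoff core is, to our knowledge, the SCS curve-number method; a minimal sketch with illustrative curve numbers (not the study's calibrated land-use values):

    ```python
    def scs_runoff_mm(P, CN):
        """SCS curve-number runoff depth (mm) for rainfall P (mm)."""
        S = 25400.0 / CN - 254.0          # potential maximum retention, mm
        Ia = 0.2 * S                      # initial abstraction
        return 0.0 if P <= Ia else (P - Ia)**2 / (P + 0.8 * S)

    # Same storm over pervious grassland vs. dense urban cover (illustrative CNs)
    for label, cn in [("grassland", 69), ("urban", 92)]:
        print(f"{label}: {scs_runoff_mm(50.0, cn):.1f} mm of runoff")
    ```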

  10. A Model For Rapid Estimation of Economic Loss

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2012-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
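
    The abstract does not spell out the model's functional form, so the sketch below only illustrates the general idea of calibrating a loss model on ground motion plus a socioeconomic measure: a log-linear regression fitted by least squares on synthetic data. The form, covariates and coefficients are assumptions for demonstration.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        pga = rng.uniform(0.05, 1.0, n)        # peak ground acceleration (g)
        gdp_pc = rng.uniform(1e3, 5e4, n)      # GDP per capita, exposure proxy
        # synthetic "observed" losses generated from a known log-linear law
        loss = np.exp(1.5 * np.log(pga) + 0.8 * np.log(gdp_pc)
                      + rng.normal(0.0, 0.3, n))

        # calibrate log(loss) = a + b*log(PGA) + c*log(GDP per capita)
        X = np.column_stack([np.ones(n), np.log(pga), np.log(gdp_pc)])
        coef, *_ = np.linalg.lstsq(X, np.log(loss), rcond=None)
        print("fitted (a, b, c):", np.round(coef, 2))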

  11. MODELING MERCURY FATE IN SEVEN GEORGIA WATERSHEDS

    EPA Science Inventory

    Field and modeling studies were conducted in support of total maximum daily loads (TMDLs)for mercury in six south Georgia rivers and the Savannah River. Mercury is introduced to these rivers primarily by atmospheric deposition, with minor point source loadings. To produce mercu...

  12. Simulation of conservation practices using the APEX model

    USDA-ARS's Scientific Manuscript database

    Information on agricultural Best Management Practices (BMPs) and their effectiveness in controlling agricultural non-point source pollution is crucial in developing Clean Water Act programs such as the Total Maximum Daily Loads for impaired watersheds. A modeling study was conducted to evaluate var...

  13. Multiobjective Sensitivity Analysis Of Sediment And Nitrogen Processes With A Watershed Model

    EPA Science Inventory

    This paper presents a computational analysis for evaluating critical non-point-source sediment and nutrient (specifically nitrogen) processes and management actions at the watershed scale. In the analysis, model parameters that bear key uncertainties were presumed to reflect the ...

  14. A Workflow to Model Microbial Loadings in Watersheds (proceedings)

    EPA Science Inventory

    Many watershed models simulate overland and instream microbial fate and transport, but few actually provide loading rates on land surfaces and point sources to the water body network. This paper describes the underlying general equations for microbial loading rates associated wit...

  15. Watershed Management Tool for Selection and Spatial Allocation of Non-Point Source Pollution Control Practices

    EPA Science Inventory

    Distributed-parameter watershed models are often utilized for evaluating the effectiveness of sediment and nutrient abatement strategies through the traditional {calibrate→ validate→ predict} approach. The applicability of the method is limited due to modeling approximations. In ...

  16. [Regulation framework of watershed landscape pattern for non-point source pollution control based on 'source-sink' theory: A case study in the watershed of Maluan Bay, Xiamen City, China].

    PubMed

    Huang, Ning; Wang, Hong Ying; Lin, Tao; Liu, Qi Ming; Huang, Yun Feng; Li, Jian Xiong

    2016-10-01

    Watershed landscape pattern regulation and optimization based on 'source-sink' theory for non-point source pollution control is a cost-effective measure and is still in the exploratory stage. Taking the whole watershed as the research object, and on the basis of landscape ecology, related theories and existing research results, a regulation framework of watershed landscape pattern for non-point source pollution control was developed at two levels based on 'source-sink' theory in this study: 1) at the watershed level, the reasonable basic combination and spatial pattern of 'source-sink' landscapes were analyzed, and a holistic regulation and optimization method for the landscape pattern was then constructed; 2) at the landscape patch level, key 'source' landscapes were taken as the focus of regulation and optimization. Firstly, four identification criteria for key 'source' landscapes were developed: landscape pollutant loading per unit area, landscape slope, long and narrow transfer 'source' landscapes, and pollutant loading per unit length of 'source' landscape along the riverbank. Secondly, nine types of regulation and optimization methods for different key 'source' landscapes in rural and urban areas were established, according to three regulation and optimization rules: 'sink' landscape inlay, banding 'sink' landscape supplement, and enhancement of the pollutant capacity of the original 'sink' landscape. Finally, the regulation framework was applied to the watershed of Maluan Bay in Xiamen City. A holistic regulation and optimization mode for the watershed landscape pattern of Maluan Bay, and key 'source' landscape regulation and optimization measures for its three zones, were derived based on GIS technology, remote sensing images and a DEM.

  17. Large-Eddy Simulation of Chemically Reactive Pollutant Transport from a Point Source in Urban Area

    NASA Astrophysics Data System (ADS)

    Du, Tangzheng; Liu, Chun-Ho

    2013-04-01

    Most air pollutants are chemically reactive, so using an inert scalar as the tracer in pollutant dispersion modelling would often overlook their impact on urban inhabitants. In this study, large-eddy simulation (LES) is used to examine the plume dispersion of chemically reactive pollutants in a hypothetical atmospheric boundary layer (ABL) in neutral stratification. The irreversible chemistry mechanism of ozone (O3) titration is integrated into the LES model. Nitric oxide (NO) is emitted from an elevated point source in a rectangular spatial domain doped with O3. The LES results compare well with the wind tunnel results available in the literature. Afterwards, the LES model is applied to idealized two-dimensional (2D) street canyons of unity aspect ratio to study the behaviour of chemically reactive plumes over idealized urban roughness. The relations among the various time scales of reaction/turbulence and the associated dimensionless numbers are analysed.
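
    As a concrete instance of the time-scale comparison mentioned in the closing sentence, a Damkohler number can be formed from a turbulence time scale and the NO + O3 titration time scale. The rate constant below is the commonly quoted room-temperature value; the O3 level and turbulence scales are assumptions.

        # Damkohler number: turbulence time scale vs. O3-titration chemistry.
        k_no_o3 = 1.8e-14   # cm^3 molecule^-1 s^-1, NO + O3 -> NO2 + O2 (~298 K)
        o3 = 2.5e12         # molecule cm^-3, roughly 100 ppb near the surface
        tau_chem = 1.0 / (k_no_o3 * o3)    # chemical time scale (s)
        tau_turb = 100.0 / 1.0             # eddy length (m) / velocity scale (m/s)
        print(f"tau_chem = {tau_chem:.0f} s, Da = {tau_turb / tau_chem:.1f}")

    Da >> 1 means the plume reacts before turbulence can mix it; Da << 1 means mixing wins and the inert-scalar picture is a better approximation.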

  18. Seismic Waves, 4th order accurate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-08-16

    SW4 is a program for simulating seismic wave propagation on parallel computers. SW4 solves the seismic wave equations in Cartesian coordinates. It is therefore appropriate for regional simulations, where the curvature of the earth can be neglected. SW4 implements a free surface boundary condition on a realistic topography, absorbing super-grid conditions on the far-field boundaries, and a kinematic source model consisting of point force and/or point moment tensor source terms. SW4 supports a fully 3-D heterogeneous material model that can be specified in several formats. SW4 can output synthetic seismograms in an ASCII text format, or in the SAC binary format. It can also present simulation information as GMT scripts, which can be used to create annotated maps. Furthermore, SW4 can output the solution as well as the material model along 2-D grid planes.

  19. Validating Pseudo-dynamic Source Models against Observed Ground Motion Data at the SCEC Broadband Platform, Ver 16.5

    NASA Astrophysics Data System (ADS)

    Song, S. G.

    2016-12-01

    Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction approaches, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data at the SCEC BBP, Ver 16.5. The validation was performed in two stages. At the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. At the second stage, they were validated against the latest version of empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.

  20. Runoff characteristics and non-point source pollution analysis in the Taihu Lake Basin: a case study of the town of Xueyan, China.

    PubMed

    Zhu, Q D; Sun, J H; Hua, G F; Wang, J H; Wang, H

    2015-10-01

    Non-point source pollution is a significant environmental issue in small watersheds in China. To study the effects of rainfall on pollutants transported by runoff, rainfall was monitored in Xueyan town in the Taihu Lake Basin (TLB) for over 12 consecutive months. The concentrations of different forms of nitrogen (N) and phosphorus (P), and chemical oxygen demand, were monitored in runoff and river water across different land use types. The results indicated that pollutant loads were highly variable. Most N losses due to runoff were found around industrial areas (printing factories), while residential areas exhibited the lowest nitrogen losses through runoff. Nitrate nitrogen (NO3-N) and ammonia nitrogen (NH4-N) were the dominant forms of soluble N around printing factories and hotels, respectively. The levels of N in river water were stable prior to the generation of runoff from a rainfall event, after which they were positively correlated to rainfall intensity. In addition, three sites with different areas were selected for a case study to analyze trends in pollutant levels during two rainfall events, using the AnnAGNPS model. The modeled results generally agreed with the observed data, which suggests that AnnAGNPS can be used successfully for modeling runoff nutrient loading in this region. The conclusions of this study provide important information on controlling non-point source pollution in TLB.

  1. The Investigation of the Impact of SO2 Emissions from the Hong Kong International Airport

    NASA Astrophysics Data System (ADS)

    Gray, J. P.; Lau, A. K.; Yuan, Z.

    2009-12-01

    A previous study of the emissions from Hong Kong’s International Airport (HKIA) utilized a semi-quantitative wind direction and speed technique and identified HKIA as a significant source of SO2 in the region. This study, however, was based on a single data point, and the conclusions reached appeared to be inconsistent with accepted thinking regarding aircraft and airport emissions, prompting an in-depth look at airport emissions and their impact on the neighbouring region. Varied modelling techniques, making use of a more complete dataset, were employed to ensure a more comprehensive and defensible result. A similar analysis technique and the same monitoring station used in the previous study (Tung Chung) were combined with three additional stations to provide the coverage needed to reach more certain conclusions. While results at Tung Chung were similar to those in the previous study, information from the other three sensors pointed to a source further to the north in the direction of the Black Point Coal Power Station and other power plants further to the north in Mainland China. This conclusion was confirmed by use of the CALMET/CALPUFF model to reproduce emission plumes from major sources within the region on problem days. The modelled results clearly showed that, in the cases simulated, pollution events noted at Tung Chung were primarily influenced by emissions originating at Hong Kong’s and Mainland China’s power stations, and that the impact from HKIA was small. This study reiterates the importance of proper identification of all major sources in wind receptor type studies.

  2. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled parameter of released mass does not depend on prior information, which improves the identification efficiency. A hypothetical case study with different numbers of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the number of gauging sections further improves the identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA. Copyright © 2018 Elsevier Ltd. All rights reserved.
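
    The optimization formulation is easy to sketch. Below, the 1-D analytic solution for an instantaneous point release serves as the forward model and scipy's differential evolution searches for (location, release time, mass); the river geometry, transport parameters and "observations" are synthetic assumptions, and the paper's regression-based decoupling of the released mass is omitted.

        import numpy as np
        from scipy.optimize import differential_evolution

        u, D, A = 0.5, 5.0, 20.0   # velocity (m/s), dispersion (m^2/s), area (m^2)

        def conc(x, t, x0, t0, mass):
            """1-D instantaneous point-release solution, g/m^3 (for t > t0)."""
            x, tau = np.asarray(x, float), np.asarray(t, float) - t0
            out = np.zeros_like(tau)
            ok = tau > 0
            out[ok] = (mass / (A * np.sqrt(4 * np.pi * D * tau[ok]))
                       * np.exp(-(x[ok] - x0 - u * tau[ok]) ** 2
                                / (4 * D * tau[ok])))
            return out

        # synthetic observations at one gauging section; true source:
        # x0 = 1000 m, t0 = 600 s, mass = 5e4 g (50 kg)
        xs, ts = np.full(8, 3000.0), np.linspace(2000.0, 9000.0, 8)
        obs = conc(xs, ts, 1000.0, 600.0, 5e4)

        def misfit(p):
            return np.sum((conc(xs, ts, *p) - obs) ** 2)

        res = differential_evolution(misfit, seed=1,
                                     bounds=[(0, 2500), (0, 1800), (1e3, 1e5)])
        print("identified (x0, t0, mass):", np.round(res.x, 1))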

  3. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
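
    A minimal sketch of the core idea, assuming a linear forward (transport) operator: solve min ||Ax - y||^2/2 + lam*||x||_1 with a nonnegativity bound by iterative soft thresholding (ISTA). The paper's dictionary representation and bound handling are richer than this, and the operator and data here are toys.

        import numpy as np

        rng = np.random.default_rng(3)
        m, n = 40, 100                       # observations, emission grid cells
        A = rng.random((m, n))               # toy transport/footprint operator
        x_true = np.zeros(n)
        x_true[[7, 42, 93]] = [5.0, 3.0, 8.0]          # a few point sources
        y = A @ x_true + rng.normal(0.0, 0.05, m)

        lam = 0.5
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(n)
        for _ in range(2000):                # ISTA with an x >= 0 bound
            grad = A.T @ (A @ x - y)
            x = np.maximum(x - (grad + lam) / L, 0.0)   # soft-threshold, clip
        print("recovered support:", np.flatnonzero(x > 0.1))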

  4. A new method of building footprints detection using airborne laser scanning data and multispectral image

    NASA Astrophysics Data System (ADS)

    Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin

    2010-10-01

    This paper presents a new approach for detecting building footprints from a combination of a registered aerial image with multispectral bands and airborne laser scanning data synchronously obtained by a Leica-Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying the candidate points. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into the classes 'ground' and 'building or tree' based on a mathematical morphological filter. Then, the 'ground' points were resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. The candidate points were selected from the 'building or tree' points by height value and an area threshold in the nDSM. The candidate points were further classified into building points and tree points by using the support vector machine (SVM) classification method. Two classification tests were carried out, using features only from laser scanning data and associated features from the two input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB values of points were acquired by matching the laser scanning data and the image using the collinearity equations. The features of training points were presented as input data for the SVM classification method, and cross validation was used to select the best classification parameters. The determinant function could be constructed from the classification parameters, and the class of each candidate point was determined by the determinant function. The results showed that the associated features from the two input data sources were superior to features only from laser scanning data. An accuracy of more than 90% was achieved for buildings with the first kind of features.
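
    The two-step scheme compresses to a few lines: build an nDSM, threshold candidates by height, then classify candidates with an SVM on per-point features. Everything below runs on random stand-in arrays; a real workflow would substitute the filtered laser points and the registered RGB values.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        dsm = rng.random((100, 100)) * 30.0      # last-pulse surface model (m)
        dem = rng.random((100, 100)) * 2.0       # filtered ground model (m)
        ndsm = dsm - dem                         # normalized DSM

        cand = np.argwhere(ndsm > 2.5)           # 'building or tree' candidates

        # per-candidate features: height, height finite difference, R, G, B
        feats = rng.random((len(cand), 5))
        labels = rng.integers(0, 2, len(cand))   # 1 = building, 0 = tree

        clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # tune by cross-validation
        clf.fit(feats, labels)
        print("building points:", int(clf.predict(feats).sum()))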

  5. Evidence for Legacy Contamination of Nitrate in Groundwater of North Carolina Using Monitoring and Private Well Data Models

    NASA Astrophysics Data System (ADS)

    Messier, K. P.; Kane, E.; Bolich, R.; Serre, M. L.

    2014-12-01

    Nitrate (NO3-) is a widespread contaminant of groundwater and surface water across the United States that has deleterious effects on human and ecological health. Legacy contamination, or past releases of NO3-, is thought to be impacting current groundwater and surface water of North Carolina. This study develops a model for predicting point-level groundwater NO3- at a state scale for monitoring wells and private wells of North Carolina. A land use regression (LUR) model selection procedure known as constrained forward nonlinear regression and hyperparameter optimization (CFN-RHO) is developed for determining nonlinear model explanatory variables when they are known to be correlated. Bayesian Maximum Entropy (BME) is then used to integrate the LUR model to create a LUR-BME model of spatially/temporally varying groundwater NO3- concentrations. LUR-BME results in leave-one-out cross-validation r2 values of 0.74 and 0.33 for monitoring and private wells, effectively predicting within spatial covariance ranges. The major finding regarding legacy sources of NO3- in this study is that the LUR-BME models show the geographical extent of low-level contamination of deeper drinking-water aquifers is beyond that of the shallower monitoring wells. Groundwater NO3- in monitoring wells is highly variable, with many areas predicted above the current Environmental Protection Agency standard of 10 mg/L. Contrarily, the private well results depict widespread, low-level NO3- concentrations. This evidence supports that, in addition to downward transport, there is also significant outward transport of groundwater NO3- in the drinking water aquifer to areas outside the range of sources. Results indicate that the deeper aquifers are potentially acting as a reservoir that is not only deeper, but also covers a larger geographical area, than the reservoir formed by the shallow aquifers. Results are of interest to agencies that regulate surface water and drinking water sources impacted by the effects of legacy NO3- sources. Additionally, the results can provide guidance on factors affecting the point-level variability of groundwater NO3- and areas where monitoring is needed to reduce uncertainty. Lastly, LUR-BME predictions can be integrated into surface water models for more accurate management of non-point sources of nitrogen.
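
    CFN-RHO itself is not reproduced here, but the flavor of a land-use-regression build, choosing covariates by cross-validated fit, can be sketched as below. The covariate names and data are hypothetical, and the BME integration step is omitted.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        n = 300
        cov = {"septic_density": rng.random(n),
               "ag_fraction": rng.random(n),
               "wetland_fraction": rng.random(n)}
        no3 = (4.0 * cov["ag_fraction"] + 2.0 * cov["septic_density"]
               + rng.normal(0.0, 0.5, n))            # synthetic well NO3-

        chosen, remaining, best = [], list(cov), -np.inf
        while remaining:                              # greedy forward selection
            score, pick = max(
                (np.mean(cross_val_score(
                    LinearRegression(),
                    np.column_stack([cov[c] for c in chosen + [c_]]), no3, cv=5)),
                 c_) for c_ in remaining)
            if score <= best:
                break
            best, chosen = score, chosen + [pick]
            remaining.remove(pick)
        print("selected:", chosen, "CV R^2 =", round(best, 2))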

  6. Adaptive Neuro-Fuzzy Modeling of UH-60A Pilot Vibration

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Malki, Heidar A.; Langari, Reza

    2003-01-01

    Adaptive neuro-fuzzy relationships have been developed to model the UH-60A Black Hawk pilot floor vertical vibration. A 200 point database that approximates the entire UH-60A helicopter flight envelope is used for training and testing purposes. The NASA/Army Airloads Program flight test database was the source of the 200 point database. The present study is conducted in two parts. The first part involves level flight conditions and the second part involves the entire (200 point) database including maneuver conditions. The results show that a neuro-fuzzy model can successfully predict the pilot vibration. Also, it is found that the training phase of this neuro-fuzzy model takes only two or three iterations to converge for most cases. Thus, the proposed approach produces a potentially viable model for real-time implementation.

  7. Metals Fate And Transport Modelling In Streams And Watersheds: State Of The Science And USEPA Workshop Review

    EPA Science Inventory

    Metals pollution in surface waters from point and non-point sources (NPS) is a widespread problem in the United States and worldwide (Lofts et al., 2007; USEPA, 2007). In the western United States, metals associated with acid mine drainage (AMD) from hardrock mines in mou...

  8. Using Socioeconomic Data to Calibrate Loss Estimates

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2013-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  9. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    NASA Astrophysics Data System (ADS)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecast of volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose a FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observational points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.

  10. X-ray emission from galaxies - The distribution of low-luminosity X-ray sources in the Galactic Centre region

    NASA Astrophysics Data System (ADS)

    Heard, Victoria; Warwick, Robert

    2012-09-01

    We report a study of the extended X-ray emission observed in the Galactic Centre (GC) region based on archival XMM-Newton data. The GC diffuse emission can be decomposed into three distinct components: the emission from low-luminosity point sources; the fluorescence of (and reflection from) dense molecular material; and soft (kT ~1 keV), diffuse thermal plasma emission most likely energised by supernova explosions. Here, we examine the emission due to unresolved point sources. We show that this source component accounts for the bulk of the 6.7-keV and 6.9-keV line emission. We fit the surface brightness distribution evident in these lines with an empirical 2-d model, which we then compare with a prediction derived from a 3-d mass model for the old stellar population in the GC region. We find that the X-ray surface brightness declines more rapidly with angular offset from Sgr A* than the mass-model prediction. One interpretation is that the X-ray luminosity per solar mass characterising the GC source population is increasing towards the GC. Alternatively, some refinement of the mass-distribution within the nuclear stellar disc may be required. The unresolved X-ray source population is most likely dominated by magnetic CVs. We use the X-ray observations to set constraints on the number density of such sources in the GC region. Our analysis does not support the premise that the GC is pervaded by very hot (~ 7.5 keV) thermal plasma, which is truly diffuse in nature.

  11. Capturing interactions between nitrogen and hydrological cycles under historical climate and land use: Susquehanna watershed analysis with the GFDL land model LM3-TAN

    USGS Publications Warehouse

    Lee, M.; Malyshev, S.; Shevliakova, E.; Milly, Paul C. D.; Jaffé, P. R.

    2014-01-01

    We developed a process model LM3-TAN to assess the combined effects of direct human influences and climate change on terrestrial and aquatic nitrogen (TAN) cycling. The model was developed by expanding NOAA's Geophysical Fluid Dynamics Laboratory land model LM3V-N of coupled terrestrial carbon and nitrogen (C-N) cycling and including new N cycling processes and inputs such as a soil denitrification, point N sources to streams (i.e., sewage), and stream transport and microbial processes. Because the model integrates ecological, hydrological, and biogeochemical processes, it captures key controls of the transport and fate of N in the vegetation–soil–river system in a comprehensive and consistent framework which is responsive to climatic variations and land-use changes. We applied the model at 1/8° resolution for a study of the Susquehanna River Basin. We simulated with LM3-TAN stream dissolved organic-N, ammonium-N, and nitrate-N loads throughout the river network, and we evaluated the modeled loads for 1986–2005 using data from 16 monitoring stations as well as a reported budget for the entire basin. By accounting for interannual hydrologic variability, the model was able to capture interannual variations of stream N loadings. While the model was calibrated with the stream N loads only at the last downstream Susquehanna River Basin Commission station Marietta (40°02' N, 76°32' W), it captured the N loads well at multiple locations within the basin with different climate regimes, land-use types, and associated N sources and transformations in the sub-basins. Furthermore, the calculated and previously reported N budgets agreed well at the level of the whole Susquehanna watershed. Here we illustrate how point and non-point N sources contributing to the various ecosystems are stored, lost, and exported via the river. Local analysis of six sub-basins showed combined effects of land use and climate on soil denitrification rates, with the highest rates in the Lower Susquehanna Sub-Basin (extensive agriculture; Atlantic coastal climate) and the lowest rates in the West Branch Susquehanna Sub-Basin (mostly forest; Great Lakes and Midwest climate). In the re-growing secondary forests, most of the N from non-point sources was stored in the vegetation and soil, but in the agricultural lands most N inputs were removed by soil denitrification, indicating that anthropogenic N applications could drive substantial increase of N2O emission, an intermediate of the denitrification process.

  12. Modeling and Implementing a Digitally Embedded Maximum Power Point Tracking Algorithm and a Series-Loaded Resonant DC-DC Converter to Integrate a Photovoltaic Array with a Micro-Grid

    DTIC Science & Technology

    2014-09-01

    These renewable energy sources can include solar, wind, geothermal, biomass, hydroelectric, and nuclear. Of these sources, photovoltaic (PV) arrays ...

  13. IMPLEMENTATION OF THE SMOKE EMISSION DATA PROCESSOR AND SMOKE TOOL INPUT DATA PROCESSOR IN MODELS-3

    EPA Science Inventory

    The U.S. Environmental Protection Agency has implemented Version 1.3 of SMOKE (Sparse Matrix Object Kernel Emission) processor for preparation of area, mobile, point, and biogenic sources emission data within Version 4.1 of the Models-3 air quality modeling framework. The SMOK...

  14. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    NASA Astrophysics Data System (ADS)

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    2016-05-01

    In a previous paper, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10^-26 cm^3/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  15. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    DOE PAGES

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    2016-05-23

    In a previous study, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18–33 GeV and an annihilation cross section on the order of σv ~ 10^-26 cm^3/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  16. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    In a previous study, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18–33 GeV and an annihilation cross section on the order of σv ~ 10^-26 cm^3/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  17. Numerical modeling of subsurface communication

    NASA Astrophysics Data System (ADS)

    Burke, G. J.; Dease, C. G.; Didwall, E. M.; Lytle, R. J.

    1985-02-01

    Techniques are described for numerical modeling of through-the-Earth communication. The basic problem considered is evaluation of the field at a surface or airborne station due to an antenna buried in the Earth. Equations are given for the field of a point source in a homogeneous or stratified earth. These expressions involve infinite integrals over wave number, sometimes known as Sommerfeld integrals. Numerical techniques used for evaluating these integrals are outlined. The problem of determining the current on a real antenna in the Earth, including the effect of insulation, is considered. Results are included for the fields of a point source in homogeneous and stratified earths and the field of a finite insulated dipole. The results are for electromagnetic propagation in the ELF-VLF range, but the codes can also address propagation problems at higher frequencies.
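
    The numerical core, evaluating a semi-infinite wave-number integral, can be illustrated with a kernel that has a closed form, which makes the quadrature checkable; realistic stratified-earth kernels replace the simple exponential with reflection-coefficient terms. This is a sketch, not the report's actual integrand.

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import j0

        rho, z = 30.0, 10.0        # horizontal range and source depth (m)

        # int_0^inf exp(-lam*z) J0(lam*rho) dlam = 1 / sqrt(rho^2 + z^2)
        val, err = quad(lambda lam: np.exp(-lam * z) * j0(lam * rho),
                        0.0, np.inf, limit=400)
        print(val, 1.0 / np.hypot(rho, z))   # quadrature vs. closed form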

  18. DEVELOPING SEASONAL AMMONIA EMISSION ESTIMATES WITH AN INVERSE MODELING TECHNIQUE

    EPA Science Inventory

    Significant uncertainty exists in magnitude and variability of ammonia (NH3) emissions, which are needed for air quality modeling of aerosols and deposition of nitrogen compounds. Approximately 85% of NH3 emissions are estimated to come from agricultural non-point sources. We sus...

  19. Induction heating pure vapor source of high temperature melting point materials on electron cyclotron resonance ion source.

    PubMed

    Kutsumi, Osamu; Kato, Yushi; Matsui, Yuuki; Kitagawa, Atsushi; Muramatsu, Masayuki; Uchida, Takashi; Yoshida, Yoshikazu; Sato, Fuminobu; Iida, Toshiyuki

    2010-02-01

    The multicharged ions that are needed are produced from pure solid materials with high melting points in an electron cyclotron resonance ion source. We develop an evaporator using induction heating (IH) with a multilayer induction coil, which is made from bare molybdenum or tungsten wire without water cooling and surrounds the pure vaporized material. We optimize the shapes of the induction coil and the vaporized materials and the operation of the rf power supply. We conduct experiments to investigate the reproducibility, stability and heating efficiency of the operation. The IH evaporator produces pure material vapor because the materials, directly heated by eddy currents, have no contact with insulating materials, which are usually sources of impurity gas. The power and the frequency of the induction currents range from 100 to 900 W and from 48 to 23 kHz, respectively. The working pressure is about 10^-4-10^-3 Pa. We measure the temperatures of vaporized materials with different shapes, and compare them with the results of modeling. We estimate the efficiency of the IH vapor source. We aim to evaporate materials with melting points higher than that of iron.

  20. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such automatically generated point cloud on a TLS point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point to point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of roughness histograms are calculated as (μ1 = 0.44 m., σ1 = 0.071 m.) and (μ2 = 0.025 m., σ2 = 0.037 m.) for the iPhone and TLS point clouds respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
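
    The reported outlier percentage and mean point-to-point distance correspond to a nearest-neighbour comparison of the registered clouds, which can be sketched as follows; the clouds and the 1 m outlier threshold here are assumptions, not the paper's data.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(6)
        tls = rng.random((50000, 3)) * 10.0            # reference TLS cloud (m)
        phone = tls[:20000] + rng.normal(0.0, 0.1, (20000, 3))  # noisy copy

        dist, _ = cKDTree(tls).query(phone)   # nearest TLS point per iPhone point
        outlier = dist > 1.0                  # assumed rejection threshold
        print(f"outliers: {100.0 * outlier.mean():.2f}%, "
              f"mean distance: {dist[~outlier].mean():.3f} m")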

  1. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    PubMed

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At different time scales, land use types (such as farmland and forest) were always the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm flood processes on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  2. Comparison of hybrid receptor models to locate PCB sources in Chicago

    NASA Astrophysics Data System (ADS)

    Hsu, Ying-Kuang; Holsen, Thomas M.; Hopke, Philip K.

    Results of three hybrid receptor models, potential source contribution function (PSCF), concentration weighted trajectory (CWT), and residence time weighted concentration (RTWC), were compared for locating polychlorinated biphenyl (PCB) sources contributing to the atmospheric concentrations in Chicago. Variations of these models, including PSCF using mean and 75% criterion concentrations, joint probability PSCF (JP-PSCF), changes of point filters and grid cell sizes for RTWC, and PSCF using wind trajectories started at different altitudes, are also discussed. Modeling results were relatively consistent between models. However, no single model provided as complete information as was obtained by using all of them. CWT and 75% PSCF appear able to distinguish between larger sources and moderate ones. RTWC resolved high-potential source areas. RTWC and JP-PSCF, pooling data from all sampling sites, removed the trailing effect often seen in PSCF modeling. PSCF results using the average concentration criterion appear to identify both moderate and major sources. Each model has advantages and disadvantages; however, used in combination, they provide information that is not available if only one of them is used. For short-range atmospheric transport, PSCF results were consistent when using wind trajectories starting at different heights. Based on the archived PCB data, the modeling results indicate there is a large potential source area between Joliet and Kankakee, IL, and two moderate sources to the northwest and south of Chicago. On the south side of Chicago, in the neighborhood of Lake Calumet, several PCB sources were identified. Other unidentified potential source location(s) will require additional upwind/downwind field sampling to verify the modeling results.
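
    Of the three models, PSCF is the simplest to state: for each grid cell, the count of back-trajectory endpoints associated with high-concentration arrivals (m_ij) divided by the count of all endpoints (n_ij). Below is a toy sketch with synthetic endpoints; real use would read HYSPLIT-style trajectory files.

        import numpy as np

        rng = np.random.default_rng(5)
        ntraj, npts = 500, 72
        lon = rng.uniform(-91.0, -85.0, (ntraj, npts))  # endpoint longitudes
        lat = rng.uniform(39.0, 44.0, (ntraj, npts))    # endpoint latitudes
        conc = rng.lognormal(0.0, 1.0, ntraj)           # PCB conc. at arrival
        high = conc > np.percentile(conc, 75)           # the "75% criterion"

        ex = np.arange(-91.0, -84.4, 0.5)               # grid cell edges
        ey = np.arange(39.0, 44.6, 0.5)
        n_ij, _, _ = np.histogram2d(lon.ravel(), lat.ravel(), [ex, ey])
        m_ij, _, _ = np.histogram2d(lon[high].ravel(), lat[high].ravel(),
                                    [ex, ey])
        pscf = np.divide(m_ij, n_ij, out=np.zeros_like(m_ij), where=n_ij > 0)
        print("max PSCF:", pscf.max().round(2))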

  3. STARBLADE: STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission

    NASA Astrophysics Data System (ADS)

    Knollmüller, Jakob; Frank, Philipp; Ensslin, Torsten A.

    2018-05-01

    STARBLADE (STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission) separates superimposed point-like sources from a diffuse background by imposing physically motivated models as prior knowledge. The algorithm can also be used on noisy and convolved data, though performing a proper reconstruction including a deconvolution prior to the application of the algorithm is advised; the algorithm could also be used within a denoising imaging method. STARBLADE learns the correlation structure of the diffuse emission and takes it into account to determine the occurrence and strength of a superimposed point source.

  4. Prediction of phosphorus loads in an artificially drained lowland catchment using a modified SWAT model

    NASA Astrophysics Data System (ADS)

    Bauwe, Andreas; Eckhardt, Kai-Uwe; Lennartz, Bernd

    2017-04-01

    Eutrophication is still one of the main environmental problems in the Baltic Sea. Currently, agricultural diffuse sources constitute the major portion of phosphorus (P) fluxes to the Baltic Sea and have to be reduced to achieve the HELCOM targets and improve the ecological status. Eco-hydrological models are suitable tools to identify sources of nutrients and possible measures aiming at reducing nutrient loads into surface waters. In this study, the Soil and Water Assessment Tool (SWAT) was applied to the Warnow river basin (3300 km²), the second largest watershed in Germany discharging into the Baltic Sea. The Warnow river basin is located in northeastern Germany and characterized by lowlands with a high proportion of artificially drained areas. The aims of this study were (i) to estimate P loadings for individual flow fractions (point sources, surface runoff, tile flow, groundwater flow), spatially distributed at the sub-basin scale, and (ii), since the official version of SWAT does not allow for the modeling of P in tile drains, to test two different approaches to simulating P in tile drains by changing the SWAT source code. The SWAT source code was modified so that (i) the soluble P concentration of the groundwater was transferred to the tile water and (ii) the soluble P in the soil was transferred to the tiles. The SWAT model was first calibrated (2002-2011) and validated (1992-2001) for stream flow at 7 headwater catchments at a daily time scale. Based on this, the stream flow at the outlet of the Warnow river basin was simulated. Performance statistics indicated at least satisfactory model results for each sub-basin. Breaking down the discharge into flow constituents, it becomes visible that stream flow is mainly governed by groundwater and tile flow. Due to the topographic situation with gentle slopes, surface runoff played only a minor role. Results further indicate that the prediction of soluble P loads was improved by the modified SWAT versions. Major sources of P in rivers are groundwater and tile flow. P was also released by surface runoff during large storm events, when sediment was eroded into the rivers. The contributions of point sources, in terms of waste water treatment plants, to the overall P loading were low. The modifications made in the SWAT source code should be considered a starting point for simulating P loads in artificially drained landscapes more precisely. Further testing and development of the code is required.
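
    Conceptually (not in SWAT's Fortran), the two tested code changes amount to two different closures for the soluble P carried by tile flow, roughly as below; all names and values are illustrative assumptions.

        # Option (i): tile water inherits the groundwater soluble P concentration.
        # Option (ii): a fraction of the soil soluble-P pool is flushed to tiles.
        tile_flow_mm = 6.0        # daily tile flow (mm over the response unit)
        gw_p_mgL = 0.08           # groundwater soluble P concentration
        soil_solp_kg_ha = 1.2     # soil soluble-P pool
        flush_frac = 0.005        # assumed daily soil-pool transfer fraction

        p_tile_i = tile_flow_mm * gw_p_mgL / 100.0   # kg/ha (mm * mg/L / 100)
        p_tile_ii = flush_frac * soil_solp_kg_ha     # kg/ha
        print(p_tile_i, p_tile_ii)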

  5. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources for river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately figure out the pollutant amounts and positions whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification; the relative errors are no more than 5%. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which proves the established source identification model can be used to direct emergency responses.
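
    A stripped-down version of the search is sketched below: an elitist clone-and-mutate evolutionary loop standing in for the full BGA (no crossover), run against the same kind of 1-D analytic forward model; the transport constants and "observations" are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        u, D, A, t = 0.4, 3.0, 15.0, 3600.0   # flow, dispersion, area, obs. time

        def c_model(xobs, xsrc, mass):
            """Concentration at xobs from an instantaneous release at xsrc."""
            return (mass / (A * np.sqrt(4.0 * np.pi * D * t))
                    * np.exp(-(xobs - xsrc - u * t) ** 2 / (4.0 * D * t)))

        xobs = np.linspace(1600.0, 2400.0, 6)
        target = c_model(xobs, 500.0, 2.0e4)  # "observed"; true source at 500 m

        def fitness(pop):                     # negative sum-of-squares misfit
            pred = c_model(xobs[None, :], pop[:, :1], pop[:, 1:2])
            return -np.sum((pred - target) ** 2, axis=1)

        pop = np.column_stack([rng.uniform(0.0, 1500.0, 50),
                               rng.uniform(1e3, 1e5, 50)])
        for _ in range(200):
            parents = pop[np.argsort(fitness(pop))][-25:]     # keep the fittest
            kids = parents + rng.normal(0.0, [5.0, 500.0], (25, 2))  # mutate
            pop = np.vstack([parents, kids])                  # elitist update
        print("best (x, mass):", pop[np.argmax(fitness(pop))].round(1))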

  6. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For faults that are not constrained by these data, especially those that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing the number of fault points in poorly sampled areas not only makes fault-model construction more efficient, but also reduces manual intervention. By using fault-based interpolation and remeshing of the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures regardless of whether the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.

  7. High frequency sound propagation in a network of interconnecting streets

    NASA Astrophysics Data System (ADS)

    Hewett, D. P.

    2012-12-01

    We propose a new model for the propagation of acoustic energy from a time-harmonic point source through a network of interconnecting streets in the high frequency regime, in which the wavelength is small compared to typical macro-lengthscales such as street widths/lengths and building heights. Our model, which is based on geometrical acoustics (ray theory), represents the acoustic power flow from the source along any pathway through the network as the integral of a power density over the launch angle of a ray emanating from the source, and takes into account the key phenomena involved in the propagation, namely energy loss by wall absorption, energy redistribution at junctions, and, in 3D, energy loss to the atmosphere. The model predicts strongly anisotropic decay away from the source, with the power flow decaying exponentially in the number of junctions from the source, except along the axial directions of the network, where the decay is algebraic.
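
    The headline behaviour, near-exponential decay of power with the number of junctions away from the source, can be caricatured with a crude geometrical-acoustics integral: per street segment, a ray launched at angle theta to the axis suffers floor(L*tan(theta)/w) wall bounces, and only a fixed fraction continues straight through each junction. This is a back-of-envelope stand-in, not the paper's power-density kernel, and all parameter values are assumptions.

        import numpy as np

        alpha = 0.9            # wall energy reflection coefficient
        trans = 0.3            # assumed straight-through fraction per junction
        L, w = 100.0, 10.0     # street length and width (m)

        theta = np.linspace(1e-3, np.pi / 2 - 1e-3, 2000)  # ray launch angle
        bounces = np.floor(L * np.tan(theta) / w)          # wall hits per street
        kept = alpha ** bounces                            # energy kept per street

        for k in range(1, 6):      # k = number of junctions from the source
            power = np.trapz(kept ** k, theta) / (np.pi / 2) * trans ** (k - 1)
            print(f"{k} junctions: relative power {power:.3e}")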

  8. Column Number Density Expressions Through M = 0 and M = 1 Point Source Plumes Along Any Straight Path

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.

    2016-01-01

    Analytical expressions for column number density (CND) are developed for optical line-of-sight paths through a variety of steady free molecule point source models, including directionally-constrained effusion (Mach number M = 0) and flow from a sonic orifice (M = 1). Sonic orifice solutions are approximate, developed using a fair simulacrum fitted to the free molecule solution. Expressions are also developed for a spherically-symmetric thermal expansion (M = 0). CND solutions are found for the most general paths relative to these sources and briefly explored. It is determined that the maximum CND from a distant location through the directed effusion and sonic orifice cases occurs along the path parallel to the source plane that intersects the plume axis. For the effusive case this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 4/3. For high Mach number cases the maximum CND will be found along the axial centerline path.
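
    The factor-of-two geometry is easy to verify numerically for a point source whose density falls as cos^m(theta)/r^2: m = 1 corresponds to effusion, while a more forward-peaked exponent stands in for the sonic plume (an assumption, not the paper's fitted simulacrum; m = 3 happens to give a ratio of exactly 4/3).

        import numpy as np
        from scipy.integrate import quad

        def cnd_ratio(m, h=1.0):
            """CND along the path parallel to the source plane through the
            axis at height h, over the CND along the axial ray from that
            intersection point to infinity, for density ~ cos^m(theta)/r^2."""
            par, _ = quad(lambda x: (h / np.hypot(x, h)) ** m / (x * x + h * h),
                          -np.inf, np.inf)
            axial, _ = quad(lambda r: 1.0 / r ** 2, h, np.inf)
            return par / axial

        print(cnd_ratio(1))   # effusive cosine plume: exactly 2
        print(cnd_ratio(3))   # cos^3 plume: 4/3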

  9. The Atacama Cosmology Telescope: A Measurement of the 600 < l < 8000 Cosmic Microwave Background Power Spectrum at 148 GHz

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Acquaviva, V.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Battistelli, E. S.; Bond, J. R.; Brown, B.; et al.

    2010-01-01

    We present a measurement of the angular power spectrum of the cosmic microwave background (CMB) radiation observed at 148 GHz. The measurement uses maps with 1.4' angular resolution made with data from the Atacama Cosmology Telescope (ACT). The observations cover 228 deg² of the southern sky, in a 4.2°-wide strip centered on declination 53° south. The CMB at arcminute angular scales is particularly sensitive to the Silk damping scale, to the Sunyaev-Zel'dovich (SZ) effect from galaxy clusters, and to emission by radio sources and dusty galaxies. After masking the 108 brightest point sources in our maps, we estimate the power spectrum between 600 < l < 8000 using the adaptive multi-taper method to minimize spectral leakage and maximize use of the full data set. Our absolute calibration is based on observations of Uranus. To verify the calibration and test the fidelity of our map at large angular scales, we cross-correlate the ACT map to the WMAP map and recover the WMAP power spectrum from 250 < l < 1150. The power beyond the Silk damping tail of the CMB (l ~ 5000) is consistent with models of the emission from point sources. We quantify the contribution of SZ clusters to the power spectrum by fitting to a model normalized to σ8 = 0.8. We constrain the model's amplitude A_SZ < 1.63 (95% CL). If interpreted as a measurement of σ8, this implies σ8^SZ < 0.86 (95% CL) given our SZ model. A fit of ACT and WMAP five-year data jointly to a 6-parameter ΛCDM model plus point sources and the SZ effect is consistent with these results.

  10. A Proposal for a Subcritical Reactivity Meter based on Gandini and Salvatores' point kinetics equations for Multiplying Subcritical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinto, Leticia N.; Dos Santos, Adimir

    2015-07-01

    Multiplying subcritical systems were poorly studied for a long time, and their theoretical description retains many open questions. Great interest in such systems arose partly from the development of hybrid concepts, such as Accelerator-Driven Systems (ADS). Along with the need for new technologies, further study and understanding of subcritical systems is also essential in more practical situations, such as the criticalization of a PWR during its physical startup tests. Point kinetics equations are fundamental to continuously monitor the reactivity response to a possible variation in external source intensity. In this case, quickly and accurately predicting power transients and reactivity becomes crucial. It is known that conventional reactivity meters cannot operate at subcritical levels nor describe the dynamics of multiplying systems in these conditions, owing to the very structure of the classical kinetic equations. Several theoretical models have been proposed to characterize the kinetics of such systems with special regard to reactivity, such as the one developed by Gandini and Salvatores among others. This work presents a discussion of the derivation of point kinetics equations for subcritical systems and the importance of considering the external source. From the point of view of Gandini and Salvatores' point kinetics model, and based on the experimental results provided by Lee and dos Santos, it was possible to develop an innovative approach. This article proposes an algorithm that describes the subcritical reactivity with an external source, contributing to the advancement of studies in the field. (authors)
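
    A minimal sketch of the source-driven point kinetics that motivates such reactivity meters is given below. Note this is the textbook one-delayed-group form with an external source, not the full Gandini-Salvatores formalism, and the parameter values are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical source-driven point kinetics, one effective delayed-neutron
# group (illustrative parameters; not the Gandini-Salvatores formalism).
beta, lam, Lambda = 0.0065, 0.08, 1e-4  # delayed fraction, decay const (1/s), generation time (s)
rho, S = -0.005, 1e5                    # subcritical reactivity; external source strength

def kinetics(t, y):
    n, c = y                                       # neutron and precursor densities
    dn = (rho - beta) / Lambda * n + lam * c + S   # the source term S keeps n finite
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

# Source-driven steady state: rho * n / Lambda + S = 0  =>  n0 = -S * Lambda / rho.
n0 = -S * Lambda / rho
c0 = beta * n0 / (lam * Lambda)
sol = solve_ivp(kinetics, (0.0, 50.0), [n0, c0], method="LSODA",
                t_eval=np.linspace(0.0, 50.0, 11))
print(sol.y[0])   # stationary for constant rho and S, as expected
```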

  11. Approaches to highly parameterized inversion: Pilot-point theory, guidelines, and research directions

    USGS Publications Warehouse

    Doherty, John E.; Fienen, Michael N.; Hunt, Randall J.

    2011-01-01

    Pilot points have been used in geophysics and hydrogeology for at least 30 years as a means to bridge the gap between estimating a parameter value in every cell of a model and subdividing models into a small number of homogeneous zones. Pilot points serve as surrogate parameters at which values are estimated in the inverse-modeling process, and their values are interpolated onto the modeling domain in such a way that heterogeneity can be represented at a much lower computational cost than trying to estimate parameters in every cell of a model. Although the use of pilot points is increasingly common, there are few works documenting the mathematical implications of their use and even fewer sources of guidelines for their implementation in hydrogeologic modeling studies. This report describes the mathematics of pilot-point use, provides guidelines for their use in the parameter-estimation software suite (PEST), and outlines several research directions. Two key attributes for pilot-point definitions are highlighted. First, the difference between the information contained in the every-cell parameter field and the surrogate parameter field created using pilot points should be in the realm of parameters which are not informed by the observed data (the null space). Second, the interpolation scheme for projecting pilot-point values onto model cells ideally should be orthogonal. These attributes are informed by the mathematics and have important ramifications for both the guidelines and suggestions for future research.
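
    The projection step can be illustrated compactly. The sketch below interpolates pilot-point values onto a grid with inverse-distance weighting; PEST workflows typically use kriging instead, so treat the weighting scheme and all values as placeholders. Because the projection is linear in the pilot values, the pilot points act as surrogate parameters in exactly the sense described above.

```python
import numpy as np

# Sketch of the pilot-point projection: values at a handful of pilot
# locations are interpolated onto every model cell (inverse-distance
# weighting here for brevity; PEST typically uses kriging).
def interpolate_pilot_points(cell_xy, pilot_xy, pilot_vals, power=2.0):
    d = np.linalg.norm(cell_xy[:, None, :] - pilot_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)             # guard cells coincident with a pilot point
    w = d ** -power
    w /= w.sum(axis=1, keepdims=True)    # rows sum to 1: a linear projection
    return w @ pilot_vals                # linear in the pilot values

# 100 x 100 cell grid, 8 pilot points carrying log-conductivity values.
gx, gy = np.meshgrid(np.arange(100.0), np.arange(100.0))
cells = np.column_stack([gx.ravel(), gy.ravel()])
rng = np.random.default_rng(0)
pilots = rng.uniform(0.0, 100.0, size=(8, 2))
logk = rng.normal(-4.0, 0.5, size=8)
field = interpolate_pilot_points(cells, pilots, logk).reshape(100, 100)
```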

  12. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundreds of km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to get information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods, and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  13. Source apportion of atmospheric particulate matter: a joint Eulerian/Lagrangian approach.

    PubMed

    Riccio, A; Chianese, E; Agrillo, G; Esposito, C; Ferrara, L; Tirimberio, G

    2014-12-01

    PM2.5 samples were collected during an annual monitoring campaign (January 2012-January 2013) in the urban area of Naples, one of the major cities in Southern Italy. Samples were collected by means of a standard gravimetric sampler (Tecora Echo model) and characterized chemically by ion chromatography. In total, 143 samples together with their ionic composition were collected. We extend traditional source apportionment techniques, usually based on multivariate factor analysis, by interpreting the chemical analysis results within a Lagrangian framework. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model was used, providing linkages to the source regions in the upwind areas. Results were analyzed in order to quantify the relative weight of different source types and areas. Model results suggested that PM concentrations are strongly affected not only by local emissions but also by transboundary emissions, especially from Eastern and Northern European countries and African Saharan dust episodes.
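
    A common way to turn back-trajectories into a source-region map is a potential-source-contribution-function (PSCF) style ratio. The sketch below is a generic version of that idea; it assumes back-trajectory endpoints have already been computed with HYSPLIT, and is an illustration of the trajectory-statistics approach rather than the paper's exact procedure.

```python
import numpy as np

# Generic PSCF-style sketch: the ratio of "polluted-sample" trajectory
# residence counts to all-sample counts in each upwind grid cell.
def pscf(endpoints_lon, endpoints_lat, polluted, lon_edges, lat_edges):
    """endpoints_*: one array of trajectory points per sample;
    polluted: boolean per sample (e.g. PM2.5 above the 75th percentile)."""
    shape = (len(lon_edges) - 1, len(lat_edges) - 1)
    n_all = np.zeros(shape)
    n_pol = np.zeros(shape)
    for lons, lats, is_pol in zip(endpoints_lon, endpoints_lat, polluted):
        h, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
        n_all += h
        if is_pol:
            n_pol += h
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(n_all > 0, n_pol / n_all, np.nan)
```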

  14. Probing the Spatial Distribution of the Interstellar Dust Medium by High Angular Resolution X-ray Halos of Point Sources

    NASA Astrophysics Data System (ADS)

    Xiang, Jingen

    X-rays are absorbed and scattered by dust grains when they travel through the interstellar medium. The scattering within small angles results in an X-ray ``halo''. The halo properties are significantly affected by the energy of the radiation, the optical depth of the scattering, the grain size distributions and compositions, and the spatial distribution of dust along the line of sight (LOS). Analyzing X-ray halo properties is therefore an important tool for studying the size distribution and spatial distribution of interstellar grains, which play a central role in the astrophysical study of the interstellar medium, such as the thermodynamics and chemistry of the gas and the dynamics of star formation. With excellent angular resolution, good energy resolution and a broad energy band, the Chandra ACIS is so far the best instrument for studying X-ray halos. But the direct images of bright sources obtained with ACIS usually suffer from severe pileup, which prevents us from obtaining the halos at small angles. We first improve the method proposed by Yao et al. to resolve the X-ray dust scattering halos of point sources from the zeroth-order data in CC-mode or the first-order data in TE-mode with Chandra HETG/ACIS. Using this method we re-analyze the Cygnus X-1 data observed with Chandra. We then study the X-ray dust scattering halos around 17 bright X-ray point sources using Chandra data. All sources were observed with the HETG/ACIS in CC-mode or TE-mode. Using the WD01 and MRN interstellar grain models to fit the halo profiles, we obtain the hydrogen column densities and the spatial distributions of the scattering dust grains along the lines of sight (LOS) to these sources. We find a good linear correlation not only between the scattering hydrogen column density from the WD01 model and that from the MRN model, but also between N_{H} derived from spectral fits and that derived from the WD01 and MRN grain models (except for GX 301-2 and Vela X-1): N_{H,WD01} = (0.720±0.009) × N_{H,abs} + (0.051±0.013) and N_{H,MRN} = (1.156±0.016) × N_{H,abs} + (0.062±0.024), in units of 10^{22} cm^{-2}. The correlation between FHI and N_{H} is then obtained. Both the WD01 and MRN model fits show that the scattering dust density very close to these sources is much higher than in the normal interstellar medium, which we consider evidence of molecular clouds around these X-ray binaries. We also find a linear correlation between the effective distance through the galactic dust layer and the hydrogen scattering column density N_{H} excluding the range x = 0.99-1.0, but no such correlation between the effective distance and N_{H} in x = 0.99-1.0. This shows that the dust near these X-ray sources is not dust from the galactic disk. We then estimate the structure and density of the stellar wind around the X-ray pulsars Vela X-1 and GX 301-2. Finally, we discuss the possibility of probing the three-dimensional structure of the interstellar medium using the X-ray halos of transient sources, probing the spatial distribution of interstellar dust near point sources, and even the structure of stellar winds using higher angular resolution X-ray dust scattering halos, as well as testing the model that a black hole can form from the direct collapse of a massive star without a supernova using the statistical distribution of the dust density near X-ray binaries.

  15. Cornell Mixing Zone Expert System

    EPA Pesticide Factsheets

    This page provides an overview of the Cornell Mixing Zone Expert System, a water quality modeling and decision support system designed for environmental impact assessment of mixing zones resulting from wastewater discharge from point sources.

  16. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model and which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution, providing a more meaningful outcome analysis. The analytical source model is described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects are also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm² agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLDs). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high-dose, high-gradient, and low-dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a single methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  17. HEAO-1 analysis of Low Energy Detectors (LED)

    NASA Technical Reports Server (NTRS)

    Nousek, John A.

    1992-01-01

    The activities at Penn State University are described. During the period Oct. 1990 to Dec. 1991 work on HEAO-1 analysis of the Low Energy Detectors (LED) concentrated on using the improved detector spectral simulation model and fitting diffuse x-ray background spectral data. Spectral fitting results, x-ray point sources, and diffuse x-ray sources are described.

  18. Mapping Points of Interest: An Analysis of Students' Engagement with Digital Primary Sources

    ERIC Educational Resources Information Center

    Rysavy, Monica D. T.; Michalak, Russell; Hunt, Kevin

    2018-01-01

    The Digital Archival Advertisements Survey Process (DAASP) model is a collaborative active learning exercise designed to aid students in evaluating primary source documents of print-based advertisements. By deploying DAASP, the researchers were able to assess the students' ability to evaluate their biases of the advertisements in a first-year…

  19. Using spatial-stream-network models and long-term data to understand and predict dynamics of faecal contamination in a mixed land-use catchment.

    PubMed

    Neill, Aaron James; Tetzlaff, Doerthe; Strachan, Norval James Colin; Hough, Rupert Lloyd; Avery, Lisa Marie; Watson, Helen; Soulsby, Chris

    2018-01-15

    An 11-year dataset of concentrations of E. coli at 10 spatially-distributed sites in a mixed land-use catchment in NE Scotland (52 km²) revealed that concentrations were not clearly associated with flow or season. The lack of a clear flow-concentration relationship may have been due to greater water fluxes from less-contaminated headwaters during high flows diluting downstream concentrations, the importance of persistent point sources of E. coli, both anthropogenic and agricultural, and possibly the temporal resolution of the dataset. Point sources and year-round grazing of livestock probably obscured clear seasonality in concentrations. Multiple linear regression models identified potential for contamination by anthropogenic point sources as a significant predictor of long-term spatial patterns of low, average and high concentrations of E. coli. Neither arable nor pasture land was significant, even when accounting for hydrological connectivity with a topographic-index method. However, this may have reflected coarse-scale land-cover data inadequately representing "point sources" of agricultural contamination (e.g. direct defecation of livestock into the stream) and temporal changes in the availability of E. coli from diffuse sources. Spatial-stream-network models (SSNMs) were applied in a novel context, and had value in making more robust catchment-scale predictions of concentrations of E. coli with estimates of uncertainty, and in enabling identification of potential "hot spots" of faecal contamination. Successfully managing faecal contamination of surface waters is vital for safeguarding public health. Our finding that concentrations of E. coli could not clearly be associated with flow or season may suggest that management strategies should not necessarily target only high-flow events or summer, when faecal contamination risk is often assumed to be greatest. Furthermore, we identified SSNMs as valuable tools for identifying possible "hot spots" of contamination which could be targeted for management, and for highlighting areas where additional monitoring could help better constrain predictions relating to faecal contamination. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Null-space Monte Carlo particle tracking to assess groundwater PCE (Tetrachloroethene) diffuse pollution in north-eastern Milan functional urban area.

    PubMed

    Alberti, Luca; Colombo, Loris; Formentin, Giovanni

    2018-04-15

    The Lombardy Region in Italy is one of the most urbanized and industrialized areas in Europe. The presence of countless sources of groundwater pollution is therefore a matter of environmental concern. The sources of groundwater contamination can be classified into two categories: 1) Point Sources (PS), which correspond to areas releasing plumes of high concentrations (i.e. hot spots), and 2) Multiple-Point Sources (MPS), consisting of a series of unidentifiable small sources clustered within large areas, generating anthropogenic diffuse contamination. The latter category frequently predominates in European Functional Urban Areas (FUA) and cannot be managed through standard remediation techniques, mainly because detecting the many different source areas releasing small contaminant masses into groundwater is unfeasible. A specific legislative action has recently been enacted at the regional level (DGR IX/3510-2012) in order to identify areas prone to anthropogenic diffuse pollution and their level of contamination. With a view to defining a management plan, it is necessary to find where MPS are most likely located. This paper describes a methodology devised to identify the areas with the highest likelihood of hosting potential MPS. A groundwater flow model was implemented for a pilot area located in the Milan FUA and, through the PEST code, a Null-Space Monte Carlo method was applied in order to generate a suite of several hundred hydraulic conductivity field realizations, each maintaining the model in a calibrated state and each consistent with the modelers' expert knowledge. Thereafter, the MODPATH code was applied to generate back-traced advective flowpaths for each of the models built using the conductivity field realizations. Maps were then created displaying the number of backtracked particles that crossed each model cell in each stochastic calibrated model. The result is considered representative of the FUA's areas with the highest likelihood of hosting MPS responsible for diffuse contamination. Copyright © 2017 Elsevier B.V. All rights reserved.
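
    The particle-tracking post-processing can be caricatured in a few lines: backtrack advective particles through a steady velocity field and count cell crossings. The sketch below uses a placeholder velocity field and simple explicit stepping, standing in for MODFLOW/MODPATH; summing such count maps over all calibrated realizations yields the likelihood map described above.

```python
import numpy as np

# Backtrack advective particles through a steady cell-centred velocity
# field (one field per calibrated realization; placeholder values) and
# count how many particles cross each cell.
def backtrack_counts(vx, vy, starts, dt=1.0, n_steps=500):
    ny, nx = vx.shape
    counts = np.zeros((ny, nx))
    for x, y in starts.astype(float):
        for _ in range(n_steps):
            i = int(np.clip(y, 0, ny - 1))
            j = int(np.clip(x, 0, nx - 1))
            counts[i, j] += 1
            x -= vx[i, j] * dt          # minus sign: backward in time
            y -= vy[i, j] * dt
            if not (0.0 <= x < nx and 0.0 <= y < ny):
                break                   # particle left the model domain
    return counts

# Summing count maps over all stochastic realizations (outer loop not
# shown) gives the MPS-likelihood map described in the abstract.
```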

  1. Mortality rates of pathogen indicator microorganisms discharged from point and non-point sources in an urban area.

    PubMed

    Kim, Geonha; Hur, Jin

    2010-01-01

    This research measured the mortality rates of pathogen indicator microorganisms discharged from various point and non-point sources in an urban area. Water samples were collected from a domestic sewer, a combined sewer overflow, the effluent of a wastewater treatment plant, and an urban river. Mortality rates of indicator microorganisms in the sediment of an urban river were also measured. Mortality rates of indicator microorganisms in domestic sewage, estimated by assuming first-order kinetics at 20 °C, were 0.197 day⁻¹, 0.234 day⁻¹, 0.258 day⁻¹ and 0.276 day⁻¹ for total coliform, fecal coliform, Escherichia coli, and fecal streptococci, respectively. The effects of temperature, sunlight irradiation and settlement on the mortality rate were measured. The results of this research can be used as input data for water quality modeling or as design factors for treatment facilities.
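
    With first-order kinetics, concentrations decay as C(t) = C0 exp(-kt), so the reported rates translate directly into T90 die-off times, as in this short sketch. The temperature correction in the final comment is a common assumption, not a result of this study.

```python
import numpy as np

# First-order die-off: C(t) = C0 * exp(-k * t), so T90 = ln(10) / k.
rates = {"total coliform": 0.197, "fecal coliform": 0.234,
         "E. coli": 0.258, "fecal streptococci": 0.276}   # day^-1, at 20 C
for organism, k in rates.items():
    t90 = np.log(10.0) / k
    print(f"{organism:20s} k = {k:.3f} /day   T90 = {t90:4.1f} days")

# A common temperature correction (an assumption, not from this study):
# k(T) = k20 * theta ** (T - 20), with theta around 1.07 for coliforms.
```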

  2. Finite-Length Line Source Superposition Model (FLLSSM)

    NASA Astrophysics Data System (ADS)

    1980-03-01

    A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high level waste or spent fuel assemblies were represented as finite length line sources in a continuous media. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM which performs required numerical integrations and superposition operations is described.
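
    The superposition idea can be sketched directly: the transient temperature rise at a field point is the integral of constant-rate point-source conduction kernels along each canister axis, summed over all canisters. The sketch below assumes a constant heat rate per unit length and illustrative rock properties; FLLSSM itself also handles time-decaying sources and performs the integrations internally.

```python
import numpy as np
from scipy.special import erfc

# Finite-length line source: integrate the constant-rate point-source
# conduction kernel along the canister axis, then superpose canisters.
def finite_line_source_dT(xp, q_per_m=100.0, k=2.5, alpha=1.0e-6,
                          t=3.15e7, z0=0.0, z1=3.0, n_seg=200):
    """Temperature rise (K) at xp = (x, y, z) from a vertical line source
    spanning z0..z1 after time t (s); k in W/m/K, alpha in m^2/s."""
    z = np.linspace(z0, z1, n_seg)
    r = np.sqrt(xp[0]**2 + xp[1]**2 + (xp[2] - z)**2)
    kern = erfc(r / (2.0 * np.sqrt(alpha * t))) / (4.0 * np.pi * k * r)
    return q_per_m * np.trapz(kern, z)

# Superposition over a representative 2 x 2 canister storage pattern:
canister_xy = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
point = np.array([2.5, 2.5, 1.5])
dT = sum(finite_line_source_dT(point - np.array([cx, cy, 0.0]))
         for cx, cy in canister_xy)
print(dT)
```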

  3. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks which output the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor-Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location and depth, and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
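
    The MDN building block is simple to write down. The sketch below is an untrained, one-dimensional toy: a single hidden layer maps an observation vector to softmax mixture weights, means and log-widths of a Gaussian mixture; all sizes and weights are placeholders. A committee simply averages the member mixtures, which is again a Gaussian mixture.

```python
import numpy as np

# Untrained 1-D toy of an MDN head approximating p(m|d).
rng = np.random.default_rng(1)
n_in, n_hid, n_mix = 12, 32, 5
W1, b1 = rng.normal(0.0, 0.3, (n_hid, n_in)), np.zeros(n_hid)
W2, b2 = rng.normal(0.0, 0.3, (3 * n_mix, n_hid)), np.zeros(3 * n_mix)

def mdn_forward(d):
    h = np.tanh(W1 @ d + b1)
    out = W2 @ h + b2
    a, mu, s = out[:n_mix], out[n_mix:2 * n_mix], out[2 * n_mix:]
    alpha = np.exp(a - a.max()); alpha /= alpha.sum()   # softmax weights
    return alpha, mu, np.exp(s)                         # positive widths

def mixture_pdf(m, alpha, mu, sigma):
    return np.sum(alpha * np.exp(-0.5 * ((m - mu) / sigma) ** 2)
                  / (np.sqrt(2.0 * np.pi) * sigma))

alpha, mu, sigma = mdn_forward(rng.normal(size=n_in))
print(mixture_pdf(0.0, alpha, mu, sigma))
# A 'committee' averages the member mixtures, which is again a Gaussian
# mixture with the union of components at 1/N weight each.
```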

  4. Viscous remanent magnetization model for the Broken Ridge satellite magnetic anomaly

    NASA Technical Reports Server (NTRS)

    Johnson, B. D.

    1985-01-01

    An equivalent source model solution of the satellite magnetic field over Australia obtained by Mayhew et al. (1980) showed that the satellite anomalies could be related to geological features in Australia. When the processing and selection of the Magsat data over the Australian region had progressed to the point where interpretation procedures could be initiated, it was decided to start by attempting to model the Broken Ridge satellite anomaly, which represents one of the very few relatively isolated anomalies in the Magsat maps, with an unambiguous source region. Attention is given to details concerning the Broken Ridge satellite magnetic anomaly, the modeling method used, the Broken Ridge models, modeling results, and characteristics of magnetization.

  5. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  6. Localization from near-source quasi-static electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Mosher, J. C.

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  7. Localization from near-source quasi-static electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, John Compton

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  8. Application of genetic algorithm for the simultaneous identification of atmospheric pollution sources

    NASA Astrophysics Data System (ADS)

    Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.

    2015-08-01

    A computational model is developed for retrieving the positions and the emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of the pollutant concentrations. The approach is based on the minimization of a fitness function using a genetic algorithm paradigm. The model is tested considering both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test-case domain (1000 m × 1000 m × 50 m) and experimental data, such as the Prairie Grass field experiment data, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
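
    A compact single-source version of this scheme is sketched below: a genetic algorithm with selection, blend crossover, mutation and elitism searches (x, y, q) to minimize the squared misfit to a crude ground-level Gaussian plume with wind along +x. The dispersion coefficients and GA settings are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
lo = np.array([0.0, -200.0, 0.1])     # lower bounds on (x, y, q)
hi = np.array([500.0, 200.0, 10.0])   # upper bounds

def plume(src, receptors, u=3.0):
    # Crude ground-level Gaussian plume, wind along +x (assumed coefficients).
    x0, y0, q = src
    dx, dy = receptors[:, 0] - x0, receptors[:, 1] - y0
    conc = np.zeros(len(receptors))
    m = dx > 1.0                                   # downwind receptors only
    sy, sz = 0.22 * dx[m] ** 0.9, 0.20 * dx[m] ** 0.85
    conc[m] = q / (np.pi * u * sy * sz) * np.exp(-0.5 * (dy[m] / sy) ** 2)
    return conc

def ga_identify(receptors, obs, n_pop=60, n_gen=200, mut=0.05):
    pop = rng.uniform(lo, hi, size=(n_pop, 3))
    for _ in range(n_gen):
        err = np.array([np.sum((plume(p, receptors) - obs) ** 2) for p in pop])
        elite = pop[np.argsort(err)[: n_pop // 2]]           # selection
        parents = elite[rng.integers(0, len(elite), (n_pop, 2))]
        w = rng.random((n_pop, 1))
        pop = w * parents[:, 0] + (1.0 - w) * parents[:, 1]  # blend crossover
        pop += rng.normal(0.0, mut, pop.shape) * (hi - lo)   # mutation
        pop = np.clip(pop, lo, hi)
        pop[0] = elite[0]                                    # elitism
    err = np.array([np.sum((plume(p, receptors) - obs) ** 2) for p in pop])
    return pop[np.argmin(err)]

# Synthetic test: recover a source at (120, 30) emitting q = 2.5.
receptors = rng.uniform([0.0, -150.0], [500.0, 150.0], size=(25, 2))
obs = plume(np.array([120.0, 30.0, 2.5]), receptors)
print(ga_identify(receptors, obs))
```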

  9. Efficient inversion of volcano deformation based on finite element models : An application to Kilauea volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Charco, María; González, Pablo J.; Galán del Sastre, Pedro

    2017-04-01

    The Kilauea volcano (Hawaii, USA) is one of the most active volcanoes worldwide and therefore one of the best monitored volcanoes in the world. Its complex system provides a unique opportunity to investigate the dynamics of magma transport and supply. Geodetic techniques, such as Interferometric Synthetic Aperture Radar (InSAR), are being extensively used to monitor ground deformation in volcanic areas. The quantitative interpretation of such surface deformation measurements requires both physical modelling to simulate the observed signals and inversion approaches to estimate the magmatic source parameters. Here, we use synthetic aperture radar data from the Sentinel-1 radar interferometry satellite mission to image volcano deformation sources during the inflation along Kilauea's Southwest Rift Zone in April-May 2015. We propose a Finite Element Model (FEM) for the calculation of Green functions in a mechanically heterogeneous domain. The key aspect of the methodology lies in applying the reciprocity relationship of the Green functions between the station and the source for efficient numerical inversions. The search for the best-fitting magmatic (point) source(s) is generally conducted over an array of 3-D locations extending below a predefined volume region. However, our approach allows us to reduce the total number of Green functions to the number of observation points by using the above-mentioned reciprocity relationship. This new methodology is able to accurately represent magmatic processes using physical models capable of simulating volcano deformation in domains with non-uniform material property distributions, which will eventually lead to a better description of the status of the volcano.

  10. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
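
    For contrast, the classical tracer ratio computation that this method generalizes fits in a few lines: with tracer and target sources collocated, the unknown emission is the known tracer release scaled by the ratio of background-subtracted, plume-integrated concentrations measured along a transect (a generic sketch, not the authors' code).

```python
import numpy as np

# Classical tracer release technique: emission estimate from the ratio of
# plume-integrated, background-subtracted concentrations along a transect.
def tracer_ratio_emission(q_tracer, c_ch4, c_c2h2,
                          bg_ch4=0.0, bg_c2h2=0.0):
    area_ch4 = np.trapz(np.maximum(np.asarray(c_ch4) - bg_ch4, 0.0))
    area_c2h2 = np.trapz(np.maximum(np.asarray(c_c2h2) - bg_c2h2, 0.0))
    return q_tracer * area_ch4 / area_c2h2

# The combined approach replaces this single ratio with a transport model
# whose parameters and error statistics are first tuned on the tracer data.
```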

  11. Solution of the weighted symmetric similarity transformations based on quaternions

    NASA Astrophysics Data System (ADS)

    Mercan, H.; Akyilmaz, O.; Aydin, C.

    2017-12-01

    A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all the variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters (scale, translations, and the quaternion, and hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero (0) values assigned to the z-components of both target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed method and the conventional weighted LS (WLS) method are also presented.
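
    The quaternion parameterization at the heart of the method is easy to sketch: a unit quaternion maps to a rotation matrix without angle ambiguities, and the similarity transformation then acts as t = shift + scale * R * p. The snippet below shows this mapping and the zero-padding trick for 2D problems; the Gauss-Helmert estimation itself is omitted, and all values are illustrative.

```python
import numpy as np

# Unit quaternion q = (w, x, y, z) -> 3-D rotation matrix; the similarity
# transformation is t = shift + scale * R @ p.
def quat_to_rot(q):
    w, x, y, z = q / np.linalg.norm(q)      # enforce unit norm
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def similarity(points, scale, quat, shift):
    return shift + scale * points @ quat_to_rot(quat).T

# 2-D problems are handled, as in the paper, by zero-padding z:
pts2d = np.array([[1.0, 2.0], [3.0, 4.0]])
pts3d = np.column_stack([pts2d, np.zeros(len(pts2d))])
print(similarity(pts3d, 1.001, np.array([0.9998, 0.0, 0.0, 0.02]),
                 np.array([10.0, -5.0, 0.0])))
```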

  12. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    PubMed

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffuse approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established ones is demonstrated by comparison with Monte Carlo simulations over wide ranges of source-detector separation and medium optical properties.
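
    The VS construction can be sketched with the standard semi-infinite diffusion (Farrell-type) reflectance kernel: each virtual source at depth z0 contributes a real term and an extrapolated-boundary image term, and the total reflectance is the weighted sum over virtual sources. The depths and weights below are illustrative placeholders, not the paper's closed-form values.

```python
import numpy as np

# Weighted sum of Farrell-type semi-infinite reflectance kernels, one per
# virtual source (real term plus extrapolated-boundary image term).
mua, musp = 0.1, 10.0                    # absorption, reduced scattering (1/mm)
D = 1.0 / (3.0 * (mua + musp))           # diffusion coefficient
mueff = np.sqrt(mua / D)
zb = 2.0 * D                             # extrapolated-boundary offset (A = 1)

def vs_reflectance(rho, depths, weights):
    R = np.zeros_like(rho)
    for z0, w in zip(depths, weights):
        for zeff in (z0, z0 + 2.0 * zb):             # real + image term
            r = np.sqrt(rho**2 + zeff**2)
            R += w * zeff * (mueff + 1.0 / r) * np.exp(-mueff * r) \
                 / (4.0 * np.pi * r**2)
    return R

rho = np.linspace(0.1, 5.0, 50)          # short source-detector separations (mm)
R_2vs = vs_reflectance(rho, depths=[0.5 / musp, 2.0 / musp], weights=[0.4, 0.6])
```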

  13. [A landscape ecological approach for urban non-point source pollution control].

    PubMed

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has appeared with the accelerating development of urbanization. The particularity of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local repair practices to control the pollutants in surface runoff. Because of the close relationship between urban land-use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, so as to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted fields; and second, adjusting inherent landscape structures and/or adding new landscape factors to form a new landscape pattern, combining landscape planning and management by applying BMPs within the planning to improve urban landscape heterogeneity and to control urban non-point source pollution.

  14. Guided wave radiation from a point source in the proximity of a pipe bend

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brath, A. J.; Nagy, P. B.; Simonetti, F.

    Throughout the oil and gas industry corrosion and erosion damage monitoring play a central role in managing asset integrity. Recently, the use of guided wave technology in conjunction with tomography techniques has provided the possibility of obtaining point-by-point maps of wall thickness loss over the entire volume of a pipeline section between two ring arrays of ultrasonic transducers. However, current research has focused on straight pipes while little work has been done on pipe bends, which are also the most susceptible to developing damage. Tomography of the bend is challenging due to the complexity and computational cost of the 3-D elastic model required to accurately describe guided wave propagation. To overcome this limitation, we introduce a 2-D anisotropic inhomogeneous acoustic model which represents a generalization of the conventional unwrapping used for straight pipes. The shortest-path ray-tracing method is then applied to the 2-D model to compute ray paths and predict the arrival times of the fundamental flexural mode, A0, excited by a point source on the straight section of pipe entering the bend and detected on the opposite side. Good agreement is found between predictions and experiments performed on an 8" diameter (D) pipe with 1.5 D bend radius. The 2-D model also reveals the existence of an acoustic lensing effect which leads to a focusing phenomenon, also confirmed by the experiments. The computational efficiency of the 2-D model makes it ideally suited for tomography algorithms.
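
    Shortest-path ray tracing itself reduces to a small graph computation: Dijkstra's algorithm on a grid whose edge costs are travel times through a slowness field. The sketch below runs on a uniform placeholder slowness map standing in for the unwrapped, anisotropic bend model.

```python
import heapq
import numpy as np

# Dijkstra first-arrival times on a grid; edge costs are travel times
# through a (generally inhomogeneous) slowness field.
def first_arrivals(slowness, src, h=1.0):
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    pq = [(0.0, src)]
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1),
            (-1, -1), (-1, 1), (1, -1), (1, 1)]   # 8-connected stencil
    while pq:
        tt, (i, j) = heapq.heappop(pq)
        if tt > t[i, j]:
            continue                               # stale queue entry
        for di, dj in nbrs:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                d = h * np.hypot(di, dj)           # edge length
                cost = tt + d * 0.5 * (slowness[i, j] + slowness[ni, nj])
                if cost < t[ni, nj]:
                    t[ni, nj] = cost
                    heapq.heappush(pq, (cost, (ni, nj)))
    return t

s = np.full((60, 180), 1.0 / 3.2)   # A0-mode slowness (s/mm), uniform here
arrivals = first_arrivals(s, (30, 0))
```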

  15. An Overview of FlamMap Fire Modeling Capabilities

    Treesearch

    Mark A. Finney

    2006-01-01

    Computerized and manual systems for modeling wildland fire behavior have long been available (Rothermel 1983, Andrews 1986). These systems focus on one-dimensional behaviors and assume the fire geometry is a spreading line-fire (in contrast with point or area-source fires). Models included in these systems were developed to calculate fire spread rate (Rothermel 1972,...

  16. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    PubMed

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant, and compared to approaches currently used in atmospheric transport modelling.
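
    A bottom-up effective-height calculation of the kind described combines the stack height with a plume-rise estimate from the stack parameters. The sketch below uses the widely applied Briggs final-rise formulas for buoyancy-dominated plumes as a stand-in for the paper's exact Gaussian-model equations; all input values are illustrative.

```python
import numpy as np

# Effective emission height = stack height + buoyant plume rise
# (Briggs final-rise formulas for buoyancy-dominated plumes).
def effective_height(h_stack, d_stack, v_exit, T_gas, T_air=288.0, u=5.0):
    g = 9.81
    # Buoyancy flux parameter (m^4/s^3):
    F = g * v_exit * d_stack**2 * (T_gas - T_air) / (4.0 * T_gas)
    if F >= 55.0:
        dh = 38.71 * F**0.6 / u
    else:
        dh = 21.425 * F**0.75 / u
    return h_stack + dh

# e.g. a 150 m stack, 6 m diameter, 20 m/s exit velocity, 420 K flue gas:
print(effective_height(150.0, 6.0, 20.0, 420.0))
```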

  17. HSPF Toolkit: a New Tool for Stormwater Management at the Watershed Scale

    EPA Science Inventory

    The Hydrological Simulation Program - FORTRAN (HSPF) is a comprehensive watershed model endorsed by US EPA for simulating point and nonpoint source pollutants. The model is used for developing total maximum daily load (TMDL) plans for impaired water bodies; as such, HSPF is the c...

  18. A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals

    NASA Technical Reports Server (NTRS)

    Skelton, R. T.; Mahoney, W. A.

    1993-01-01

    We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.

  19. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework

    EPA Science Inventory

    Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...

  20. SIMULATIONS OF AEROSOLS AND PHOTOCHEMICAL SPECIES WITH THE CMAQ PLUME-IN-GRID MODELING SYSTEM

    EPA Science Inventory

    A plume-in-grid (PinG) method has been an integral component of the CMAQ modeling system and has been designed in order to realistically simulate the relevant processes impacting pollutant concentrations in plumes released from major point sources. In particular, considerable di...

  1. Transient pressure analysis of fractured well in bi-zonal gas reservoirs

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo

    2015-05-01

    For a hydraulically fractured well, evaluating the properties of the fracture and the formation is always a tough job, and it is very complex to do so with conventional methods, especially for a partially penetrating fractured well. Although the source function is a very powerful tool for analyzing the transient pressure of wells with complex structures, the corresponding reports on gas reservoirs are rare. In this paper, the continuous point source functions in anisotropic reservoirs are derived on the basis of source function theory, the Laplace transform method and the Duhamel principle. Applying the construction method, the continuous point source functions in a bi-zonal gas reservoir with closed upper and lower boundaries are obtained. Subsequently, physical models and transient pressure solutions are developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted by a numerical inversion algorithm, and the flow periods and sensitive factors are analyzed. The source functions and fractured-well solutions have both theoretical and practical application in well test interpretation for such gas reservoirs, especially for wells with a stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can always be described with the composite model.

  2. Multiscale Spatial Modeling of Human Exposure from Local Sources to Global Intake.

    PubMed

    Wannaz, Cedric; Fantke, Peter; Jolliet, Olivier

    2018-01-16

    Exposure studies, used in human health risk and impact assessments of chemicals, are largely performed locally or regionally. It is usually not known how global impacts resulting from exposure to point source emissions compare to local impacts. To address this problem, we introduce Pangea, an innovative multiscale, spatial multimedia fate and exposure assessment model. We study local to global population exposure associated with emissions from 126 point sources matching locations of waste-to-energy plants across France. Results for three chemicals with distinct physicochemical properties are expressed as the evolution of the population intake fraction through inhalation and ingestion as a function of the distance from sources. For substances with atmospheric half-lives longer than a week, less than 20% of the global population intake through inhalation (median of 126 emission scenarios) can occur within a 100 km radius from the source. This suggests that, by neglecting distant low-level exposure, local assessments might only account for fractions of global cumulative intakes. We also study ∼10 000 emission locations covering France more densely to determine per chemical and exposure route which locations minimize global intakes. Maps of global intake fractions associated with each emission location show clear patterns associated with population and agriculture production densities.
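
    The distance-resolved intake metric can be illustrated with a toy calculation: the cumulative population intake fraction iF = Σ pop·C·BR / Q over concentric rings around a source. All numbers below are placeholders, not Pangea outputs.

```python
import numpy as np

# Cumulative intake fraction through inhalation with distance from a
# point source: iF = sum(pop_i * C_i * BR) / Q over concentric rings.
BR = 13.0   # breathing rate, m^3 per person per day
Q = 1.0     # emission rate, kg per day

r_km = np.array([1.0, 10.0, 100.0, 1000.0, 5000.0])    # outer ring radii
conc = np.array([2e-9, 4e-10, 3e-11, 2e-12, 3e-13])    # kg/m^3 per ring
pop = np.array([5e4, 1e6, 2e7, 5e8, 3e9])              # persons per ring

intake = pop * conc * BR              # kg/day inhaled, ring by ring
print("total iF:", intake.sum() / Q)
share = np.cumsum(intake) / intake.sum()
for r, f in zip(r_km, share):
    print(f"within {r:6.0f} km: {100.0 * f:5.1f} % of the global intake")
```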

  3. Stochastic sensitivity analysis of nitrogen pollution to climate change in a river basin with complex pollution sources.

    PubMed

    Yang, Xiaoying; Tan, Lit; He, Ruimin; Fu, Guangtao; Ye, Jinyin; Liu, Qun; Wang, Guoqing

    2017-12-01

    It is increasingly recognized that climate change could impose both direct and indirect impacts on the quality of the water environment. Previous studies have mostly concentrated on evaluating the impacts of climate change on non-point source pollution in agricultural watersheds. Few studies have assessed the impacts of climate change on the water quality of river basins with complex point and non-point pollution sources. In view of the gap, this paper aims to establish a framework for stochastic assessment of the sensitivity of water quality to future climate change in a river basin with complex pollution sources. A sub-daily soil and water assessment tool (SWAT) model was developed to simulate the discharge, transport, and transformation of nitrogen from multiple point and non-point pollution sources in the upper Huai River basin of China. A weather generator was used to produce 50 years of synthetic daily weather data series for all 25 combinations of precipitation (changes by - 10, 0, 10, 20, and 30%) and temperature change (increases by 0, 1, 2, 3, and 4 °C) scenarios. The generated daily rainfall series was disaggregated into the hourly scale and then used to drive the sub-daily SWAT model to simulate the nitrogen cycle under different climate change scenarios. Our results in the study region have indicated that (1) both total nitrogen (TN) loads and concentrations are insensitive to temperature change; (2) TN loads are highly sensitive to precipitation change, while TN concentrations are moderately sensitive; (3) the impacts of climate change on TN concentrations are more spatiotemporally variable than its impacts on TN loads; and (4) wide distributions of TN loads and TN concentrations under individual climate change scenario illustrate the important role of climatic variability in affecting water quality conditions. In summary, the large variability in SWAT simulation results within and between each climate change scenario highlights the uncertainty of the impacts of climate change and the need to incorporate extreme conditions in managing water environment and developing climate change adaptation and mitigation strategies.

  4. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid medium exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete source based modeling, we have reported on an improved explicit model, referred to as "Virtual Source" (VS) diffuse approximation (DA), to inherit the mathematical simplicity of the DA while considerably extend its validity in modeling the near-field photon migration in low-albedo medium. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the nearfield to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on close-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparing with the Monte Carlo simulations, and further introduced in the image reconstruction of the Laminar Optical Tomography system.

  5. Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog

    NASA Technical Reports Server (NTRS)

    Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.

    1995-01-01

    A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.

  6. DEVELOPMENT OF THE MODEL OF GALACTIC INTERSTELLAR EMISSION FOR STANDARD POINT-SOURCE ANALYSIS OF FERMI LARGE AREA TELESCOPE DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acero, F.; Ballet, J.; Ackermann, M.

    2016-04-01

    Most of the celestial γ rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within ∼4° of the Galactic Center.

  7. Study of a new central compact object: The neutron star in the supernova remnant G15.9+0.2

    NASA Astrophysics Data System (ADS)

    Klochkov, D.; Suleimanov, V.; Sasaki, M.; Santangelo, A.

    2016-08-01

    We present our study of the central point source CXOU J181852.0-150213 in the young Galactic supernova remnant (SNR) G15.9+0.2 based on the recent ~90 ks Chandra observations. The point source was discovered in 2005 in shorter Chandra observations and was hypothesized to be a neutron star associated with the SNR. Our X-ray spectral analysis strongly supports the hypothesis of a thermally emitting neutron star associated with G15.9+0.2. We conclude that the object belongs to the class of young cooling low-magnetized neutron stars referred to as central compact objects (CCOs). We modeled the spectrum of the neutron star with a blackbody spectral function and with our hydrogen and carbon neutron star atmosphere models, assuming that the radiation is uniformly emitted by the entire stellar surface. Under this assumption, only the carbon atmosphere models yield a distance that is compatible with a source located in the Galaxy. In this respect, CXOU J181852.0-150213 is similar to two other well-studied CCOs, the neutron stars in Cas A and in HESS J1731-347, for which carbon atmosphere models were used to reconcile their emission with the known or estimated distances.

  8. Development of the Model of Galactic Interstellar Emission for Standard Point-Source Analysis of Fermi Large Area Telescope Data

    NASA Technical Reports Server (NTRS)

    Acero, F.; Ackermann, M.; Ajello, M.; Albert, A.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Brandt, T. J.

    2016-01-01

    Most of the celestial gamma rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within approximately 4° of the Galactic Center.

  9. Study on gas diffusion emitted from different height of point source.

    PubMed

    Yassin, Mohamed F

    2009-01-01

    The flow and dispersion of stack gas emitted from elevated point sources of different heights around flow obstacles in an urban environment have been investigated using computational fluid dynamics (CFD) models. The results were compared with experimental results obtained in a diffusion wind tunnel under different conditions of thermal stability (stable, neutral, or unstable). The flow and dispersion fields in the boundary layer of an urban environment were examined with different flow obstacles. Gaseous pollutant was discharged into the simulated boundary layer over a flat area. The CFD models used for the simulation were based on the steady-state Reynolds-averaged Navier-Stokes (RANS) equations with kappa-epsilon turbulence closures: the standard kappa-epsilon and RNG kappa-epsilon models. The flow and dispersion data measured in the wind tunnel experiments were compared with the results of the CFD models in order to evaluate the prediction accuracy for pollutant dispersion. The CFD results showed good agreement with the wind tunnel experiments and indicate that the turbulent velocity is reduced by the obstacle models, with the maximum dispersion appearing around the wake region of the obstacles.
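
    For reference, the kappa-epsilon closures named above construct the turbulent (eddy) viscosity from the turbulent kinetic energy k and its dissipation rate epsilon,

        \mu_t = \rho \, C_\mu \, \frac{k^2}{\varepsilon}, \qquad C_\mu \approx 0.09,

    with k and epsilon obtained from their own modeled transport equations; the RNG variant differs mainly in its model constants and in an additional strain-dependent term in the epsilon equation.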

  10. Tsunami Forecasting in the Atlantic Basin

    NASA Astrophysics Data System (ADS)

    Knight, W. R.; Whitmore, P.; Sterling, K.; Hale, D. A.; Bahng, B.

    2012-12-01

    The mission of the West Coast and Alaska Tsunami Warning Center (WCATWC) is to provide advance tsunami warning and guidance to coastal communities within its Area-of-Responsibility (AOR). Predictive tsunami models, based on the shallow water wave equations, are an important part of the Center's guidance support. An Atlantic-based counterpart to the long-standing Pacific forecasting capability known as the Alaska Tsunami Forecast Model (ATFM) has now been developed. The Atlantic forecasting method is based on ATFM version 2, which contains advanced capabilities over the original model, including better handling of the dynamic interactions between grids, inundation over dry land, new forecast model products, an optional non-hydrostatic approach, and the ability to pre-compute larger and more finely gridded regions using parallel computational techniques. The wide and nearly continuous Atlantic shelf region presents a challenge for forecast models. Our solution to this problem has been to develop a single unbroken high resolution sub-mesh (currently 30 arc-seconds), trimmed to the shelf break. This allows for edge wave propagation and for kilometer scale bathymetric feature resolution. Terminating the fine mesh at the 2000 m isobath keeps the number of grid points manageable while allowing for a coarse (4 minute) mesh to adequately resolve deep water tsunami dynamics. Higher resolution sub-meshes are then included around coastal forecast points of interest. The WCATWC Atlantic AOR includes the eastern U.S. and Canada, the U.S. Gulf of Mexico, Puerto Rico, and the Virgin Islands. Puerto Rico and the Virgin Islands are in very close proximity to well-known tsunami sources. Because travel times are under an hour and response must be immediate, our focus is on pre-computing many tsunami source "scenarios" and compiling those results into a database accessible and calibrated with observations during an event. Seismic source evaluation determines the order of model pre-computation, starting with those sources that carry the highest risk. Model computation zones are confined to regions at risk to save computation time. For example, Atlantic sources have been shown to not propagate into the Gulf of Mexico. Therefore, fine grid computations are not performed in the Gulf for Atlantic sources. Outputs from the Atlantic model include forecast marigrams at selected sites, maximum amplitudes, drawdowns, and currents for all coastal points. The maximum amplitude maps will be supplemented with contoured energy flux maps which show more clearly the effects of bathymetric features on tsunami wave propagation. During an event, forecast marigrams will be compared to observations to adjust the model results. The modified forecasts will then be used to set alert levels between coastal breakpoints, and provided to emergency management.
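
    As background, the shallow water wave equations underlying such forecast models can be written, in a minimal one-dimensional sketch that omits friction, Coriolis, and the non-hydrostatic terms mentioned above, as

        \frac{\partial \eta}{\partial t} + \frac{\partial}{\partial x}\left[(h+\eta)\,u\right] = 0,
        \qquad
        \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial \eta}{\partial x} = 0,

    where \eta is the sea-surface displacement, h the undisturbed depth, u the depth-averaged velocity, and g the gravitational acceleration. The wave speed \sqrt{g(h+\eta)} is why a coarse mesh is adequate in deep water while the shelf demands the fine 30 arc-second sub-mesh.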

  11. A multi-model approach to monitor emissions of CO2 and CO from an urban-industrial complex

    NASA Astrophysics Data System (ADS)

    Super, Ingrid; Denier van der Gon, Hugo A. C.; van der Molen, Michiel K.; Sterk, Hendrika A. M.; Hensen, Arjan; Peters, Wouter

    2017-11-01

    Monitoring urban-industrial emissions is often challenging because observations are scarce and regional atmospheric transport models are too coarse to represent the high spatiotemporal variability in the resulting concentrations. In this paper we apply a new combination of an Eulerian model (Weather Research and Forecast, WRF, with chemistry) and a Gaussian plume model (Operational Priority Substances - OPS). The modelled mixing ratios are compared to observed CO2 and CO mole fractions at four sites along a transect from an urban-industrial complex (Rotterdam, the Netherlands) towards rural conditions for October-December 2014. Urban plumes are well-mixed at our semi-urban location, making this location suited for an integrated emission estimate over the whole study area. The signals at our urban measurement site (with average enhancements of 11 ppm CO2 and 40 ppb CO over the baseline) are highly variable due to the presence of distinct source areas dominated by road traffic/residential heating emissions or industrial activities. This causes different emission signatures that are translated into a large variability in observed ΔCO : ΔCO2 ratios, which can be used to identify dominant source types. We find that WRF-Chem is able to represent synoptic variability in CO2 and CO (e.g. the median CO2 mixing ratio is 9.7 ppm, observed, against 8.8 ppm, modelled), but it fails to reproduce the hourly variability of daytime urban plumes at the urban site (R2 up to 0.05). For the urban site, adding a plume model to the model framework is beneficial to adequately represent plume transport especially from stack emissions. The explained variance in hourly, daytime CO2 enhancements from point source emissions increases from 30 % with WRF-Chem to 52 % with WRF-Chem in combination with the most detailed OPS simulation. The simulated variability in ΔCO :  ΔCO2 ratios decreases drastically from 1.5 to 0.6 ppb ppm-1, which agrees better with the observed standard deviation of 0.4 ppb ppm-1. This is partly due to improved wind fields (increase in R2 of 0.10) but also due to improved point source representation (increase in R2 of 0.05) and dilution (increase in R2 of 0.07). Based on our analysis we conclude that a plume model with detailed and accurate dispersion parameters adds substantially to top-down monitoring of greenhouse gas emissions in urban environments with large point source contributions within a ˜ 10 km radius from the observation sites.
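
    A minimal sketch of the textbook Gaussian plume formula on which OPS-type models are built; the power-law dispersion parameters (a_y, a_z) below are illustrative assumptions, not the OPS parameterization.

        import numpy as np

        def gaussian_plume(q, u, x, y, z, h_eff, a_y=0.08, a_z=0.06):
            """Steady-state concentration (g/m^3) at (x, y, z) downwind of a
            point source of strength q (g/s) in a mean wind u (m/s), with
            effective release height h_eff (m)."""
            sigma_y = a_y * x          # illustrative dispersion growth laws
            sigma_z = a_z * x
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            # Ground reflection is modeled by an image source at -h_eff.
            vertical = (np.exp(-(z - h_eff)**2 / (2.0 * sigma_z**2)) +
                        np.exp(-(z + h_eff)**2 / (2.0 * sigma_z**2)))
            return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Ground-level concentration 2 km downwind of a 50 m stack emitting 100 g/s:
        c = gaussian_plume(q=100.0, u=5.0, x=2000.0, y=0.0, z=0.0, h_eff=50.0)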

  12. Numerical modeling of subsurface communication, revision 1

    NASA Astrophysics Data System (ADS)

    Burke, G. J.; Dease, C. G.; Didwall, E. M.; Lytle, R. J.

    1985-08-01

    Techniques are described for numerical modeling of through-the-Earth communication. The basic problem considered is evaluation of the field at a surface or airborne station due to an antenna buried in the earth. Equations are given for the field of a point source in a homogeneous or stratified Earth. These expressions involve infinite integrals over wave number, sometimes known as Sommerfeld integrals. Numerical techniques used for evaluating these integrals are outlined. The problem of determining the current on a real antenna in the Earth, including the effect of insulation, is considered. Results are included for the fields of a point source in homogeneous and stratified earths and the field of a finite insulated dipole. The results are for electromagnetic propagation in the ELF-VLF range, but the codes also can address propagation problems at higher frequencies.
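
    The infinite wave-number integrals referred to above typically take the generic Sommerfeld form

        \int_0^{\infty} F(\lambda)\, e^{-\gamma |z|}\, J_0(\lambda \rho)\, \lambda \,\mathrm{d}\lambda,
        \qquad \gamma = \sqrt{\lambda^2 - k^2},

    where \rho is the horizontal range, J_0 the zeroth-order Bessel function, k the wavenumber of the medium, and F(\lambda) carries the source strength and, for a stratified Earth, the layer reflection coefficients. The oscillatory, slowly decaying integrand is what makes the numerical evaluation outlined in the report delicate.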

  13. AN IMAGE-PLANE ALGORITHM FOR JWST'S NON-REDUNDANT APERTURE MASK DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenbaum, Alexandra Z.; Pueyo, Laurent; Sivaramakrishnan, Anand

    2015-01-10

    The high angular resolution technique of non-redundant masking (NRM) or aperture masking interferometry (AMI) has yielded images of faint protoplanetary companions of nearby stars from the ground. AMI on James Webb Space Telescope (JWST)'s Near Infrared Imager and Slitless Spectrograph (NIRISS) has a lower thermal background than ground-based facilities and does not suffer from atmospheric instability. NIRISS AMI images are likely to have 90%-95% Strehl ratio between 2.77 and 4.8 μm. In this paper we quantify factors that limit the raw point source contrast of JWST NRM. We develop an analytic model of the NRM point spread function which includes different optical path delays (pistons) between mask holes and fit the model parameters with image plane data. It enables a straightforward way to exclude bad pixels, is suited to limited fields of view, and can incorporate effects such as intra-pixel sensitivity variations. We simulate various sources of noise to estimate their effect on the standard deviation of closure phase, σ_CP (a proxy for binary point source contrast). If σ_CP < 10^-4 radians (a contrast ratio of 10 mag), young accreting gas giant planets (e.g., in the nearby Taurus star-forming region) could be imaged with JWST NIRISS. We show the feasibility of using NIRISS' NRM with the sub-Nyquist sampled F277W, which would enable some exoplanet chemistry characterization. In the presence of small piston errors, the dominant sources of closure phase error (depending on pixel sampling and filter bandwidth) are flat field errors and unmodeled variations in intra-pixel sensitivity. The in-flight stability of NIRISS will determine how well these errors can be calibrated by observing a point source. Our results help develop efficient observing strategies for space-based NRM.
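
    A minimal sketch of the closure-phase construction behind σ_CP, assuming only the fringe phases of the three baselines of a closed triangle of mask holes; hole-dependent piston errors enter baseline phases as differences and cancel in the sum, which is why closure phase is a robust contrast proxy. The names are illustrative, not the NIRISS pipeline API.

        import numpy as np

        def closure_phase(phi_12, phi_23, phi_31):
            """Closure phase (radians), wrapped to (-pi, pi]."""
            return np.angle(np.exp(1j * (phi_12 + phi_23 + phi_31)))

        # Pistons p_i contribute (p_i - p_j) to each baseline phase and cancel:
        rng = np.random.default_rng(1)
        p1, p2, p3 = rng.normal(0.0, 0.3, 3)        # per-hole pistons (rad)
        s12, s23, s31 = 0.4, -0.1, -0.3             # source phases (sum = 0)
        cp = closure_phase(s12 + (p1 - p2), s23 + (p2 - p3), s31 + (p3 - p1))
        # cp recovers the piston-free sum (0.0 here) to numerical precision.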

  14. Overview of the SHARP campaign: Motivation, design, and major outcomes

    NASA Astrophysics Data System (ADS)

    Olaguer, Eduardo P.; Kolb, Charles E.; Lefer, Barry; Rappenglück, Bernhard; Zhang, Renyi; Pinto, Joseph P.

    2014-03-01

    The Study of Houston Atmospheric Radical Precursors (SHARP) was a field campaign developed by the Houston Advanced Research Center on behalf of the Texas Environmental Research Consortium. SHARP capitalized on previous research associated with the Second Texas Air Quality Study and the development of the State Implementation Plan (SIP) for the Houston-Galveston-Brazoria (HGB) ozone nonattainment area. These earlier studies pointed to an apparent deficit in ozone production in the SIP attainment demonstration model despite the enhancement of simulated emissions of highly reactive volatile organic compounds in accordance with the findings of the original Texas Air Quality Study in 2000. The scientific hypothesis underlying the SHARP campaign was that there are significant undercounted primary and secondary sources of the radical precursors formaldehyde and nitrous acid in both heavily industrialized and more typical urban areas of Houston. These sources, if properly taken into account, could increase the production of ozone in the SIP model and the simulated efficacy of control strategies designed to bring the HGB area into ozone attainment. This overview summarizes the precursor studies and motivations behind SHARP, as well as the overall experimental design and major findings of the 2009 field campaign. These findings include significant combustion sources of formaldehyde at levels greater than accounted for in current point source emission inventories; the underestimation of formaldehyde and nitrous acid emissions, as well as CO/NOx and NO2/NOx ratios, by mobile source models; and the enhancement of nitrous acid by atmospheric organic aerosol.

  15. Improved source inversion from joint measurements of translational and rotational ground motions

    NASA Astrophysics Data System (ADS)

    Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.

    2017-12-01

    Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially in the local and regional distances a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hamper the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniquenesses for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according the lowest misfit or other constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we can show that the resolution of the source solutions can be improved significantly. Especially depth dependent components show significant improvement. Next to synthetic data of station networks, we also tested sparse-network and single station cases.

  16. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and usable method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors to five parameters by a rotation transformation. Then we use the fast midpoint-method algorithm to deduce the mathematical relationships between the target point and the parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we turn back to the error propagation of the primitive input errors in the stereo system and trace the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
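
    A minimal sketch of the first-order propagation step described above, with a numerical Jacobian standing in for the paper's analytic derivation; the triangulation function f and the five-parameter input covariance are assumptions supplied by the caller.

        import numpy as np

        def numerical_jacobian(f, x0, eps=1e-6):
            """Central-difference Jacobian of f: R^n -> R^m at x0."""
            x0 = np.asarray(x0, dtype=float)
            m = len(np.atleast_1d(f(x0)))
            J = np.zeros((m, x0.size))
            for i in range(x0.size):
                dx = np.zeros_like(x0)
                dx[i] = eps
                J[:, i] = (f(x0 + dx) - f(x0 - dx)) / (2.0 * eps)
            return J

        def propagate_covariance(f, x0, sigma_in):
            """First-order output covariance: Sigma_out = J Sigma_in J^T."""
            J = numerical_jacobian(f, x0)
            return J @ sigma_in @ J.T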

  17. Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J. Kenneth

    2000-10-15

    A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.

  18. NO(x) Concentrations in the Upper Troposphere as a Result of Lightning

    NASA Technical Reports Server (NTRS)

    Penner, Joyce E.

    1998-01-01

    Upper tropospheric NO(x) controls, in part, the distribution of ozone in this greenhouse-sensitive region of the atmosphere. Many factors control NO(x) in this region. As a result it is difficult to assess uncertainties in anthropogenic perturbations to NO(x) from aircraft, for example, without understanding the role of the other major NO(x) sources in the upper troposphere. These include in situ sources (lightning, aircraft), convection from the surface (biomass burning, fossil fuels, soils), stratospheric intrusions, and photochemical recycling from HNO3. This work examines the separate contribution to upper tropospheric "primary" NO(x) from each source category and uses two different chemical transport models (CTMs) to represent a range of possible atmospheric transport. Because aircraft emissions are tied to particular pressure altitudes, it is important to understand whether those emissions are placed in the model stratosphere or troposphere and to assess whether the models can adequately differentiate stratospheric air from tropospheric air. We examine these issues by defining a point-by-point "tracer tropopause" in order to differentiate stratosphere from troposphere in terms of NO(x) perturbations. Both models predict similar zonal average peak enhancements of primary NO(x) due to aircraft (approximately 10-20 parts per trillion by volume (pptv) in both January and July); however, the placement of this peak is primarily in a region of large stratospheric influence in one model and centered near the level evaluated as the tracer tropopause in the second. Below the tracer tropopause, both models show negligible NO(x) derived directly from the stratospheric source. Also, they predict a typically low background of 1-20 pptv NO(x) when tropospheric HNO3 is constrained to 100 pptv. The two models calculate large differences in the total background NO(x) (defined as the source of NO(x) from lightning + stratosphere + surface + HNO3) when using identical loss frequencies for NO(x). This difference is primarily due to differing treatments of vertical transport. An improved diagnosis of this transport that is relevant to NO(x) requires either measurements of a surface-based tracer with a substantially shorter lifetime than Rn-222 or diagnosis and mapping of tracer correlations with different source signatures. Because of differences in transport by the two models we cannot constrain the source of NO(x) from lightning through comparison of average model concentrations with observations of NO(x).

  19. A moving medium formulation for prediction of propeller noise at incidence

    NASA Astrophysics Data System (ADS)

    Ghorbaniasl, Ghader; Lacor, Chris

    2012-01-01

    This paper presents a time domain formulation for the sound field radiated by moving bodies in a uniform steady flow with arbitrary orientation. The aim is to provide a formulation for the prediction of noise from a body so that the effects of crossflow on a propeller can be modeled in the time domain. An established theory of noise generation by a moving source is combined with the moving medium Green's function for the derivation of the formulation. A formula with a Doppler factor is developed because it is more easily interpreted and more helpful in examining the physics of the system. Based on the technique presented, the source of asymmetry of the sound field can be explained in terms of the physics of a moving source. It is shown that the derived formulation can be interpreted as an extension of formulations 1 and 1A of Farassat, based on the Ffowcs Williams and Hawkings (FW-H) equation, to moving medium problems. Computational results for a stationary monopole and dipole point source in a moving medium, a rotating point force in crossflow, a model helicopter blade at incidence, and a propeller case with subsonic tips at incidence verify the formulation.
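
    For a source moving with Mach vector \mathbf{M}, the factor in question is the familiar Doppler amplification

        \frac{1}{|1 - M_r|}, \qquad M_r = \mathbf{M}\cdot\hat{\mathbf{r}},

    with \hat{\mathbf{r}} the unit vector from source to observer; in the moving-medium formulation an analogous factor appears, but with the kinematics referred to the convected Green's function rather than to a medium at rest.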

  20. A new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Ma, Yayun; Han, Shaokun; Wang, Yulin; Liu, Fei; Zhai, Yu

    2018-06-01

    One of the most important goals of research on three-dimensional nonscanning laser imaging systems is the improvement of the illumination system. In this paper, a new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array is proposed. This array is obtained using a fiber array connected to a laser array with each unit laser having independent control circuits. This system uses a point-to-point imaging process, which is realized using the exact corresponding optical relationship between the point-light-source array and a linear-mode avalanche photodiode array detector. The complete working process of this system is explained in detail, and the mathematical model of this system containing four equations is established. A simulated contrast experiment and two real contrast experiments which use the simplified setup without a laser array are performed. The final results demonstrate that unlike a conventional three-dimensional nonscanning laser imaging system, the proposed system meets all the requirements of an eligible illumination system. Finally, the imaging performance of this system is analyzed under defocusing situations, and analytical results show that the system has good defocusing robustness and can be easily adjusted in real applications.

  1. Point X-ray sources in the SNR G 315.4-2.30 (MSH 14-63, RCW 86)

    NASA Astrophysics Data System (ADS)

    Gvaramadze, V. V.; Vikhlinin, A. A.

    2003-04-01

    We report the results of a search for a point X-ray source (stellar remnant) in the southwest protrusion of the supernova remnant G 315.4-2.30 (MSH 14-63, RCW 86) using the archival data of the Chandra X-ray Observatory. The search was motivated by a hypothesis that G 315.4-2.30 is the result of an off-centered cavity supernova explosion of a moving massive star, which ended its evolution just near the edge of the main-sequence wind-driven bubble. This hypothesis implies that the southwest protrusion in G 315.4-2.30 is the remainder of a pre-existing bow shock-like structure created by the interaction of the supernova progenitor's wind with the interstellar medium and that the actual location of the supernova blast center is near the center of this hemispherical structure. We have discovered two point X-ray sources in the "proper" place. One of the sources has an optical counterpart with a photographic magnitude of 13.38 +/- 0.40, while the spectrum of the source can be fitted with an optically thin plasma model. We interpret this source as a foreground active star of late spectral type. The second source has no optical counterpart to a limiting magnitude of ~21. The spectrum of this source can be fitted almost equally well with several simple models (power law: photon index = 1.87; two-temperature blackbody: kT1 = 0.11 keV, R1 = 2.34 km and kT2 = 0.71 keV, R2 = 0.06 km; blackbody plus power law: kT = 0.07 keV, photon index = 2.3). We interpret this source as a candidate stellar remnant (neutron star), while the photon index and non-thermal luminosity of the source (almost the same as those of the Vela pulsar and the recently discovered pulsar PSR J0205+6449 in the supernova remnant 3C 58) suggest that it can be a young "ordinary" pulsar.

  2. Multiple magma emplacement and its effect on the superficial deformation: hints from analogue models

    NASA Astrophysics Data System (ADS)

    Montanari, Domenico; Bonini, Marco; Corti, Giacomo; del Ventisette, Chiara

    2017-04-01

    To test the effect exerted by multiple magma emplacement on the deformation pattern, we have run analogue models with synchronous, as well as diachronous, magma injection from different, aligned inlets. The distance between injection points, as well as the activation in time of injection points, was varied for each model. Our model results show how the position and activation in time of injection points (which reproduce multiple magma batches in nature) strongly influence model evolution. In the case of synchronous injection at different inlets, the intrusions and associated surface deformation were elongated. Forced folds and annular bounding reverse faults were quite elliptical, with the main axis of the elongated dome trending sub-parallel to the direction of the magma input points. Model results also indicate that injection from multiple aligned sources could reproduce the same features as systems associated with planar feeder dikes, thereby suggesting that caution should be taken when trying to infer the feeding areas on the basis of the deformation features observed at the surface or in seismic profiles. Diachronous injection from different injection points showed that the deformation observed at the surface does not necessarily reflect the location and/or geometry of the feeders. Most notably, these experiments suggest that coeval magma injection from different sources favors lateral migration of magma rather than vertical growth, promoting the development of laterally interconnected intrusions. Recently, some authors (Magee et al., 2014, 2016; Schofield et al., 2015) have suggested, based on seismic reflection data analysis, that interconnected sills and inclined sheets can facilitate the transport of magma over great vertical distances and laterally for large distances. Intrusions and volcanoes fed by sill complexes may thus be laterally offset significantly from the melt source. Our model results strongly support these findings, by reproducing in the laboratory a strong lateral magma migration, and suggesting a possible mechanism. The models also confirmed that lateral magma migration could take place with little or no accompanying surface deformation. The research leading to these results has received funding from the European Community's Seventh Framework Programme under grant agreement No. 608553 (Project IMAGE). References: Magee et al., 2014. Basin Research, v. 26, p. 85-105, doi:10.1111/bre.12044. Magee et al., 2016. Geosphere, v. 12, p. 809-841, ISSN: 1553-040X. Schofield et al., 2015. Basin Research, v. 29, p. 41-63, doi:10.1111/bre.12164.

  3. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour

    PubMed Central

    Moranda, Arianna

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities. PMID:29270328

  4. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour.

    PubMed

    Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi

    2017-01-01

    A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities.
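
    A minimal sketch of the PCA stage of this procedure, assuming a samples-by-chemicals concentration matrix (235 samples by 28 chemicals in the study); the ratio-matching refinement applied to the PCA output is not reproduced here.

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_source_candidates(conc, n_components=8):
            """Standardize a samples-by-chemicals matrix and return PCA
            loadings, per-sample component scores, and explained variance."""
            X = np.asarray(conc, dtype=float)
            X = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each chemical
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(X)              # sample scores
            return pca.components_, scores, pca.explained_variance_ratio_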

  5. A screening-level modeling approach to estimate nitrogen ...

    EPA Pesticide Factsheets

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess the risk that surface waters exceed numerical nutrient standards, leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate the sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability of a nutrient load exceeding a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and the risk of standard exceedance were estimated.
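
    A minimal sketch of the export-coefficient estimate of annual non-point loading described above, with point sources and atmospheric deposition added on; the coefficients are placeholders, not values from WQM-TMDL-N.

        # Illustrative TN export coefficients (kg/ha/yr), not calibrated values.
        EXPORT_COEFF = {"row_crop": 15.0, "pasture": 5.0,
                        "forest": 2.0, "urban": 9.0}

        def annual_tn_load(area_ha, point_source_kg=0.0, deposition_kg=0.0):
            """Annual catchment TN load (kg/yr) from land-use areas (ha)."""
            nps = sum(EXPORT_COEFF[lu] * a for lu, a in area_ha.items())
            return nps + point_source_kg + deposition_kg

        load = annual_tn_load({"row_crop": 1200.0, "forest": 800.0},
                              point_source_kg=2500.0)   # 18000 + 1600 + 2500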

  6. Origin and Fate of Phosphorus In The Seine River Watershed

    NASA Astrophysics Data System (ADS)

    Némery, J.; Garnier, J.; Billen, G.; Meybeck, M.; Morel, C.

    In large human-impacted river systems like the Seine basin, phosphorus originates from both diffuse sources, i.e. runoff on agricultural soils, and point sources that are generally well localised and quantified, i.e. industrial and domestic sewage. On the basis of our biogeochemical model of the Seine river's ecological functioning (RIVERSTRAHLER: Billen et al., 1994; Garnier et al., 1995), a reduction of eutrophication and a better oxygenation of the larger stream orders could only be obtained by reducing P point sources by 80%. We consider here P sources, pathways, and budgets through a nested approach covering the Blaise sub-basin (600 km2, cattle breeding), the Grand Morin (1200 km2, agricultural), the Marne (12 000 km2, agricultural/urbanized), and the whole Seine catchment (65 000 km2, 17 M inhabitants). Particulate P mobility is also studied by the 32P isotopic exchange method developed in agronomy (Fardeau, 1993; Morel, 1995). The progressive reduction of the polyphosphate content of washing powders and phosphorus retention in sewage treatment plants over the last ten years have led to a marked relative decrease of P point sources with regard to the diffuse ones, particularly for the Paris megacity (10 M inhabitants). Major P inputs to the Marne basin are fertilizers (17 000 x 10^6 g P y^-1) and treated wastewaters (400 x 10^6 g P y^-1). The riverine output (900 x 10^6 g P y^-1) is one-third associated with suspended matter (TSS) and two-thirds present as dissolved P-PO4^3-. Most fertilizer P is therefore retained on soils and exported in the food supply. First results on P mobility show an important proportion of potentially remobilised P from TSS used for phytoplankton development (stream orders 5 to 8) and from deposited sediment used by macrophytes (stream orders 2 to 5). These kinetics of P exchange will improve the P sub-model in the whole-basin ecological model.

  7. Space-Borne Laser Altimeter Geolocation Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Fang, J.; Ai, Y.

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESat satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error, and range measurement error, on the geolocation accuracy of the laser spot is analysed through simulated experiments. The reasons why these sources influence geolocation accuracy differently in different directions are discussed, and, to satisfy the accuracy requirements for laser control points, a design index for each error source is put forward.
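
    For uncorrelated error sources, such an error propagation equation has the standard first-order form

        \sigma_G^2 = \sum_i \left( \frac{\partial G}{\partial x_i} \right)^2 \sigma_{x_i}^2,

    where G is the geolocation function of the platform position, attitude, pointing angle, and range inputs x_i; the differing partial derivatives are what make the same input error budget produce different geolocation errors in different directions.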

  8. An innovative use of instant messaging technology to support a library's single-service point.

    PubMed

    Horne, Andrea S; Ragon, Bart; Wilson, Daniel T

    2012-01-01

    A library service model that provides reference and instructional services by summoning reference librarians from a single service point is described. The system utilizes Libraryh3lp, an open-source, multioperator instant messaging system. The selection and refinement of this solution and technical challenges encountered are explored, as is the design of public services around this technology, usage of the system, and best practices. This service model, while a major cultural and procedural change at first, is now a routine aspect of customer service for this library.

  9. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.

  10. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site, if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location information thus takes the form of a group of points at the station locations. However, for practical purposes, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the methodology of NS/FS classification to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the 2007 Noto-hanto earthquake, 2008 Iwate-Miyagi earthquake, and 2008 Wenchuan earthquake. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes with magnitude about 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
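
    The quoted empirical classifier translates directly into code; a minimal sketch, with the units of Za (peak vertical acceleration) and Hv (peak horizontal velocity) assumed to be cm/s^2 and cm/s:

        import numpy as np

        def near_source_probability(za, hv):
            """P(station within 10 km of the fault rupture), per the
            empirical function quoted in the abstract."""
            f = 6.046 * np.log10(za) + 7.885 * np.log10(hv) - 27.091
            return 1.0 / (1.0 + np.exp(-f))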

  11. NON-POINT SOURCE POLLUTION

    EPA Science Inventory

    Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...

  12. Investigating the effects of point source and nonpoint source pollution on the water quality of the East River (Dongjiang) in South China

    USGS Publications Warehouse

    Wu, Yiping; Chen, Ji

    2013-01-01

    Understanding the physical processes of point source (PS) and nonpoint source (NPS) pollution is critical to evaluate river water quality and identify major pollutant sources in a watershed. In this study, we used the physically-based hydrological/water quality model, Soil and Water Assessment Tool, to investigate the influence of PS and NPS pollution on the water quality of the East River (Dongjiang in Chinese) in southern China. Our results indicate that NPS pollution was the dominant contribution (>94%) to nutrient loads except for mineral phosphorus (50%). A comprehensive Water Quality Index (WQI) computed using eight key water quality variables demonstrates that water quality is better upstream than downstream despite the higher level of ammonium nitrogen found in upstream waters. Also, the temporal (seasonal) and spatial distributions of nutrient loads clearly indicate the critical time period (from late dry season to early wet season) and pollution source areas within the basin (middle and downstream agricultural lands), which resource managers can use to accomplish substantial reduction of NPS pollutant loadings. Overall, this study helps our understanding of the relationship between human activities and pollutant loads and further contributes to decision support for local watershed managers to protect water quality in this region. In particular, the methods presented such as integrating WQI with watershed modeling and identifying the critical time period and pollutions source areas can be valuable for other researchers worldwide.
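
    The exact WQI formulation is not given in the abstract; a common weighted-average construction, shown purely as an illustrative assumption, rescales each key variable to a 0-100 sub-score and combines the sub-scores with weights:

        def wqi(sub_scores, weights):
            """Weighted arithmetic WQI from matching dicts of 0-100
            sub-scores and weights (hypothetical values below)."""
            return (sum(weights[v] * sub_scores[v] for v in sub_scores)
                    / sum(weights[v] for v in sub_scores))

        scores = {"DO": 80.0, "NH4_N": 55.0, "TP": 60.0}    # 3 of 8 variables
        weights = {"DO": 0.20, "NH4_N": 0.15, "TP": 0.10}
        print(round(wqi(scores, weights), 1))               # 67.2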

  13. User's Guide for the Agricultural Non-Point Source (AGNPS) Pollution Model Data Generator

    USGS Publications Warehouse

    Finn, Michael P.; Scheidt, Douglas J.; Jaromack, Gregory M.

    2003-01-01

    BACKGROUND: Throughout this user guide, we refer to datasets that we used in conjunction with developing of this software for supporting cartographic research and producing the datasets to conduct research. However, this software can be used with these datasets or with more 'generic' versions of data of the appropriate type. For example, throughout the guide, we refer to national land cover data (NLCD) and digital elevation model (DEM) data from the U.S. Geological Survey (USGS) at a 30-m resolution, but any digital terrain model or land cover data at any appropriate resolution will produce results. Another key point to keep in mind is to use a consistent data resolution for all the datasets per model run. The U.S. Department of Agriculture (USDA) developed the Agricultural Nonpoint Source (AGNPS) pollution model of watershed hydrology in response to the complex problem of managing nonpoint sources of pollution. AGNPS simulates the behavior of runoff, sediment, and nutrient transport from watersheds that have agriculture as their prime use. The model operates on a cell basis and is a distributed parameter, event-based model. The model requires 22 input parameters. Output parameters are grouped primarily by hydrology, sediment, and chemical output (Young and others, 1995). Elevation, land cover, and soil are the base data from which to extract the 22 input parameters required by the AGNPS. For automatic parameter extraction, follow the general process described in this guide of extraction from the geospatial data through the AGNPS Data Generator to generate input parameters required by the pollution model (Finn and others, 2002).

  14. Influence of Diffuse Sources of Water Pollution in the Basin of the Volga River

    NASA Astrophysics Data System (ADS)

    Vasilchenco, O.

    The intensive development of industry and agriculture and the great growth of cities in the last decades have resulted in increased consumption and deterioration of natural waters. Different anthropogenic loads change the regime characteristics of water objects and cause depletion and qualitative degradation of water resources. Sources of pollution are divided into two classes: controlled and uncontrolled. The first includes industrial and domestic wastewater disposal; their discharge and concentrations of pollutants are quite stable. These sources of pollution are identified as "point" sources. Surface run-off from cities, industrial platforms, agricultural objects, navigation, and recreation is not controlled, has a dispersed nature, and is identified as diffuse. Pollution from such sources is estimated by computation. Quantitative estimation of the amount of pollution reaching water objects is a complicated and independent problem, requiring a significant amount of full-scale observations and information on the movement of dissolved and suspended fragments. According to available guidelines, the part of the pollutant entering water objects is about 1-10%. Mathematical modeling is used for the estimation of pollution mass and transport. Preliminary calculations of contaminant transport for different territories under anthropogenic impact in the Volga river basin were made for both point and non-point sources of pollution. The data obtained made it possible to analyze the correlation of contaminant volumes coming from different pollution sources.

  15. Gravity-height correlations for unrest at calderas

    NASA Astrophysics Data System (ADS)

    Berrino, G.; Rymer, H.; Brown, G. C.; Corrado, G.

    1992-11-01

    Calderas represent the sites of the world's most serious volcanic hazards. Although eruptions are not frequent at such structures on the scale of human lifetimes, there are nevertheless often physical changes at calderas that are measurable over periods of years or decades. Such calderas are said to be in a state of unrest, and it is by studying the nature of this unrest that we may begin to understand the dynamics of eruption precursors. Here we review combined gravity and elevation data from several restless calderas, and present new data on their characteristic signatures during periods of inflation and deflation. We find that unless the Bouguer gravity anomaly at a caldera is extremely small, the free-air gradient used to correct gravity data for observed elevation changes must be the measured or calculated gradient, and not the theoretical gradient, use of which may introduce significant errors. In general, there are two models that fit most of the available data. The first involves a Mogi-type point source, and the second is a Bouguer-type infinite horizontal plane source. The density of the deforming material (usually a magma chamber) is calculated from the gravity and ground deformation data, and the best fitting model is, to a first approximation, the one producing the most realistic density. No realistic density is obtained where there are real density changes, or where the data do not fit the point source or slab model. We find that a point source model fits most of the available data, and that most data are for periods of caldera inflation. The limited examples of deflation from large silicic calderas indicate that the amount of mass loss, or magma drainage, is usually much less than the mass gain during the preceding magma intrusion. In contrast, deflationary events at basaltic calderas formed in extensional tectonic environments are associated with more significant mass loss as magma is injected into the associated fissure swarms.
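
    For reference, the Mogi point-source model referred to above predicts axisymmetric surface uplift (for a Poisson solid, \nu = 1/4)

        u_z(r) = \frac{3\,\Delta V}{4\pi} \, \frac{d}{\left(d^2 + r^2\right)^{3/2}},

    where \Delta V is the source volume change at depth d and r the radial distance from the source axis; combining the uplift with the co-located gravity change is what yields the density estimate for the deforming material.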

  16. Improved moving source photometry with TRIPPy

    NASA Astrophysics Data System (ADS)

    Alexandersen, Mike; Fraser, Wesley Cristopher

    2017-10-01

    Photometry of moving sources is more complicated than for stationary sources, because the sources trail their signal out over more pixels than a point source of the same magnitude. Using a circular aperture of same size as would be appropriate for point sources can cut out a large amount of flux if a moving source moves substantially relative to the size of the aperture during the exposure, resulting in underestimated fluxes. Using a large circular aperture can mitigate this issue at the cost of a significantly reduced signal to noise compared to a point source, as a result of the inclusion of a larger background region within the aperture.Trailed Image Photometry in Python (TRIPPy) solves this problem by using a pill-shaped aperture: the traditional circular aperture is sliced in half perpendicular to the direction of motion and separated by a rectangle as long as the total motion of the source during the exposure. TRIPPy can also calculate the appropriate aperture correction (which will depend both on the radius and trail length of the pill-shaped aperture), and has features for selecting good PSF stars, creating a PSF model (convolved moffat profile + lookup table) and selecting a custom sky-background area in order to ensure no other sources contribute to the background estimate.In this poster, we present an overview of the TRIPPy features and demonstrate the improvements resulting from using TRIPPy compared to photometry obtained by other methods with examples from real projects where TRIPPy has been implemented in order to obtain the best-possible photometric measurements of Solar System objects. While TRIPPy has currently mainly been used for Trans-Neptunian Objects, the improvement from using the pill-shaped aperture increases with source motion, making TRIPPy highly relevant for asteroid and centaur photometry as well.
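
    A minimal sketch of the pill aperture geometry described above: a circle of radius r split perpendicular to the motion, with a 2r-wide rectangle of length equal to the trail inserted between the halves. For zero trail it reduces to the ordinary circular aperture. (Function names are illustrative, not the TRIPPy API.)

        import numpy as np

        def pill_area(r, trail_length):
            """Area (px^2) of a pill aperture: full circle + 2r x L rectangle."""
            return np.pi * r**2 + 2.0 * r * trail_length

        area_moving = pill_area(4.0, 10.0)   # ~130.3 px^2 for a 10 px trail
        area_static = pill_area(4.0, 0.0)    # ~50.3 px^2, the circular case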

  17. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  18. Modeling urban air pollution in Budapest using WRF-Chem model

    NASA Astrophysics Data System (ADS)

    Kovács, Attila; Leelőssy, Ádám; Lagzi, István; Mészáros, Róbert

    2017-04-01

    Air pollution has been a major problem for urban areas since the industrial revolution, including Budapest, the capital and largest city of Hungary. The main anthropogenic sources of air pollutants are industry, traffic, and residential heating. In this study, we investigated the contribution of major industrial point sources to the urban air pollution in Budapest. We used the WRF (Weather Research and Forecasting) nonhydrostatic mesoscale numerical weather prediction system online coupled with chemistry (WRF-Chem, version 3.6). The model was configured with three nested domains with grid spacings of 15, 5 and 1 km, representing Central Europe, the Carpathian Basin, and Budapest with its surrounding area. Emission data were obtained from the National Environmental Information System. The point source emissions were summed in their respective cells in the second nested domain according to latitude-longitude coordinates. The main examined air pollutants were carbon monoxide (CO) and nitrogen oxides (NOx), from which the secondary compound ozone (O3) forms through chemical reactions. Simulations were performed under different weather conditions and compared to observations from the automatic monitoring sites of the Hungarian Air Quality Network. Our results show that the industrial emissions play a relatively weak role in the urban background air pollution, confirming the effect of industrial developments and regulations in recent decades. However, a few significant industrial sources and their impact areas have been identified.

  19. Analysis of 3d Building Models Accuracy Based on the Airborne Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.; Pilarska, M.; Charyton, J.; Bakuła, K.

    2018-05-01

    Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" covers several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of the normal heights of the reference point cloud with respect to the tested planes, combined with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfill the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.

  20. Ground Motion Simulation for a Large Active Fault System using Empirical Green's Function Method and the Strong Motion Prediction Recipe - a Case Study of the Noubi Fault Zone -

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Kumamoto, T.; Fujita, M.

    2005-12-01

    The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the recipe of strong motion prediction (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity area and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for each case that had a different rupture starting point. In one case, rupture started at the center of the Nukumi Fault, while in another case, rupture started on the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-NET. This difference is considered to relate to the directivity effect associated with the direction of rupture propagation. Moreover, it became clear that the horizontal velocities obtained by assuming the cascade model were underestimated by more than one standard deviation of the empirical relation of Si and Midorikawa (1999). The scaling and cascade models showed an approximately 6.4-fold difference for the case in which the rupture started along the southeastern edge of the Umehara Fault, at observation point GIF020. This difference is significantly large in comparison with the effect of different rupture starting points, and shows that it is important to base scenario earthquake assumptions on active fault datasets before establishing the source characterization model. The distribution map of seismic intensity for the 1891 Noubi Earthquake also suggests that the synthetic waveforms in the southeastern Noubi Fault zone may be underestimated. Our results indicate that outer fault parameters (e.g., earthquake moment) related to the construction of scenario earthquakes influence strong motion prediction more than inner fault parameters such as the rupture starting point. Based on these methods, we will predict strong motion for the approximately 140 to 150 km long Itoigawa-Shizuoka Tectonic Line.
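
    The omega-square scaling against which the small event's spectrum was checked corresponds to the moment-rate spectrum

        \dot{M}(f) = \frac{M_0}{1 + (f/f_c)^2},

    flat at the seismic moment M_0 below the corner frequency f_c and falling as f^{-2} above it, which is what makes such a small event usable as an empirical Green's function over the band of interest.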

  1. Using Lunar Observations to Validate In-Flight Calibrations of Clouds and Earth Radiant Energy System Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    The validation of in-orbit instrument performance requires stability in both the instrument and the calibration source. This paper describes a method of validation using lunar observations scanning near full moon by the Clouds and Earth Radiant Energy System (CERES) instruments. Unlike internal calibrations, the Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, in-orbit observations have become standardized and compiled for Flight Models-1 and -2 aboard the Terra satellite, for Flight Models-3 and -4 aboard the Aqua satellite, and, beginning 2012, for Flight Model-5 aboard Suomi-NPP. Instrument performance parameters that can be gleaned include detector gain, pointing accuracy, and the static detector point response function. Lunar observations are used to examine the stability of all three detectors on each of these instruments from 2006 to present. This validation method has yielded results showing trends per CERES data channel of 1.2% per decade or less.

  2. Watershed Models for Predicting Nitrogen Loads from Artificially Drained Lands

    Treesearch

    R. Wayne Skaggs; George M. Chescheir; Glenn Fernandez; Devendra M. Amatya

    2003-01-01

    Non-point sources of pollutants originate at the field scale but water quality problems usually occur at the watershed or basin scale. This paper describes a series of models developed for poorly drained watersheds. The models use DRAINMOD to predict hydrology at the field scale and a range of methods to predict channel hydraulics and nitrogen transport. In-stream...

  3. Gravity fields of the solar system

    NASA Technical Reports Server (NTRS)

    Zendell, A.; Brown, R. D.; Vincent, S.

    1975-01-01

    The most frequently used formulations of the gravitational field are discussed and a standard set of models for the gravity fields of the earth, moon, sun, and other massive bodies in the solar system are defined. The formulas are presented in standard forms, some with instructions for conversion. A point-source or inverse-square model, which represents the external potential of a spherically symmetrical mass distribution by a mathematical point mass without physical dimensions, is considered. An oblate spheroid model is presented, accompanied by an introduction to zonal harmonics. This spheroid model is generalized and forms the basis for a number of the spherical harmonic models which were developed for the earth and moon. The triaxial ellipsoid model is also presented. These models and their application to space missions are discussed.
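
    As a concrete illustration of the inverse-square (point-source) model, and of how a zonal-harmonic term extends it, the sketch below evaluates the point-mass acceleration plus the standard first-order J2 correction for an oblate body. The Earth constants are standard published values, and the formulation follows the usual textbook expressions rather than any specific model in the report.

    ```python
    import numpy as np

    MU = 3.986004418e14   # Earth's GM [m^3/s^2]
    RE = 6378137.0        # Earth's equatorial radius [m]
    J2 = 1.08262668e-3    # Earth's second zonal harmonic

    def gravity(r_vec, with_j2=True):
        """Point-mass (inverse-square) acceleration plus the J2 term."""
        x, y, z = r_vec
        r = np.linalg.norm(r_vec)
        a = -MU * np.asarray(r_vec, dtype=float) / r ** 3
        if with_j2:
            f = 1.5 * J2 * MU * RE ** 2 / r ** 5
            a += f * np.array([x * (5 * z ** 2 / r ** 2 - 1),
                               y * (5 * z ** 2 / r ** 2 - 1),
                               z * (5 * z ** 2 / r ** 2 - 3)])
        return a

    # Test point: 700 km altitude above the equator
    print(gravity(np.array([RE + 700e3, 0.0, 0.0])))
    ```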

  4. Watershed modeling of dissolved oxygen and biochemical oxygen demand using a hydrological simulation Fortran program.

    PubMed

    Liu, Zhijun; Kieffer, Janna M; Kingery, William L; Huddleston, David H; Hossain, Faisal

    2007-11-01

    Several inland water bodies in the St. Louis Bay watershed have been identified as being potentially impaired due to low levels of dissolved oxygen (DO). In order to calculate the total maximum daily loads (TMDL), a standard watershed model supported by the U.S. Environmental Protection Agency, Hydrological Simulation Program Fortran (HSPF), was used to simulate water temperature, DO, and biochemical oxygen demand (BOD). Both point and non-point sources of BOD were included in the watershed modeling. The developed model was calibrated over two periods, 1978 to 1986 and 2000 to 2001; the simulated DO closely matched the observed data and captured the seasonal variations. The model represented the general trend and average condition of observed BOD. Water temperature and BOD decay are the major factors that affect DO simulation, whereas nutrient processes, including nitrification, denitrification, and the phytoplankton cycle, have slight impacts. The calibrated water quality model provides a representative linkage between the sources of BOD and in-stream DO/BOD concentrations. The developed input parameters in this research could be extended to similar coastal watersheds for TMDL determination and Best Management Practice (BMP) evaluation.

  5. Data-based diffraction kernels for surface waves from convolution and correlation processes through active seismic interferometry

    NASA Astrophysics Data System (ADS)

    Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc

    2018-05-01

    We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.

  6. Envelope of coda waves for a double couple source due to non-linear elasticity

    NASA Astrophysics Data System (ADS)

    Calisto, Ignacia; Bataille, Klaus

    2014-10-01

    Non-linear elasticity has recently been considered as a source of scattering, therefore contributing to the coda of seismic waves, in particular for the case of explosive sources. This idea is analysed further here, theoretically solving the expression for the envelope of coda waves generated by a point moment tensor in order to compare with earthquake data. For weak non-linearities, one can consider each point of the non-linear medium as a source of scattering within a homogeneous and linear medium, for which Green's functions can be used to compute the total displacement of scattered waves. These sources of scattering have specific radiation patterns depending on the incident and scattered P or S waves, respectively. In this approach, the coda envelope depends on three scalar parameters related to the specific non-linearity of the medium; however, these parameters only change the scale of the coda envelope. The shape of the coda envelope is sensitive to both the source time function and the intrinsic attenuation. We compare simulations using this model with data from earthquakes in Taiwan, with a good fit.

  7. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    NASA Astrophysics Data System (ADS)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity L_ν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  8. Estimating the susceptibility of surface water in Texas to nonpoint-source contamination by use of logistic regression modeling

    USGS Publications Warehouse

    Battaglin, William A.; Ulery, Randy L.; Winterstein, Thomas; Welborn, Toby

    2003-01-01

    In the State of Texas, surface water (streams, canals, and reservoirs) and ground water are used as sources of public water supply. Surface-water sources of public water supply are susceptible to contamination from point and nonpoint sources. To help protect sources of drinking water and to aid water managers in designing protective yet cost-effective and risk-mitigated monitoring strategies, the Texas Commission on Environmental Quality and the U.S. Geological Survey developed procedures to assess the susceptibility of public water-supply source waters in Texas to the occurrence of 227 contaminants. One component of the assessments is the determination of susceptibility of surface-water sources to nonpoint-source contamination. To accomplish this, water-quality data at 323 monitoring sites were matched with geographic information system-derived watershed-characteristic data for the watersheds upstream from the sites. Logistic regression models then were developed to estimate the probability that a particular contaminant will exceed a threshold concentration specified by the Texas Commission on Environmental Quality. Logistic regression models were developed for 63 of the 227 contaminants. Of the remaining contaminants, 106 were not modeled because monitoring data were available at less than 10 percent of the monitoring sites; 29 were not modeled because the contaminant was detected in less than 15 percent of the monitoring data; 27 were not modeled because of the lack of any monitoring data; and 2 were not modeled because threshold values were not specified.
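
    A hedged sketch of the modeling step: a logistic regression that maps watershed characteristics to the probability that a contaminant exceeds its threshold concentration. The feature names and synthetic data below are invented stand-ins for the GIS-derived predictors used in the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_sites = 323
    # Hypothetical predictors: % agricultural land, % urban land, log drainage area
    X = np.c_[rng.uniform(0, 80, n_sites),
              rng.uniform(0, 40, n_sites),
              rng.normal(5, 1, n_sites)]
    # Hypothetical exceedance indicator (1 = concentration above threshold)
    y = (0.03 * X[:, 0] + 0.05 * X[:, 1]
         + rng.normal(0, 1, n_sites) > 2.5).astype(int)

    model = LogisticRegression().fit(X, y)
    # Estimated probability that a new watershed exceeds the threshold
    print(model.predict_proba([[60.0, 10.0, 5.2]])[0, 1])
    ```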

  9. Assimilating Flow Data into Complex Multiple-Point Statistical Facies Models Using Pilot Points Method

    NASA Astrophysics Data System (ADS)

    Ma, W.; Jafarpour, B.

    2017-12-01

    We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
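
    A minimal sketch of the placement step, assuming the three information sources are available as gridded maps; the min-max normalization, equal weights, and top-n selection rule are illustrative choices, not the paper's exact scheme.

    ```python
    import numpy as np

    def normalize(m):
        """Scale a map to [0, 1] so the three criteria are comparable."""
        return (m - m.min()) / (m.max() - m.min() + 1e-12)

    def place_pilot_points(uncertainty, sensitivity, data_misfit,
                           n_points=10, w=(1.0, 1.0, 1.0)):
        """Rank grid cells by a weighted score; return the top-n locations."""
        score = (w[0] * normalize(uncertainty)
                 + w[1] * normalize(sensitivity)
                 + w[2] * normalize(data_misfit))
        flat = np.argsort(score.ravel())[::-1][:n_points]
        return np.column_stack(np.unravel_index(flat, score.shape))

    rng = np.random.default_rng(2)
    shape = (50, 50)
    pp = place_pilot_points(rng.random(shape), rng.random(shape),
                            rng.random(shape))
    print(pp)   # (row, col) indices of the candidate pilot points
    ```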

  10. Experimental and Analytical Studies of Shielding Concepts for Point Sources and Jet Noises.

    NASA Astrophysics Data System (ADS)

    Wong, Raymond Lee Man

    This analytical and experimental study explores concepts for jet noise shielding. Model experiments centre on solid planar shields, simulating engine-over-wing installations, and 'sugar scoop' shields. The tradeoff on effective shielding length is set by interference 'edge noise' as the shield trailing edge approaches the spreading jet. Edge noise is minimized by (i) hyperbolic cutouts, which trim off the portions of most intense interference between the jet flow and the barrier, and (ii) hybrid shields, a thermal refractive extension (a flame); for (ii) the tradeoff is combustion noise. In general, shielding attenuation increases steadily with frequency, following low-frequency enhancement by edge noise. Although broadband attenuation is typically only several dB, the reduction of the subjectively weighted perceived noise levels is higher. In addition, calculated ground contours of peak PN dB show a substantial contraction due to shielding: this reaches 66% for one of the 'sugar scoop' shields for the 90 PN dB contour. The experiments are complemented by analytical predictions, divided into an engineering scheme for jet noise shielding and a more rigorous analysis for point source shielding. The former approach combines point source shielding with a suitable jet source distribution. The results are synthesized into a predictive algorithm for jet noise shielding: the jet is modelled as a line distribution of incoherent sources with narrow-band frequency proportional to (axial distance)^-1. The predictive version agrees well with experiment (1 to 1.5 dB) up to moderate frequencies. The insertion loss deduced from the point source measurements for semi-infinite as well as finite rectangular shields agrees rather well with theoretical calculation based on the exact half-plane solution and the superposition of asymptotic closed-form solutions. An approximate theory, the Maggi-Rubinowicz line integral, is found to yield reasonable predictions for thin barriers including cutouts if a certain correction is applied. The more exact integral equation approach (solved numerically) is applied to a more demanding geometry: a half-round sugar scoop shield. The solutions of the integral equation derived from the Helmholtz formula in normal-derivative form show satisfactory agreement with measurements.

  11. LCS-1: a high-resolution global model of the lithospheric magnetic field derived from CHAMP and Swarm satellite observations

    NASA Astrophysics Data System (ADS)

    Olsen, Nils; Ravat, Dhananjay; Finlay, Christopher C.; Kother, Livia K.

    2017-12-01

    We derive a new model, named LCS-1, of Earth's lithospheric field based on four years (2006 September-2010 September) of magnetic observations taken by the CHAMP satellite at altitudes lower than 350 km, as well as almost three years (2014 April-2016 December) of measurements taken by the two lower Swarm satellites Alpha and Charlie. The model is determined entirely from magnetic 'gradient' data (approximated by finite differences): the north-south gradient is approximated by first differences of 15 s along-track data (for CHAMP and each of the two Swarm satellites), while the east-west gradient is approximated by the difference between observations taken by Swarm Alpha and Charlie. In total, we used 6.2 million data points. The model is parametrized by 35 000 equivalent point sources located on an almost equal-area grid at a depth of 100 km below the surface (WGS84 ellipsoid). The amplitudes of these point sources are determined by minimizing the misfit to the magnetic satellite 'gradient' data together with the global average of |Br| at the ellipsoid surface (i.e. applying an L1 model regularization of Br). In a final step, we transform the point-source representation into a spherical harmonic expansion. The model shows very good agreement with previous satellite-derived lithospheric field models at low degree (degree correlation above 0.8 for degrees n ≤ 133). Comparison with independent near-surface aeromagnetic data from Australia yields good agreement (coherence >0.55) at horizontal wavelengths down to at least 250 km, corresponding to spherical harmonic degree n ≈ 160. The LCS-1 vertical component and field intensity anomaly maps at Earth's surface show features similar to those exhibited by the WDMAM2 and EMM2015 lithospheric field models truncated at degree 185 in regions where they include near-surface data, and provide unprecedented detail where they do not. Example regions of improvement include the Bangui anomaly region in central Africa, the west African cratons, the East African Rift region, the Bay of Bengal, the southern 90°E ridge, the Cretaceous quiet zone south of the Walvis Ridge and the younger parts of the South Atlantic.

  12. A Spectroscopic and Photometric Study of Gravitational Microlensing Events

    NASA Astrophysics Data System (ADS)

    Kane, Stephen R.

    2000-08-01

    Gravitational microlensing has generated a great deal of scientific interest over recent years. This has been largely due to the realization of its wide-reaching applications, such as the search for dark matter, the detection of planets, and the study of Galactic structure. A significant observational advance has been that most microlensing events can be identified in real-time while the source is still being lensed. More than 400 microlensing events have now been detected towards the Galactic bulge and Magellanic Clouds by the microlensing survey teams EROS, MACHO, OGLE, DUO, and MOA. The real-time detection of these events allows detailed follow-up observations with much denser sampling, both photometrically and spectroscopically. The research undertaken in this project on photometric studies of gravitational microlensing events has been performed as a member of the PLANET (Probing Lensing Anomalies NETwork) collaboration. This is a worldwide collaboration formed in the early part of 1995 to study microlensing anomalies - departures from an achromatic point source, point lens light curve - through rapidly-sampled, multi-band, photometry. PLANET has demonstrated that it can achieve 1% photometry under ideal circumstances, making PLANET observations sensitive to detection of Earth-mass planets which require characterization of 1%--2% deviations from a standard microlensing light curve. The photometric work in this project involved over 5 months using the 1.0 m telescope at Canopus Observatory in Australia, and 3 separate observing runs using the 0.9 m telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Methods were developed to reduce the vast amount of photometric data using the image analysis software MIDAS and the photometry package DoPHOT. Modelling routines were then written to analyse a selection of the resulting light curves in order to detect any deviation from an achromatic point source - point lens light curve. The photometric results presented in this thesis are from observations of 34 microlensing events over three consecutive bulge seasons. These results are presented along with a discussion of the observations and the data reduction procedures. The colour-magnitude diagrams indicate that the microlensed sources are main sequence and red clump giant stars. Most of the events appear to exhibit standard Paczynski point source - point lens curves whilst a few deviate significantly from the standard model. Various microlensing models that include anomalous structure are fitted to a selection of the observed events resulting in the discovery of a possible binary source event. These fitted events are used to estimate the sensitivity to extra-solar planets and it is found that the sampling rate for these events was insufficient by about a factor of 7.5 for detecting a Jupiter-mass planet. This result assumes that deviations of 5% can be reliably detected. If microlensing is caused predominantly by bulge stars, as has been suggested by Kiraga and Paczynski, the lensed stars should have larger extinction than other observed stars since they would preferentially be located at the far side of the Galactic bulge. Hence, spectroscopy of Galactic microlensing events may be used as a tool for studying the kinematics and extinction effects in the Galactic bulge. The spectroscopic work in this project involved using Kurucz model spectra to create theoretical extinction effects for various spectral classes towards the Galactic centre. 
These extinction effects are then used to interpret spectroscopic data taken with the 3.6 m ESO telescope. These data consist of a sample of microlensed stars towards the Galactic bulge and are used to derive the extinction offsets of the lensed source with respect to the average population and a measurement of the fraction of bulge-bulge lensing is made. Hence, it is shown statistically that the microlensed sources are generally located on the far side of the Galactic bulge. Measurements of the radial velocities of these sources are used to determine the kinematic properties of the far side of the Galactic bulge.

  13. Generic effective source for scalar self-force calculations

    NASA Astrophysics Data System (ADS)

    Wardell, Barry; Vega, Ian; Thornburg, Jonathan; Diener, Peter

    2012-05-01

    A leading approach to the modeling of extreme mass ratio inspirals involves the treatment of the smaller mass as a point particle and the computation of a regularized self-force acting on that particle. In turn, this computation requires knowledge of the regularized retarded field generated by the particle. A direct calculation of this regularized field may be achieved by replacing the point particle with an effective source and solving directly a wave equation for the regularized field. This has the advantage that all quantities are finite and require no further regularization. In this work, we present a method for computing an effective source which is finite and continuous everywhere, and which is valid for a scalar point particle in arbitrary geodesic motion in an arbitrary background spacetime. We explain in detail various technical and practical considerations that underlie its use in several numerical self-force calculations. We consider as examples the cases of a particle in a circular orbit about Schwarzschild and Kerr black holes, and also the case of a particle following a generic timelike geodesic about a highly spinning Kerr black hole. We provide numerical C code for computing an effective source for various orbital configurations about Schwarzschild and Kerr black holes.

  14. Induction heating pure vapor source of high temperature melting point materials on electron cyclotron resonance ion source

    NASA Astrophysics Data System (ADS)

    Kutsumi, Osamu; Kato, Yushi; Matsui, Yuuki; Kitagawa, Atsushi; Muramatsu, Masayuki; Uchida, Takashi; Yoshida, Yoshikazu; Sato, Fuminobu; Iida, Toshiyuki

    2010-02-01

    The required multicharged ions are produced from pure solid materials with high melting points in an electron cyclotron resonance ion source. We develop an evaporator using induction heating (IH) with a multilayer induction coil, which is made from bare molybdenum or tungsten wire without water cooling and surrounds the pure vaporized material. We optimize the shapes of the induction coil and vaporized materials and the operation of the rf power supply. We conducted experiments to investigate the reproducibility, the stability of operation, and the heating efficiency. The IH evaporator produces pure material vapor because the materials, directly heated by eddy currents, have no contact with insulating materials, which are usually sources of impurity gas. The power and the frequency of the induction currents range from 100 to 900 W and from 48 to 23 kHz, respectively. The working pressure is about 10^-4 to 10^-3 Pa. We measure the temperature of the vaporized materials with different shapes and compare them with the results of modeling. We estimate the efficiency of the IH vapor source. We are aiming at evaporating materials with melting points higher than that of iron.

  15. Column Number Density Expressions Through M = 0 and M = 1 Point Source Plumes Along Any Straight Path

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2016-01-01

    Analytical expressions for column number density (CND) are developed for optical line of sight paths through a variety of steady free molecule point source models including directionally-constrained effusion (Mach number M = 0) and flow from a sonic orifice (M = 1). Sonic orifice solutions are approximate, developed using a fair simulacrum fitted to the free molecule solution. Expressions are also developed for a spherically-symmetric thermal expansion (M = 0). CND solutions are found for the most general paths relative to these sources and briefly explored. It is determined that the maximum CND from a distant location through directed effusion and sonic orifice cases occurs along the path parallel to the source plane that intersects the plume axis. For the effusive case this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 4/3. For high Mach number cases the maximum CND will be found along the axial centerline path. Keywords: column number density, plume flows, outgassing, free molecule flow.
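
    The factor-of-two result for the effusive case can be checked numerically. For a cosine-law point source with number density n = A cos(theta) / r^2 (an assumption consistent with free-molecule effusion), the column density along a path parallel to the source plane through the plume axis is exactly twice that along the axial ray from the intersection point:

    ```python
    import numpy as np

    A, d = 1.0, 1.0   # arbitrary amplitude; standoff distance from the source

    # Axial ray: n = A / r^2 from r = d outward; analytically A / d.
    r = np.linspace(d, 1e4, 1_000_000)
    cnd_axial = np.trapz(A / r ** 2, r)

    # Parallel path through the axis at height d: r^2 = d^2 + s^2 and
    # cos(theta) = d / r, so n = A * d / (d^2 + s^2)**1.5; analytically 2A / d.
    s = np.linspace(-1e4, 1e4, 2_000_001)
    cnd_parallel = np.trapz(A * d / (d ** 2 + s ** 2) ** 1.5, s)

    print(cnd_parallel / cnd_axial)   # -> approximately 2.0
    ```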

  16. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Scott E., E-mail: sedavids@utmb.edu

    Purpose: A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code and the versatility of a practical analytical multisource model and which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans submitted to the clinical trial community from any institution, providing a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high-dose, high-gradient, and low-dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a singular methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  17. TRIPPy: Python-based Trailed Source Photometry

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley C.; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michael E.; Pike, Rosemary E.; Kavelaars, JJ; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey

    2016-05-01

    TRIPPy (TRailed Image Photometry in Python) uses a pill-shaped aperture, a rectangle described by three parameters (trail length, angle, and radius) to improve photometry of moving sources over that done with circular apertures. It can generate accurate model and trailed point-spread functions from stationary background sources in sidereally tracked images. Appropriate aperture correction provides accurate, unbiased flux measurement. TRIPPy requires numpy, scipy, matplotlib, Astropy (ascl:1304.002), and stsci.numdisplay; emcee (ascl:1303.002) and SExtractor (ascl:1010.064) are optional.

  18. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.E.; Baker, M.C.

    1999-07-25

    The development of neutron detectors makes extensive use of predictions of detector response through Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.

  19. Multiwavelength study of Chandra X-ray sources in the Antennae

    NASA Astrophysics Data System (ADS)

    Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.

    2011-01-01

    We use Wide-field InfraRed Camera (WIRC) infrared (IR) images of the Antennae (NGC 4038/4039) together with the extensive catalogue of 120 X-ray point sources to search for counterpart candidates. Using our proven frame-tie technique, we find 38 X-ray sources with IR counterparts, almost doubling the number of IR counterparts to X-ray sources that we first identified. In our photometric analysis, we consider the 35 IR counterparts that are confirmed star clusters. We show that the clusters with X-ray sources tend to be brighter, Ks ≈ 16 mag, with (J-Ks) = 1.1 mag. We then use archival Hubble Space Telescope (HST) images of the Antennae to search for optical counterparts to the X-ray point sources. We employ our previous IR-to-X-ray frame-tie as an intermediary to establish a precise optical-to-X-ray frame-tie with <0.6 arcsec rms positional uncertainty. Due to the high optical source density near the X-ray sources, we determine that we cannot reliably identify counterparts. Comparing the HST positions to the 35 identified IR star cluster counterparts, we find optical matches for 27 of these sources. Using Bruzual-Charlot spectral evolutionary models, we find that most clusters associated with an X-ray source are massive and young, ~10^6 yr.

  20. Astrophysical signatures of leptonium

    NASA Astrophysics Data System (ADS)

    Ellis, Simon C.; Bland-Hawthorn, Joss

    2018-01-01

    More than 10^43 positrons annihilate every second in the centre of our Galaxy yet, despite four decades of observations, their origin is still unknown. Many candidates have been proposed, such as supernovae and low mass X-ray binaries. However, these models are difficult to reconcile with the distribution of positrons, which are highly concentrated in the Galactic bulge, and therefore require specific propagation of the positrons through the interstellar medium. Alternative sources include dark matter decay, or the supermassive black hole, both of which would have a naturally high bulge-to-disc ratio. The chief difficulty in reconciling models with the observations is the intrinsically poor angular resolution of gamma-ray observations, which cannot resolve point sources. Essentially all of the positrons annihilate via the formation of positronium. This gives rise to the possibility of observing recombination lines of positronium emitted before the atom annihilates. These emission lines would be in the UV and the NIR, giving an increase in angular resolution of a factor of 10^4 compared to gamma-ray observations, and allowing the discrimination between point sources and truly diffuse emission. Analogously to the formation of positronium, it is possible to form atoms of true muonium and true tauonium. Since muons and tauons are intrinsically unstable, the formation of such leptonium atoms will be localised to their places of origin. Thus observations of true muonium or true tauonium can provide another way to distinguish between truly diffuse sources such as dark matter decay, and an unresolved distribution of point sources. Contribution to the Topical Issue "Low Energy Positron and Electron Interactions", edited by James Sullivan, Ron White, Michael Bromley, Ilya Fabrikant and David Cassidy.

  1. CRITICAL EVALUATION OF THE DIFFUSION HYPOTHESIS IN THE THEORY OF POROUS MEDIA VOLATILE ORGANIC COMPOUND (VOC) SOURCES AND SINKS

    EPA Science Inventory

    The paper proposes three alternative, diffusion-limited mathematical models to account for volatile organic compound (VOC) interactions with indoor sinks, using the linear isotherm model as a reference point. (NOTE: Recent reports by both the U.S. EPA and a study committee of the...

  2. Mesh-free distributed point source method for modeling viscous fluid motion between disks vibrating at ultrasonic frequency.

    PubMed

    Wada, Yuji; Kundu, Tribikram; Nakamura, Kentaro

    2014-08-01

    The distributed point source method (DPSM) is extended to model wave propagation in viscous fluids. Appropriate estimation of attenuation and boundary-layer formation due to fluid viscosity is necessary for ultrasonic devices used for acoustic streaming or ultrasonic levitation. The equations for DPSM modeling in viscous fluids are derived in this paper by decomposing the linearized viscous fluid equations into two components: dilatational and rotational. By considering complex P- and S-wave numbers, the acoustic fields in viscous fluids can be calculated following calculation steps similar to those used for wave propagation modeling in solids. From the calculations reported, the precision of DPSM is found to be comparable to that of the finite element method (FEM) for a fundamental ultrasonic field problem. The particle velocity parallel to the two bounding surfaces of the viscous fluid layer between two rigid plates (one in motion and one stationary) is calculated. The finite element results agree well with the DPSM results, which were generated faster than the transient FEM results.
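
    The decomposition can be made concrete with the standard complex wavenumbers: the rotational (shear) component obeys a diffusion equation, giving k_s = (1 + i) sqrt(omega*rho / (2*mu)) and a viscous boundary-layer thickness delta = sqrt(2*mu / (rho*omega)), while the dilatational component can be assigned the classical (Stokes) absorption coefficient. This is a generic textbook formulation, not the paper's full derivation, and the water-like property values are illustrative.

    ```python
    import numpy as np

    rho = 1000.0    # density [kg/m^3]
    mu = 1.0e-3     # shear viscosity [Pa s]
    mu_b = 2.4e-3   # bulk viscosity [Pa s] (illustrative value for water)
    c = 1480.0      # sound speed [m/s]
    f = 1.0e6       # ultrasonic frequency [Hz]
    w = 2 * np.pi * f

    # Rotational (S-like) component: diffusion equation -> complex wavenumber
    k_s = (1 + 1j) * np.sqrt(w * rho / (2 * mu))
    delta = np.sqrt(2 * mu / (rho * w))   # boundary-layer thickness [m]

    # Dilatational (P-like) component with classical (Stokes) absorption
    alpha = w ** 2 * (4 * mu / 3 + mu_b) / (2 * rho * c ** 3)
    k_p = w / c + 1j * alpha

    print(f"k_s = {k_s:.3e} 1/m, boundary layer = {delta:.3e} m")
    print(f"k_p = {k_p:.6e} 1/m (alpha = {alpha:.3e} Np/m)")
    ```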

  3. Application of Water Quality Model of Jordan River to Evaluate Climate Change Effects on Eutrophication

    NASA Astrophysics Data System (ADS)

    Van Grouw, B.

    2016-12-01

    The Jordan River is a 51-mile-long freshwater stream in Utah that provides drinking water to more than 50% of Utah's population. Various point and nonpoint sources introduce an excess of nutrients into the river. This excess induces eutrophication, which results in an uninhabitable environment for aquatic life and is expected to be exacerbated by climate change. Adaptive measures must be evaluated based on predictions of climate variation impacts on eutrophication and ecosystem processes in the Jordan River. A Water Quality Analysis Simulation Program (WASP) model was created to analyze the data acquired from a Total Maximum Daily Load (TMDL) study conducted on the Jordan River. Eutrophication is modeled based on levels of phosphates and nitrates from point and nonpoint sources, temperature, and solar radiation, and the model simulates the growth of phytoplankton and periphyton in the river. The model will be applied to assess how water quality in the Jordan River is affected by variations in the timing and intensity of spring snowmelt and runoff during drought in the valley, and the resulting effects on eutrophication in the river.

  4. Evaluating changes in water quality with respect to nonpoint source nutrient management strategies in the Chesapeake Bay Watershed

    NASA Astrophysics Data System (ADS)

    Keisman, J.; Sekellick, A.; Blomquist, J.; Devereux, O. H.; Hively, W. D.; Johnston, M.; Moyer, D.; Sweeney, J.

    2014-12-01

    Chesapeake Bay is a eutrophic ecosystem with periodic hypoxia and anoxia, algal blooms, diminished submerged aquatic vegetation, and degraded stocks of marine life. Knowledge of the effectiveness of actions taken across the watershed to reduce nitrogen (N) and phosphorus (P) loads to the bay (i.e. "best management practices" or BMPs) is essential to its restoration. While nutrient inputs from point sources (e.g. wastewater treatment plants and other industrial and municipal operations) are tracked, inputs from nonpoint sources, including atmospheric deposition, farms, lawns, septic systems, and stormwater, are difficult to measure. Estimating reductions in nonpoint source inputs attributable to BMPs requires compilation and comparison of data on water quality, climate, land use, point source discharges, and BMP implementation. To explore the relation of changes in nonpoint source inputs and BMP implementation to changes in water quality, a subset of small watersheds (those containing at least 10 years of water quality monitoring data) within the Chesapeake Watershed were selected for study. For these watersheds, data were compiled on geomorphology, demographics, land use, point source discharges, atmospheric deposition, and agricultural practices such as livestock populations, crop acres, and manure and fertilizer application. In addition, data on BMP implementation for 1985-2012 were provided by the Environmental Protection Agency Chesapeake Bay Program Office (CBPO) and the U.S. Department of Agriculture. A spatially referenced nonlinear regression model (SPARROW) provided estimates attributing N and P loads associated with receiving waters to different nutrient sources. A recently developed multiple regression technique ("Weighted Regressions on Time, Discharge and Season" or WRTDS) provided an enhanced understanding of long-term trends in N and P loads and concentrations. A suite of deterministic models developed by the CBPO was used to estimate expected nutrient load reductions attributable to BMPs. Further quantification of the relation of land-based nutrient sources and BMPs to water quality in the bay and its tributaries must account for inconsistency in BMP data over time and uncertainty regarding BMP locations and effectiveness.

  5. Measurement of Phased Array Point Spread Functions for Use with Beamforming

    NASA Technical Reports Server (NTRS)

    Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis

    2011-01-01

    Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented seeks to investigate direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that experimental measurements trend with theory with regard to source offset. The source shows expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
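
    To illustrate why the PSF matters, a toy beam map can be formed by convolving a two-source field with a known PSF and then recovered by FFT-based Wiener deconvolution. This is a generic sketch of the deconvolution idea, not the specific algorithms (e.g., DAMAS or CLEAN) used in array processing, and the Gaussian PSF is an assumption.

    ```python
    import numpy as np

    n = 128
    yy, xx = np.mgrid[:n, :n]
    psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 3.0 ** 2))
    psf /= psf.sum()

    truth = np.zeros((n, n))
    truth[40, 40] = 1.0    # two point sources of different strength
    truth[80, 90] = 0.5

    # Beam map = true source map convolved with the (centered) PSF
    H = np.fft.fft2(np.fft.ifftshift(psf))
    beam = np.real(np.fft.ifft2(np.fft.fft2(truth) * H))

    # Wiener deconvolution with a small regularization constant
    eps = 1e-3
    recovered = np.real(np.fft.ifft2(np.fft.fft2(beam) * np.conj(H)
                                     / (np.abs(H) ** 2 + eps)))

    print(truth.max(), beam.max(), recovered.max())  # peaks approach truth
    ```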

  6. General Electromagnetic Model for the Analysis of Complex Systems (GEMACS) Computer Code Documentation (Version 3). Volume 3, Part 4.

    DTIC Science & Technology

    1983-09-01

    GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3). The BDM Corporation. Final Technical Report, February 1981 - July 1983. METHOD: the electric field at a segment observation point due to source patch j is expressed in terms of the current components along the t1 and t2 directions on the source patch.

  7. On the assessment of spatial resolution of PET systems with iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Cherry, Simon R.; Qi, Jinyi

    2016-03-01

    Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
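
    A minimal sketch of the FWHM measurement itself, assuming a one-dimensional profile through the reconstructed point source: subtract the background, find the half-maximum level, and locate the two crossings by linear interpolation. The Gaussian test profile and pixel size are invented.

    ```python
    import numpy as np

    def fwhm(profile, pixel_size=1.0, background=0.0):
        """FWHM of a 1D profile via linear interpolation at half maximum."""
        p = np.asarray(profile, dtype=float) - background
        half = p.max() / 2.0
        above = np.where(p >= half)[0]
        i0, i1 = above[0], above[-1]
        left = i0 - 1 + (half - p[i0 - 1]) / (p[i0] - p[i0 - 1])
        right = i1 + (half - p[i1]) / (p[i1 + 1] - p[i1])
        return (right - left) * pixel_size

    x = np.arange(101)
    sigma = 4.0
    profile = 100 * np.exp(-(x - 50) ** 2 / (2 * sigma ** 2)) + 10  # + background
    print(fwhm(profile, pixel_size=0.5, background=10.0))  # ~ 2.355 * sigma * 0.5
    ```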

  8. A simplified approach to analyze the effectiveness of NO2 and SO2 emission reduction of coal-fired power plant from OMI retrievals

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Wu, Lixin; Zhou, Yuan; Li, Ding

    2017-04-01

    Nitrogen oxides (NOX) and sulfur dioxide (SO2) emissions from coal combustion, which are oxidized quickly in the atmosphere and result in secondary aerosol formation and acid deposition, are a main cause of China's regional fog-haze pollution. An extensive literature has estimated quantitatively the lifetimes and emissions of NO2 and SO2 for large point sources such as coal-fired power plants and cities using satellite measurements. However, few of these methods are suitable for sources located in a heterogeneously polluted background. In this work, we present a simplified emission effective radius extraction model for point sources to study the NO2 and SO2 reduction trend in China with complex polluted sources. First, to find out the time range during which actual emissions could be derived from satellite observations, the spatial distribution characteristics of mean daily, monthly, seasonal, and annual concentrations of OMI NO2 and SO2 around a single power plant were analyzed and compared. Then, a 100 km × 100 km geographical grid with a 1 km step was established around the source, and the mean concentration of all satellite pixels covering each grid point was calculated by the area-weighted pixel-averaging approach. The emission effective radius is defined by the concentration gradient values near the power plant. Finally, the developed model is employed to investigate the characteristics and evolution of NO2 and SO2 emissions and to verify the effectiveness of flue gas desulfurization (FGD) and selective catalytic reduction (SCR) devices applied in coal-fired power plants over the 10-year period from 2006 to 2015. It can be observed that the spatial distribution pattern of NO2 and SO2 concentrations in the vicinity of a large coal-burning source was affected not only by the emissions of the source itself, but also by pollutant transmission and diffusion driven by meteorological factors in different seasons. The proposed model can be used to identify the effective operation time of FGD and SCR equipment in coal-fired power plants.
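
    A sketch of the effective-radius idea on a synthetic field: average the gridded concentration in 1 km rings around the source and take the radius at which the magnitude of the radial gradient, beyond its steepest point, first falls below a cutoff. The Gaussian plume and the cutoff value are placeholders for the OMI retrievals and the paper's actual criterion.

    ```python
    import numpy as np

    # Synthetic 100 km x 100 km grid at 1 km steps with a Gaussian "plume"
    x = np.arange(-50, 51)
    xx, yy = np.meshgrid(x, x)
    conc = 5.0 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 15.0 ** 2)) + 0.5

    r = np.hypot(xx, yy)
    r_bins = np.arange(0, 50)
    # Ring-averaged radial concentration profile
    profile = np.array([conc[(r >= a) & (r < a + 1)].mean() for a in r_bins])

    grad = np.gradient(profile)       # radial concentration gradient per km
    cutoff = 0.01                     # illustrative gradient threshold
    steep = np.argmax(np.abs(grad))   # radius of steepest decline
    eff = r_bins[steep + np.argmax(np.abs(grad[steep:]) < cutoff)]
    print(f"effective emission radius ~ {eff} km")
    ```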

  9. A Model for Selection of Eyespots on Butterfly Wings.

    PubMed

    Sekimura, Toshio; Venkataraman, Chandrasekhar; Madzvamuse, Anotida

    2015-01-01

    The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot while other wing cells do not. We illustrate that the key to understanding focus point selection may lie in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model, posed in the interior of each wing cell, that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and to moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. In order to complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating the focus point distributions observed in nature. We therefore conclude that changes in the proximal boundary conditions are sufficient to explain the empirically observed distribution of eyespot focus points on the entire wing surface. The model predicts, subject to experimental verification, that the source strength of the activator at the proximal boundary should be lower in wing cells in which focus points form than in those that lack focus points. The model suggests that the number and locations of eyespot foci on the wing disc could be largely controlled by two kinds of gradients along two different directions: the first is the gradient in spatially varying parameters, such as the reaction rate, along the anterior-posterior direction on the proximal boundary of the wing cells, and the second is the gradient in source values of the activator along the veins in the proximal-distal direction of the wing cell.
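
    The role of the proximal Dirichlet boundary can be caricatured with a one-dimensional steady diffusion-decay equation along the wing-cell midline, u'' = u / lambda^2, with a fixed value u0 at the proximal vein and a no-flux distal end: the interior concentration, and hence any threshold-based readout, is controlled entirely by the boundary value. This is an illustrative toy reduction, not the authors' two-stage reaction-diffusion model, and all parameter values are invented.

    ```python
    import numpy as np

    def midline_profile(u0, L=1.0, lam=0.3, n=200):
        """Steady solution of u'' = u / lam^2 with u(0) = u0, u'(L) = 0:
        u(x) = u0 * cosh((L - x) / lam) / cosh(L / lam)."""
        x = np.linspace(0.0, L, n)
        return x, u0 * np.cosh((L - x) / lam) / np.cosh(L / lam)

    threshold = 0.05   # illustrative activation threshold at the distal end
    for u0 in (0.2, 1.0):
        x, u = midline_profile(u0)
        print(f"u0 = {u0}: distal level = {u[-1]:.3f}, "
              f"above threshold: {u[-1] > threshold}")
    ```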

  10. A modification of the Regional Nutrient Management model (ReNuMa) to identify long-term changes in riverine nitrogen sources

    NASA Astrophysics Data System (ADS)

    Hu, Minpeng; Liu, Yanmei; Wang, Jiahui; Dahlgren, Randy A.; Chen, Dingjiang

    2018-06-01

    Source apportionment is critical for guiding development of efficient watershed nitrogen (N) pollution control measures. The ReNuMa (Regional Nutrient Management) model, a semi-empirical, semi-process-oriented model with modest data requirements, has been widely used for riverine N source apportionment. However, the ReNuMa model contains limitations for addressing long-term N dynamics by ignoring temporal changes in atmospheric N deposition rates and N-leaching lag effects. This work modified the ReNuMa model by revising the source code to allow yearly changes in atmospheric N deposition and incorporation of N-leaching lag effects into N transport processes. The appropriate N-leaching lag time was determined from cross-correlation analysis between annual watershed individual N source inputs and riverine N export. Accuracy of the modified ReNuMa model was demonstrated through analysis of a 31-year water quality record (1980-2010) from the Yongan watershed in eastern China. The revisions considerably improved the accuracy (Nash-Sutcliffe coefficient increased by ∼0.2) of the modified ReNuMa model for predicting riverine N loads. The modified model explicitly identified annual and seasonal changes in contributions of various N sources (i.e., point vs. nonpoint source, surface runoff vs. groundwater) to riverine N loads as well as the fate of watershed anthropogenic N inputs. Model results were consistent with previously modeled or observed lag time lengths as well as changes in riverine chloride and nitrate concentrations during the low-flow regime and available N levels in agricultural soils of this watershed. The modified ReNuMa model is applicable for addressing long-term changes in riverine N sources, providing decision-makers with critical information for guiding watershed N pollution control strategies.
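
    The lag-selection step can be sketched directly: compute the correlation between annual N inputs and riverine N export at each candidate lag and keep the lag that maximizes it. The synthetic series below stand in for the watershed data.

    ```python
    import numpy as np

    def best_lag(inputs, export, max_lag=10):
        """Return the lag (years) maximizing corr(inputs[t], export[t + lag])."""
        corrs = [np.corrcoef(inputs[:len(inputs) - lag], export[lag:])[0, 1]
                 for lag in range(max_lag + 1)]
        return int(np.argmax(corrs)), corrs

    rng = np.random.default_rng(3)
    n_years = 31
    inputs = np.cumsum(rng.normal(1.0, 0.5, n_years))   # rising N inputs
    true_lag = 4
    export = np.r_[np.zeros(true_lag), inputs[:-true_lag]] \
             + rng.normal(0, 0.3, n_years)

    lag, _ = best_lag(inputs, export)
    print(f"estimated N-leaching lag = {lag} years")    # expect ~4
    ```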

  11. Using Socratic Questioning in the Classroom.

    ERIC Educational Resources Information Center

    Moore, Lori; Rudd, Rick

    2002-01-01

    Describes the Socratic questioning method and discusses its use in the agricultural education classroom. Presents a four-step model: origin and source of point of view; support, reasons, evidence, and assumptions; conflicting views; and implications and consequences. (JOW)

  12. Integrated numerical modeling of a laser gun injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H.; Benson, S.; Bisognano, J.

    1993-06-01

    CEBAF is planning to incorporate a laser gun injector into the linac front end as a high-charge cw source for a high-power free electron laser and nuclear physics. This injector consists of a DC laser gun, a buncher, a cryounit and a chicane. The performance of the injector is predicted based on integrated numerical modeling using POISSON, SUPERFISH and PARMELA. The point-by-point method incorporated into PARMELA by McDonald is chosen for the space charge treatment. The concept of "conditioning for final bunching" is employed to vary several crucial parameters of the system for achieving the highest peak current while maintaining low emittance and low energy spread. Extensive parameter variation studies show that the design will perform beyond the specifications for FEL operations aimed at industrial applications and fundamental scientific research. The calculations also show that the injector will perform as an extremely bright cw electron source.

  13. [L-THIA-based management design for controlling urban non-point source pollution].

    PubMed

    Guo, Qing-Hai; Yang, Liu; Ke-Ming, Ma

    2007-11-01

    L-THIA Model was used to simulate the amounts of NPS pollutants in two catchments of the Sanjiao watershed (Sj1, Sj2) in Hanyang district; the total simulated NPS loads in Sj1 and Sj2 were 1.82 × 10^4 kg and 1.38 × 10^5 kg, respectively. Based on the "resource-sink" theory and the interaction of pattern with process, a series of BMPs, including green roofs, grassland, porous pavement, infiltration trenches, vegetative filter strips, and wet ponds, were optimized, and the effects of the BMPs were simulated along the surface runoff pathway. The results show that the total pollutant outputs entering Sj1 and Sj2 account for 14.65% and 6.57%, respectively. Combining the L-THIA model and BMPs in series is a proper measure for non-point source pollution control and urban development planning at the watershed or regional scale.

  14. Prediction and Warning of Transported Turbulence in Long-Haul Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Ellrod, Gary P. (Inventor); Spence, Mark D. (Inventor); Shipley, Scott T. (Inventor)

    2017-01-01

    An aviation flight planning system is used to predict and warn of the intersection of flight paths with transported meteorological disturbances, such as transported turbulence and related phenomena. Sensed data and transmitted data provide real-time and forecast information on meteorological conditions. Models of transported meteorological disturbances are applied to the received transmitted data and the sensed data, and the resulting correlation is used to identify the source characteristics of transported meteorological disturbances and to predict their trajectories from source to intersection with the flight path in space and time. The correlated data are provided to a visualization system that projects the coordinates of a point of interest (POI) in a selected point of view (POV) and displays the flight track and the predicted transported-meteorological-disturbance warnings for the flight crew.

  15. Techniques for determining physical zones of influence

    DOEpatents

    Hamann, Hendrik F; Lopez-Marrero, Vanessa

    2013-11-26

    Techniques for analyzing flow of a quantity in a given domain are provided. In one aspect, a method for modeling regions in a domain affected by a flow of a quantity is provided which includes the following steps. A physical representation of the domain is provided. A grid that contains a plurality of grid-points in the domain is created. Sources are identified in the domain. Given a vector field that defines a direction of flow of the quantity within the domain, a boundary value problem is defined for each of one or more of the sources identified in the domain. Each of the boundary value problems is solved numerically to obtain a solution for the boundary value problems at each of the grid-points. The boundary problem solutions are post-processed to model the regions affected by the flow of the quantity on the physical representation of the domain.

  16. Modeling Nitrogen Processing in Northeast US River Networks

    NASA Astrophysics Data System (ADS)

    Whittinghill, K. A.; Stewart, R.; Mineau, M.; Wollheim, W. M.; Lammers, R. B.

    2013-12-01

    Due to increased nitrogen (N) pollution from anthropogenic sources, the need for aquatic ecosystem services such as N removal has also increased. River networks provide a buffering mechanism that retains or removes anthropogenic N inputs. However, the effectiveness of N removal in rivers may decline with increased loading and, consequently, excess N is eventually delivered to estuaries. We used a spatially distributed river network N removal model developed within the Framework for Aquatic Modeling in the Earth System (FrAMES) to examine the geography of N removal capacity of Northeast river systems under various land use and climate conditions. FrAMES accounts for accumulation and routing of runoff, water temperatures, and serial biogeochemical processing using reactivity derived from the Lotic Intersite Nitrogen Experiment (LINX2). Nonpoint N loading is driven by empirical relationships with land cover developed from previous research in Northeast watersheds. Point source N loading from wastewater treatment plants is estimated as a function of the population served and the volume of water discharged. We tested model results using historical USGS discharge data and N data from historical grab samples and recently initiated continuous measurements from in-situ aquatic sensors. Model results for major Northeast watersheds illustrate hot spots of ecosystem service activity (i.e. N removal) using high-resolution maps and basin profiles. As expected, N loading increases with increasing suburban or agricultural land use area. Network scale N removal is highest during summer and autumn when discharge is low and river temperatures are high. N removal as the % of N loading increases with catchment size and decreases with increasing N loading, suburban land use, or agricultural land use. Catchments experiencing the highest network scale N removal generally have N inputs (both point and non-point sources) located in lower order streams. Model results can be used to better predict nutrient loading to the coastal ocean across a broad range of current and future climate variability.
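
    The point-source loading rule described above (a function of population served and discharge volume) can be sketched as follows; the per-capita rate and treatment removal efficiency are illustrative assumptions, not FrAMES parameters.

```python
# Hedged sketch of a point-source N load of the kind described above; the
# per-capita rate and removal efficiency are assumptions, not FrAMES values.

PER_CAPITA_N_KG_YR = 4.4   # assumed N in raw wastewater, kg N/person/yr
REMOVAL_EFFICIENCY = 0.5   # assumed fraction of N removed by treatment

def point_source_n_load(population_served, removal=REMOVAL_EFFICIENCY):
    """Annual N load (kg/yr) delivered to the river by one treatment plant."""
    return population_served * PER_CAPITA_N_KG_YR * (1.0 - removal)

def effluent_concentration(load_kg_yr, discharge_m3_per_day):
    """Mean effluent concentration (mg N/L) implied by a load and discharge."""
    litres_per_year = discharge_m3_per_day * 365.0 * 1000.0
    return load_kg_yr * 1e6 / litres_per_year   # kg -> mg

# Example: a plant serving 50,000 people and discharging 20,000 m3/day
load = point_source_n_load(50_000)
print(load, effluent_concentration(load, 20_000))
```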

  17. Neutron crosstalk between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.

    2015-05-01

    We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.
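
    The Feynman point model referenced above rests on the excess variance-to-mean ratio of counts collected in fixed time gates; the crosstalk correction is the paper's contribution and is not reproduced here. A minimal sketch of the underlying statistic, with synthetic counts:

```python
# Minimal sketch of the Feynman-Y (excess variance-to-mean) statistic that
# the point model builds on; counts are synthetic, and the crosstalk
# correction described in the abstract is not reproduced here.
import numpy as np

def feynman_y(gate_counts):
    """Y = var/mean - 1; zero for a Poisson (uncorrelated) source."""
    c = np.asarray(gate_counts, dtype=float)
    return c.var() / c.mean() - 1.0

rng = np.random.default_rng(1)
print(feynman_y(rng.poisson(4.0, 100_000)))   # ~0 for an uncorrelated source
```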

  18. Compound simulator IR radiation characteristics test and calibration

    NASA Astrophysics Data System (ADS)

    Li, Yanhong; Zhang, Li; Li, Fan; Tian, Yi; Yang, Yang; Li, Zhuo; Shi, Rui

    2015-10-01

    Hardware-in-the-loop simulation can reproduce, inside the testing room, the physical radiation of targets and interference and the interception process of a product in flight. The simulation of the environment is particularly difficult because of the high radiation energy and the complicated interference model. Here, the development of IR scene generation produced by a fiber array imaging transducer with circumferential lamp spot sources is introduced. The IR simulation capability includes effective simulation of aircraft signatures and point-source IR countermeasures. Two point sources acting as interference can move in random directions in two dimensions. To simulate the process of interference release, the radiation and motion characteristics are tested. Through zero calibration of the simulator's optical axis, the radiation can be well projected onto the product detector. The test and calibration results show that the new compound simulator can be used in hardware-in-the-loop simulation trials.

  19. New Global Bathymetry and Topography Model Grids

    NASA Astrophysics Data System (ADS)

    Smith, W. H.; Sandwell, D. T.; Marks, K. M.

    2008-12-01

    A new version of the "Smith and Sandwell" global marine topography model is available in two formats. A one-arc-minute Mercator projected grid covering latitudes to +/- 80.738 degrees is available in the "img" file format. Also available is a 30-arc-second version in latitude and longitude coordinates from pole to pole, supplied as tiles covering the same areas as the SRTM30 land topography data set. The new effort follows the Smith and Sandwell recipe, using publicly available and quality-controlled single- and multi-beam echo soundings where possible and filling the gaps in the oceans with estimates derived from marine gravity anomalies observed by satellite altimetry. The altimeter data have been reprocessed to reduce the noise level and improve the spatial resolution [see Sandwell and Smith, this meeting]. The echo soundings database has grown enormously with new infusions of data from the U.S. Naval Oceanographic Office (NAVO), the National Geospatial-Intelligence Agency (NGA), hydrographic offices around the world volunteering through the International Hydrographic Organization (IHO), and many other agencies and academic sources worldwide. These new data contributions have filled many holes: 50% of ocean grid points are within 8 km of a sounding point, 75% are within 24 km, and 90% are within 57 km. However, in the remote ocean basins some gaps still remain: 5% of the ocean grid points are more than 85 km from the nearest sounding control, and 1% are more than 173 km away. Both versions of the grid include a companion grid of source file numbers, so that control points may be mapped and traced to sources. We have compared the new model to multi-beam data not used in the compilation and find that 50% of differences are less than 25 m, 95% of differences are less than 130 m, but a few large differences remain in areas of poor sounding control and large-amplitude gravity anomalies. Land values in the solution are taken from SRTM30v2, GTOPO30 and ICESat data. GEBCO has agreed to adopt this model and begin updating it in 2009. Ongoing tasks include building an uncertainty model and including information from the latest IBCAO map of the Arctic Ocean.

  20. Robust numerical electromagnetic eigenfunction expansion algorithms

    NASA Astrophysics Data System (ADS)

    Sainath, Kamalesh

    This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach, allowing one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. However, the technique leads to spurious wave scattering, and the computational accuracy degradation it induces requires analysis. The mathematical exposition, and the exhaustive simulation-based study and analysis of the limitations, of this novel tilted-layer modeling formulation constitute Chapter 7's main contribution.

  1. UNMIX Methods Applied to Characterize Sources of Volatile Organic Compounds in Toronto, Ontario

    PubMed Central

    Porada, Eugeniusz; Szyszkowicz, Mieczysław

    2016-01-01

    UNMIX, a receptor modeling routine from the U.S. Environmental Protection Agency (EPA), was used to model volatile organic compound (VOC) receptors at four urban sites in Toronto, Ontario. VOC ambient concentration data acquired in 2000-2009 for 175 VOC species at four air quality monitoring stations were analyzed. UNMIX, by performing multiple modeling attempts upon varying VOC menus while rejecting results that were not reliable, allowed for discriminating sources by their most consistent chemical characteristics. The method assessed occurrences of VOCs in sources typical of the urban environment (traffic, evaporative emissions of fuels, banks of fugitive inert gases), in industrial point sources (plastic-, polymer-, and metalworking manufactures), and in secondary sources (releases from water, sediments, and contaminated urban soil). The remote sensing and robust modeling used here produce chemical profiles of putative VOC sources that, if combined with known environmental fates of VOCs, can be used to assign physical sources' shares of VOC emissions into the atmosphere. This in turn provides a means of assessing the impact of environmental policies on one hand, and industrial activities on the other, on VOC air pollution. PMID:29051416

  2. Improving bioaerosol exposure assessments of composting facilities — Comparative modelling of emissions from different compost ages and processing activities

    NASA Astrophysics Data System (ADS)

    Taha, M. P. M.; Drew, G. H.; Tamer, A.; Hewings, G.; Jordinson, G. M.; Longhurst, P. J.; Pollard, S. J. T.

    We present bioaerosol source term concentrations from passive and active composting sources and compare emissions from green waste compost aged 1, 2, 4, 6, 8, 12 and 16 weeks. Results reveal that the age of compost has little effect on the bioaerosol concentrations emitted from passive windrow sources. However, emissions from turning compost during the early stages may be higher than during the later stages of the composting process. The bioaerosol emissions from passive sources were in the range of 10^3-10^4 cfu m^-3, with releases from active sources typically 1 log higher. We propose improvements to current risk assessment methodologies by examining emission rates and the differences between two air dispersion models for the prediction of downwind bioaerosol concentrations at off-site points of exposure. The SCREEN3 model provides a more precautionary estimate of the source depletion curves of bioaerosol emissions in comparison to ADMS 3.3. The results from both models predict that bioaerosol concentrations decrease to below typical background concentrations before 250 m, the distance at which the regulator in England and Wales may require a risk assessment to be completed.

  3. Evaluation of total phosphorus mass balance in the lower Boise River and selected tributaries, southwestern Idaho

    USGS Publications Warehouse

    Etheridge, Alexandra B.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with Idaho Department of Environmental Quality, developed spreadsheet mass-balance models for total phosphorus using results from three synoptic sampling periods conducted in the lower Boise River watershed during August and October 2012, and March 2013. The modeling reach spanned 46.4 river miles (RM) along the Boise River from Veteran's Memorial Parkway in Boise, Idaho (RM 50.2), to Parma, Idaho (RM 3.8). The USGS collected water-quality samples and measured streamflow at 14 main-stem Boise River sites, two Boise River north channel sites, two sites on the Snake River upstream and downstream of its confluence with the Boise River, and 17 tributary and return-flow sites. Additional samples were collected from treated effluent at six wastewater treatment plants and two fish hatcheries. The Idaho Department of Water Resources quantified diversion flows in the modeling reach. Total phosphorus mass-balance models were useful tools for evaluating sources of phosphorus in the Boise River during each sampling period. The timing of synoptic sampling allowed the USGS to evaluate phosphorus inputs to and outputs from the Boise River during irrigation season, shortly after irrigation ended, and soon before irrigation resumed. Results from the synoptic sampling periods showed important differences in surface-water and groundwater distribution and phosphorus loading. In late August 2012, substantial streamflow gains to the Boise River occurred from Middleton (RM 31.4) downstream to Parma (RM 3.8). Mass-balance model results indicated that point and nonpoint sources (including groundwater) contributed phosphorus loads to the Boise River during irrigation season. Groundwater exchange within the Boise River in October 2012 and March 2013 was not as considerable as that measured in August 2012. However, groundwater discharge to agricultural tributaries and drains during non-irrigation season was a large source of discharge and phosphorus in the lower Boise River in October 2012 and March 2013. Model results indicate that point sources represent the largest contribution of phosphorus to the Boise River year round, but that reductions in point and nonpoint source phosphorus loads may be necessary to achieve seasonal total phosphorus concentration targets at Parma (RM 3.8) from May 1 through September 30, as set by the 2004 Snake River-Hells Canyon Total Maximum Daily Load document. The mass-balance models do not account for biological or depositional instream processes, but are useful indicators of locations where appreciable phosphorus uptake or release by aquatic plants may occur.

  4. Probabilistic Analysis of Earthquake-Led Water Contamination: A Case of Sichuan, China

    NASA Astrophysics Data System (ADS)

    Yang, Yan; Li, Lin; Benjamin Zhan, F.; Zhuang, Yanhua

    2016-06-01

    The objective of this paper is to evaluate earthquake-induced point source and non-point source water pollution under a seismic hazard of 10% probability of exceedance in 50 years, using the minimum value of the water quality standard in Sichuan, China. The Soil Conservation Service curve number method for calculating the runoff depth in a single rainfall event, combined with a seismic damage index, was applied to estimate the potential degree of non-point source water pollution. To estimate the potential impact of point source water pollution, a comprehensive water pollution evaluation framework is constructed using a combination of the Water Quality Index and Seismic Damage Index methods. The four key findings of this paper are: (1) The water catchment with the highest factory concentration does not have the highest risk of non-point source water contamination induced by a potential earthquake. (2) The water catchments with the highest numbers of cumulative water pollutant types are typically located in the south-western parts of Sichuan, where the main river basins in the region flow through. (3) The most common pollutants in the sample factories studied are COD and NH3-N, which are found in all catchments; the least common pollutant is pathogens, found in the W1 catchment, which has the best rating in the water quality index. (4) Using the water quality index as a standardization parameter, parallel comparisons are made among the 16 water catchments. Only catchment W1 reaches level II water quality status, rated as moderately polluted, in events of earthquake-induced water contamination; all other areas suffer from severe water contamination with multiple pollution sources. The results from the data model are significant for urban planning commissions and businesses in strategically choosing factory locations to minimize potential hazardous impacts during an earthquake.
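
    The Soil Conservation Service curve number step used above is the standard SCS-CN event-runoff formula; a minimal sketch follows, with illustrative CN values rather than the paper's.

```python
# Minimal sketch of the SCS curve-number event-runoff step named above
# (standard SI formulation); the CN values are illustrative assumptions.

def scs_runoff_depth_mm(rainfall_mm, curve_number):
    """Runoff depth Q (mm) for a single rainfall event."""
    s = 25400.0 / curve_number - 254.0   # potential maximum retention, mm
    ia = 0.2 * s                         # initial abstraction
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Example: a 60 mm storm over urban land (CN ~ 90) versus forest (CN ~ 60)
print(scs_runoff_depth_mm(60, 90), scs_runoff_depth_mm(60, 60))
```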

  5. Separating Turbofan Engine Noise Sources Using Auto and Cross Spectra from Four Microphones

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2008-01-01

    The study of core noise from turbofan engines has become more important as noise from other sources, such as the fan and jet, has been reduced. A multiple-microphone and acoustic-source modeling method to separate correlated and uncorrelated sources is discussed. The auto- and cross-spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on a source couplet, consisting of a single incoherent monopole source with a single coherent monopole source, or a source triplet, consisting of a single incoherent monopole source with two coherent monopole point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method separates the low-frequency jet noise from the core noise at the nozzle exit. It is shown that at low power settings the core noise is a major contributor to the noise. Even at higher power settings, it can be more important than jet noise. However, at low frequencies, uncorrelated broadband noise and jet noise become the important factors as the engine power setting is increased.

  6. IKT 16: the first X-ray confirmed composite SNR in the SMC

    NASA Astrophysics Data System (ADS)

    Maitra, C.; Ballet, J.; Filipović, M. D.; Haberl, F.; Tiengo, A.; Grieve, K.; Roper, Q.

    2015-12-01

    Aims: IKT 16 is an X-ray and radio-faint supernova remnant (SNR) in the Small Magellanic Cloud (SMC). A detailed X-ray study of this SNR with XMM-Newton confirmed the presence of a hard X-ray source near its centre, indicating the detection of the first composite SNR in the SMC. With a dedicated Chandra observation we aim to resolve the point source and confirm its nature. We also acquire new ATCA observations of the source at 2.1 GHz with improved flux density estimates and resolution. Methods: We perform detailed spatial and spectral analysis of the source. With the highest resolution X-ray and radio images of the centre of the SNR available today, we resolve the source and confirm its pulsar wind nebula (PWN) nature. Further, we constrain the geometrical parameters of the PWN and perform spectral analysis for the point source and the PWN separately. We also test for radial variations of the PWN spectrum and its possible east-west asymmetry. Results: The X-ray source at the centre of IKT 16 can be resolved into a symmetrical elongated feature centred on a point source, the putative pulsar. Spatial modelling indicates an extent of 5.2'' for the feature, with its axis inclined at 82° east from north, aligned with a larger radio feature consisting of two lobes almost symmetrical about the X-ray source. The picture is consistent with a PWN which has not yet collided with the reverse shock. The point source is about three times brighter than the PWN and has a hard spectrum with a spectral index of 1.1, compared to a value of 2.2 for the PWN. This points to the presence of a pulsar dominated by non-thermal emission. The expected Ė is ~10^37 erg s^-1 and the spin period <100 ms. However, the presence of a compact nebula unresolved by Chandra at the distance of the SMC cannot be completely ruled out. The reduced images (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/584/A41

  7. RAiSE II: resolved spectral evolution in radio AGN

    NASA Astrophysics Data System (ADS)

    Turner, Ross J.; Rogers, Jonathan G.; Shabala, Stanislav S.; Krause, Martin G. H.

    2018-01-01

    The radio luminosities of active galactic nuclei (AGN) lobes modelled in hydrodynamical simulations and most analytical models do not address the redistribution of the electron energies due to adiabatic expansion, synchrotron radiation and inverse-Compton scattering of cosmic microwave background photons. We present a synchrotron emissivity model for resolved sources that includes a full treatment of the loss mechanisms spatially across the lobe, and apply it to a dynamical radio source model with known pressure and volume expansion rates. The bulk flow and dispersion of discrete electron packets is represented by tracer fields in hydrodynamical simulations; we show that the mixing of different-aged electrons strongly affects the spectrum at each point of the radio map in high-powered Fanaroff & Riley type II (FR-II) sources. The inclusion of this mixing leads to a factor of a few discrepancy between the spectral age measured using impulsive injection models (e.g. the JP model) and the dynamical age. The observable properties of radio sources are predicted to be strongly frequency dependent: FR-II lobes are expected to appear more elongated at higher frequencies, while jetted FR-I sources appear less extended. The emerging FR0 class of radio sources, comprising gigahertz-peaked and compact steep spectrum sources, can potentially be explained by a population of low-powered FR-Is. The extended emission from such sources is shown to be undetectable for objects within a few orders of magnitude of the survey detection limit and not to contribute to the curvature of the radio spectral energy distribution.

  8. Detection and modeling of the acoustic perturbation produced by the launch of the Space Shuttle using the Global Positioning System

    NASA Astrophysics Data System (ADS)

    Bowling, T. J.; Calais, E.; Dautermann, T.

    2010-12-01

    Rocket launches are known to produce infrasonic pressure waves that propagate into the ionosphere, where coupling between electrons and neutral particles induces fluctuations in ionospheric electron density observable in GPS measurements. We have detected ionospheric perturbations following the launch of space shuttle Atlantis on 11 May 2009 using an array of continuously operating GPS stations across the southeastern coast of the United States and in the Caribbean. Detections are prominent to the south of the westward shuttle trajectory in the area of maximum coupling between the acoustic wave and Earth's magnetic field, move at speeds consistent with the speed of sound, and show coherency between stations covering a large geographic range. We model the perturbation as an explosive source located at the point of closest approach between the shuttle path and each sub-ionospheric point. The neutral pressure wave is propagated using ray tracing, resultant changes in electron density are calculated at points of intersection between rays and satellite-to-receiver lines of sight, and synthetic integrated electron content values are derived. Arrival times of the observed and synthesized waveforms match closely, with discrepancies related to errors in the a priori sound speed model used for ray tracing. Current work includes the estimation of source location and energy.

  9. Optimization of the transition path of the head hardening with using the genetic algorithms

    NASA Astrophysics Data System (ADS)

    Wróbel, Joanna; Kulawik, Adam

    2016-06-01

    An automated method for choosing the transition path of the hardening head in the heat treatment process of a plane steel element is proposed in this communication. The method determines the points on the path of the moving heat source using genetic algorithms. The fitness function of the algorithm is determined on the basis of effective stresses and the yield point, which depends on the phase composition. The path of the hardening tool, and also the area of the heat-affected zone, is determined on the basis of the obtained points. A numerical model of thermal phenomena, phase transformations in the solid state and mechanical phenomena in the hardening process is implemented in order to verify the presented method. The finite element method (FEM) was used for solving the heat transfer equation and obtaining the required temperature fields. The moving heat source is modeled with a Gaussian distribution, and water cooling is also included. A macroscopic model based on the analysis of the CCT and CHT diagrams of a medium-carbon steel is used to determine the phase transformations in the solid state. The finite element method is also used for solving the equilibrium equations, giving the stress field. Thermal and structural strains are taken into account in the constitutive relations.
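
    A minimal sketch of the genetic-algorithm step described above: candidate paths are encoded as point offsets and evolved against a fitness function. The quadratic stand-in fitness below replaces the paper's coupled FEM evaluation of effective stresses against the yield point.

```python
# Illustrative sketch of the genetic-algorithm step: evolve the offsets of
# the heat-source path points against a fitness function. The quadratic
# stand-in below replaces the paper's coupled FEM evaluation of effective
# stresses against the phase-dependent yield point.
import random

N_POINTS, POP, GENS = 10, 40, 100

def fitness(path):
    # Placeholder objective; the paper derives this from FEM-computed
    # effective stresses and the phase-dependent yield point.
    return sum((y - 0.5) ** 2 for y in path)

def evolve():
    pop = [[random.random() for _ in range(N_POINTS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        survivors = pop[: POP // 2]
        children = []
        for _ in range(POP - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_POINTS)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_POINTS)           # point mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best_path = evolve()
```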

  10. Proceedings of the Second Annual Conference GeoComputation 97, University of Otago, Dunedin, New Zealand, 26-29 August 1997

    DTIC Science & Technology

    1997-08-29

    Fragments recoverable from the damaged abstract: one contribution examines the argument that the GIS revolution has run its course; another argues that artificial neural networks have great potential as substitutes for rainfall-runoff models (Abrahart); a case study concerns Skelton, located just north of York on the River Ouse, where each reference point in aerial photographs is related back to a location with a specific x, y coordinate, rivers providing this source, with site abandonment often linked to them.

  11. Atmospheric observations and inverse modelling for quantifying emissions of point-source synthetic greenhouse gases in East Asia

    NASA Astrophysics Data System (ADS)

    Arnold, Tim; Manning, Alistair; Li, Shanlan; Kim, Jooil; Park, Sunyoung; Muhle, Jens; Weiss, Ray

    2017-04-01

    The fluorinated species carbon tetrafluoride (CF4; PFC-14), nitrogen trifluoride (NF3) and trifluoromethane (CHF3; HFC-23) are potent greenhouse gases with 100-year global warming potentials of 6,630, 16,100 and 12,400, respectively. Unlike the majority of CFC replacements that are emitted from fugitive and mobile emission sources, these gases are mostly emitted from large single point sources: semiconductor manufacturing facilities (all three), aluminium smelting plants (CF4) and chlorodifluoromethane (HCFC-22) factories (HFC-23). In this work we show that atmospheric measurements can serve as a basis to calculate emissions of these gases and to highlight emission 'hotspots'. We use measurements from one of the Advanced Global Atmospheric Gases Experiment (AGAGE) long-term monitoring sites, at Gosan on Jeju Island in the Republic of Korea. This site measures CF4, NF3 and HFC-23 alongside a suite of greenhouse and stratospheric ozone-depleting gases every two hours using automated in situ gas chromatography-mass spectrometry instrumentation. We couple each measurement to an analysis of air history using the regional atmospheric transport model NAME (Numerical Atmospheric dispersion Modelling Environment) driven by 3D meteorology from the Met Office's Unified Model, and use a Bayesian inverse method (InTEM - Inversion Technique for Emission Modelling) to calculate yearly emission changes over seven years between 2008 and 2015. We show that our 'top-down' emission estimates for NF3 and CF4 are significantly larger than 'bottom-up' estimates in the EDGAR emissions inventory (edgar.jrc.ec.europa.eu). For example, we calculate South Korean emissions of CF4 in 2010 to be 0.29±0.04 Gg/yr, which is significantly larger than the EDGAR prior emissions of 0.07 Gg/yr. Further, inversions for several separate years indicate that emission hotspots can be found without prior spatial information. At present these gases make a small contribution to global radiative forcing; however, given that the impact of these long-lived gases could rise significantly and that point sources of such gases can be mitigated, atmospheric monitoring could be an important tool for aiding emissions reduction policy.

  12. TU-AB-BRC-11: Moving a GPU-OpenCL-Based Monte Carlo (MC) Dose Engine Towards Routine Clinical Use: Automatic Beam Commissioning and Efficient Source Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Folkerts, M; Jiang, S

    Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources through an optimization process minimizing the discrepancies between calculated dose and measurements. Six models built for Varian TrueBeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf-sequencing into source sampling. Specifically, instead of sampling source particles control point by control point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle, according to the MLC leaf-open duration of each control point at the pixel where the particle intersects the iso-center plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of depth dose in build-up regions by 36.2% on average, bringing it within 1 mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences of output factors were reduced to less than 0.7% for all beams, with the largest decrease, from 1.70% to 0.37%, found at 10MVFFF. Our new sampling strategy was tested on a head-and-neck VMAT patient case. Achieving clinically acceptable accuracy, the new strategy could reduce the required history number by a factor of ~2.8 at a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of our auto-commissioning approach and new efficient source sampling strategy, implying the potential of our GPU-based MC dose engine goMC for routine clinical use.
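
    The inverse sampling idea described in the Methods can be sketched as inverse-transform sampling over the cumulative leaf-open durations; the durations below are illustrative, not clinical data.

```python
# Hedged sketch of the inverse sampling idea: draw a control-point index
# for each source particle in proportion to the MLC leaf-open duration at
# the pixel it crosses (durations below are illustrative, not clinical).
import bisect, random

def make_sampler(open_durations):
    """Return a function mapping u in [0, 1) to a control-point index."""
    total = sum(open_durations)
    cdf, acc = [], 0.0
    for d in open_durations:
        acc += d / total
        cdf.append(acc)
    return lambda u: bisect.bisect_right(cdf, u)

# Example: five control points; the pixel is open longest at control point 2
sample = make_sampler([0.1, 0.2, 0.4, 0.2, 0.1])
indices = [sample(random.random()) for _ in range(10_000)]
```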

  13. An improved export coefficient model to estimate non-point source phosphorus pollution risks under complex precipitation and terrain conditions.

    PubMed

    Cheng, Xian; Chen, Liding; Sun, Ranhao; Jing, Yongcai

    2018-05-15

    To control non-point source (NPS) pollution, it is important to estimate NPS pollution exports and identify sources of pollution. Precipitation and terrain have large impacts on the export and transport of NPS pollutants. We established an improved export coefficient model (IECM) to estimate the amount of agricultural and rural NPS total phosphorus (TP) exported from the Luanhe River Basin (LRB) in northern China. The TP concentrations of rivers from 35 selected catchments in the LRB were used to test the model's explanatory capacity and accuracy. The simulation results showed that, in 2013, the average TP export was 57.20 t at the catchment scale. The mean TP export intensity in the LRB was 289.40 kg/km², much higher than those of other basins in China. Among the LRB topographic regions, the TP export intensity was highest in the south Yanshan Mountains, followed by the plain area, the north Yanshan Mountains, and the Bashang Plateau. Among the three pollution categories, the contribution ratios to TP export were, from high to low, rural population (59.44%), livestock husbandry (22.24%), and land-use types (18.32%). Among all ten pollution sources, the contribution ratios from rural population (59.44%), pigs (14.40%), and arable land (10.52%) ranked as the top three. This study provides information that decision makers and planners can use to develop sustainable measures for the prevention and control of NPS pollution in semi-arid regions.
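
    The export-coefficient family of models that the IECM improves upon reduces to a sum of coefficient-times-source-magnitude terms; a minimal sketch follows, with placeholder coefficients and the paper's precipitation and terrain adjustments collapsed into two scalar factors.

```python
# Minimal sketch of the export-coefficient idea underlying the IECM: the
# basic model sums coefficient-times-source-magnitude terms; the paper's
# improvement additionally weights terms by precipitation and terrain,
# reduced here to two scalar factors. All coefficients are placeholders.

EXPORT_COEFF_KG_YR = {      # assumed kg TP per unit source per year
    "rural_population": 0.8,    # per person
    "pigs": 0.5,                # per head
    "arable_land_ha": 0.3,      # per hectare
}

def tp_export(sources, rain_factor=1.0, terrain_factor=1.0):
    """TP export (kg/yr) for one catchment."""
    base = sum(EXPORT_COEFF_KG_YR[k] * n for k, n in sources.items())
    return base * rain_factor * terrain_factor

print(tp_export({"rural_population": 20_000, "pigs": 5_000,
                 "arable_land_ha": 3_000}, rain_factor=1.2, terrain_factor=0.9))
```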

  14. Counting the dead to determine the source and transmission of the marine herpesvirus OsHV-1 in Crassostrea gigas.

    PubMed

    Whittington, Richard J; Paul-Pont, Ika; Evans, Olivia; Hick, Paul; Dhand, Navneet K

    2018-04-10

    Marine herpesviruses are responsible for epizootics in economically, ecologically and culturally significant taxa. The recent emergence of microvariants of Ostreid herpesvirus 1 (OsHV-1) in Pacific oysters Crassostrea gigas has resulted in socioeconomic losses in Europe, New Zealand and Australia; however, there is no information on their origin or mode of transmission. These factors need to be understood because they influence the way the disease may be prevented and controlled. Mortality data obtained from experimental populations of C. gigas during natural epizootics of OsHV-1 disease in Australia were analysed qualitatively. In addition, we compared actual mortality data with those from a Reed-Frost model of direct transmission and analysed incubation periods using Sartwell's method to test for the type of epizootic, point source or propagating. We concluded that outbreaks were initiated from an unknown environmental source, which is unlikely to be farmed oysters in the same estuary. While direct oyster-to-oyster transmission may occur in larger oysters if they are in close proximity (< 40 cm), it did not explain the observed epizootics; point source exposure and indirect transmission were more common and important. A conceptual model is proposed for the OsHV-1 index case source and transmission, leading to endemicity with recurrent seasonal outbreaks. The findings suggest that prevention and control of OsHV-1 in C. gigas will require multiple interventions. OsHV-1 in C. gigas, which is a sedentary animal once beyond the larval stage, is an informative model when considering marine host-herpesvirus relationships.
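
    The Reed-Frost model used to test direct transmission is a chain-binomial: a susceptible escapes infection only by avoiding effective contact with every infectious individual. A minimal sketch, with an assumed per-contact probability:

```python
# Sketch of the Reed-Frost chain-binomial used to test direct transmission:
# a susceptible escapes infection only by avoiding effective contact with
# every infectious oyster. The per-contact probability p is an assumption.
import random

def reed_frost(susceptible, infectious, p, steps):
    history = [(susceptible, infectious)]
    for _ in range(steps):
        q = (1.0 - p) ** infectious      # probability of escaping infection
        new_cases = sum(1 for _ in range(susceptible) if random.random() > q)
        susceptible -= new_cases
        infectious = new_cases
        history.append((susceptible, infectious))
    return history

# A propagating epizootic builds over several generations, whereas a point
# source infects in one step and then fades; compare the case curves.
print(reed_frost(1000, 5, 0.002, 10))
```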

  15. The moving confluence route technology with WAD scheme for 3D hydrodynamic simulation in high altitude inland waters

    NASA Astrophysics Data System (ADS)

    Wang, Yonggui; Yang, Yinqun; Chen, Xiaolong; Engel, Bernard A.; Zhang, Wanshun

    2018-04-01

    For three-dimensional hydrodynamic simulations in inland waters, rapid changes with moving boundaries and various input conditions should be considered. Some models are developed with moving boundaries, but the dynamic change of discharges is unresolved or ignored. For better hydrodynamic simulation in inland waters, the widely used 3D model ECOMSED has been improved with a moving confluence route (MCR) method and a wetting and drying (WAD) scheme. The fixed locations of water and pollutant inputs from tributaries, point sources and non-point sources have been changed to dynamic confluence routes that follow the moving boundary. The improved model was applied to an inland water area, Qingshuihai reservoir, Kunming City, China, for a one-year hydrodynamic simulation. The results were verified by water level, flow velocity and water mass conservation. Detailed analysis of water level variation and comparison of velocity fields at different times showed that the improved model performs better than the original one in simulating the moving boundary and discharges that shift as the water level changes. The improved three-dimensional model is suitable for hydrodynamic simulation in water bodies whose boundaries shift with changing water level and which have various inlets.

  16. Modeling susceptibility difference artifacts produced by metallic implants in magnetic resonance imaging with point-based thin-plate spline image registration.

    PubMed

    Pauchard, Y; Smith, M; Mintchev, M

    2004-01-01

    Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. A quantitative assessment of the goodness of the proposed compensation technique is presented.
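
    A minimal sketch of point-based thin-plate spline correction of the kind described: fit a smooth displacement field from landmark pairs and apply it to distorted coordinates. SciPy's RBFInterpolator with the thin-plate-spline kernel stands in for the paper's solver, and the landmarks are synthetic.

```python
# Hedged sketch of point-based thin-plate spline correction: fit a smooth
# displacement field from landmark pairs, then correct distorted pixel
# coordinates. SciPy's RBFInterpolator (thin-plate-spline kernel) stands in
# for the paper's TPS solver; the landmarks are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

ref = np.array([[10, 10], [10, 50], [50, 10], [50, 50], [30, 30]], float)
distorted = ref + np.array([[0, 0], [1, -2], [-1, 1], [2, 2], [3, -1]], float)

# One TPS model maps distorted landmark positions to their displacements
warp = RBFInterpolator(distorted, ref - distorted, kernel="thin_plate_spline")

# Correct arbitrary pixel coordinates taken from the distorted image
pts = np.array([[20.0, 25.0], [40.0, 40.0]])
corrected = pts + warp(pts)
```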

  17. A New Global Anthropogenic SO2 Emission Inventory for the Last Decade: A Mosaic of Satellite-derived and Bottom-up Emissions

    NASA Astrophysics Data System (ADS)

    Liu, F.; Joiner, J.; Choi, S.; Krotkov, N. A.; Li, C.; Fioletov, V. E.; McLinden, C. A.

    2017-12-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor have been used to detect emissions from large point sources using an innovative estimation technique. Emissions from about 500 sources have been quantified individually based on OMI observations, accounting for about half of the total reported anthropogenic SO2 emissions. We developed a new emission inventory, OMI-HTAP, by combining these OMI-based emission estimates with the conventional bottom-up inventory. OMI-HTAP includes OMI-based estimates for over 400 point sources and is gap-filled with the emission grid map of the latest available global bottom-up emission inventory (HTAP v2.2) for the remaining sources. We have evaluated the OMI-HTAP inventory by performing simulations with the Goddard Earth Observing System version 5 (GEOS-5) model. The GEOS-5 simulated SO2 concentrations driven by both the HTAP and the OMI-HTAP inventories were compared against in-situ and satellite measurements. Results show that the OMI-HTAP inventory improves the model agreement with observations, in particular over the US, India and the Middle East. Additionally, simulations with the OMI-HTAP inventory capture the major trends of anthropogenic SO2 emissions over the world and highlight the influence of missing sources in the bottom-up inventory.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcellini, Paul A.; Bonnema, Eric; Goldwasser, David

    Building energy consumption can only be measured at the site or at the point of utility interconnection with a building. Often, to evaluate the total energy impact, this site-based energy consumption is translated into source energy, that is, the energy at the point of fuel extraction. Consistent with this approach, the U.S. Department of Energy's (DOE) definition of zero energy buildings uses source energy as the metric to account for energy losses from the extraction, transformation, and delivery of energy. Other organizations, as well, use source energy to characterize the energy impacts. Four methods of making the conversion from site energy to source energy were investigated in the context of the DOE definition of zero energy buildings. These methods were evaluated based on three guiding principles: improve energy efficiency, reduce and stabilize power demand, and use power from nonrenewable energy sources as efficiently as possible. This study examines relative trends between strategies as they are implemented on very low-energy buildings to achieve zero energy. A typical office building was modeled and variations to this model performed. The photovoltaic output required to create a zero energy building was calculated. Trends were examined with these variations to study the impacts of the calculation method on the building's ability to achieve zero energy status. The paper highlights the different methods and gives conclusions on the advantages and disadvantages of the methods studied.
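
    The simplest of the conversion methods compared (fixed national source-energy factors) can be sketched as below; the factor values are illustrative assumptions, not the paper's.

```python
# Minimal sketch of the simplest conversion method compared (fixed national
# source-energy factors); factor values are illustrative assumptions.

SOURCE_FACTORS = {"electricity": 3.15, "natural_gas": 1.09}   # assumed

def source_energy(site_use_kwh):
    """Total source energy (kWh) from site energy use by fuel (kWh)."""
    return sum(SOURCE_FACTORS[fuel] * kwh for fuel, kwh in site_use_kwh.items())

# Illustrative zero energy balance: PV exports must offset source energy
site = {"electricity": 120_000, "natural_gas": 40_000}
pv_export_kwh = source_energy(site) / SOURCE_FACTORS["electricity"]
```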

  19. Volume 2 - Point Sources

    EPA Pesticide Factsheets

    Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.

  20. [Analysis on nitrogen and phosphorus loading of non-point sources in Shiqiao river watershed based on L-THIA model].

    PubMed

    Li, Kai; Zeng, Fan-Tang; Fang, Huai-Yang; Lin, Shu

    2013-11-01

    Based on the Long-term Hydrological Impact Assessment (L-THIA) model, the effect of land use and rainfall change on nitrogen and phosphorus loading from non-point sources in the Shiqiao river watershed was analyzed. The parameters in the L-THIA model were revised according to data recorded at runoff plots set up in the watershed. The results showed that the distribution of areas with high pollution load was mainly concentrated in agricultural and urban land. Agricultural land was the biggest contributor to nitrogen and phosphorus load. From 1995 to 2010, the loads of the major pollutants, namely TN and TP, showed an obviously increasing trend, with increase rates of 17.91% and 25.30%, respectively. With urbanization in the watershed, urban land increased rapidly and its area proportion reached 43.94%. The contribution of urban land to nitrogen and phosphorus load was over 40% in 2010. This is the main reason why the pollution load still increased obviously while agricultural land decreased greatly over the past 15 years. Rainfall in the watershed was concentrated mainly in the flood season, so the nitrogen and phosphorus load of the flood season was far higher than that of the non-flood season, accounting for over 85% of the whole year. Pearson regression analysis between pollution load and the frequency of different rainfall patterns demonstrated that rainfall exceeding 20 mm in a day was the main rainfall type causing non-point source pollution.

  1. 3D Semantic Labeling of ALS Data Based on Domain Adaption by Transferring and Fusing Random Forest Models

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yao, W.; Zhang, J.; Li, Y.

    2018-04-01

    Labeling 3D point cloud data with traditional supervised learning methods requires a considerable number of labelled samples, the collection of which is costly and time-consuming. This work adopts the domain adaption concept to transfer existing trained random forest classifiers (based on a source domain) to new data scenes (target domain), aiming to reduce the dependence of accurate 3D semantic labeling of point clouds on training samples from the new data scene. First, two random forest classifiers were trained with existing samples previously collected for other data. They differed in their decision-tree construction algorithms: C4.5 with information gain ratio and CART with the Gini index. Second, four random forest classifiers adapted to the target domain were derived by transferring each tree in the source random forest models with two types of operations: structure expansion and reduction (SER) and structure transfer (STRUT). Finally, points in the target domain were labelled by fusing the four newly derived random forest classifiers using a weights-of-evidence-based fusion model. To validate our method, experimental analysis was conducted using 3 datasets: one used as the source domain data (Vaihingen data for 3D Semantic Labelling) and another two used as the target domain data from two cities in China (Jinmen city and Dunhuang city). Overall accuracies of 85.5% and 83.3% for 3D labelling were achieved for the Jinmen city and Dunhuang city data respectively, with only 1/3 of the newly labelled samples needed in the cases without domain adaption.

  2. A numerical experiment on light pollution from distant sources

    NASA Astrophysics Data System (ADS)

    Kocifaj, M.

    2011-08-01

    Predicting the light pollution of the night-time sky realistically over any location or measuring point on the ground is quite a difficult calculation task. Light pollution of the local atmosphere is caused by stray light, light loss or reflection from artificially illuminated ground objects or surfaces such as streets, advertisement boards or building interiors. Thus it depends on the size, shape, spatial distribution, radiative pattern and spectral characteristics of many neighbouring light sources. The actual state of the atmospheric environment and the orography of the surrounding terrain are also relevant. All of these factors together influence the spectral sky radiance/luminance in a complex manner. Knowledge of the directional behaviour of light pollution is especially important for the correct interpretation of astronomical observations. From a mathematical point of view, the light noise or veil luminance of a specific sky element is given by a superposition of scattered light beams. Theoretical models that simulate light pollution typically take into account all ground-based light sources, thus imposing great requirements on CPU and memory. As shown in this paper, the contribution of distant sources to light pollution can be essential under specific conditions of low turbidity and/or Garstang-like radiative patterns. To evaluate the convergence of the theoretical model, numerical experiments are made for different light sources, spectral bands and atmospheric conditions. It is shown that in the worst case the integration limit is approximately 100 km, but it can be significantly shortened for light sources with cosine-like radiative patterns.

  3. Measurements of scalar released from point sources in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.

    2017-04-01

    Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of the long tails of the probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, including sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982, J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.
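
    The reflected Gaussian model referred to above superposes the plume Gaussian with an image source mirrored in the wall; a minimal sketch with generic parameters:

```python
# The reflected Gaussian model referred to above: the mean concentration is
# a Gaussian about the source height plus an image source mirrored in the
# wall. Parameter values below are generic, not the paper's.
import numpy as np

def reflected_gaussian(z, s_z, sigma_z, c0=1.0):
    """Mean concentration at height z for a source at height s_z."""
    direct = np.exp(-((z - s_z) ** 2) / (2.0 * sigma_z ** 2))
    image = np.exp(-((z + s_z) ** 2) / (2.0 * sigma_z ** 2))
    return c0 * (direct + image)

z = np.linspace(0.0, 2.0, 200)   # heights normalised by delta
profile = reflected_gaussian(z, s_z=0.2, sigma_z=0.1)
```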

  4. VDES J2325-5229 a z = 2.7 gravitationally lensed quasar discovered using morphology-independent supervised machine learning

    NASA Astrophysics Data System (ADS)

    Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; Lemon, Cameron A.; Auger, Matthew W.; Banerji, Manda; Hung, Johnathan M.; Koposov, Sergey E.; Lidman, Christopher E.; Reed, Sophie L.; Allam, Sahar; Benoit-Lévy, Aurélien; Bertin, Emmanuel; Brooks, David; Buckley-Geer, Elizabeth; Carnero Rosell, Aurelio; Carrasco Kind, Matias; Carretero, Jorge; Cunha, Carlos E.; da Costa, Luiz N.; Desai, Shantanu; Diehl, H. Thomas; Dietrich, Jörg P.; Evrard, August E.; Finley, David A.; Flaugher, Brenna; Fosalba, Pablo; Frieman, Josh; Gerdes, David W.; Goldstein, Daniel A.; Gruen, Daniel; Gruendl, Robert A.; Gutierrez, Gaston; Honscheid, Klaus; James, David J.; Kuehn, Kyler; Kuropatkin, Nikolay; Lima, Marcos; Lin, Huan; Maia, Marcio A. G.; Marshall, Jennifer L.; Martini, Paul; Melchior, Peter; Miquel, Ramon; Ogando, Ricardo; Plazas Malagón, Andrés; Reil, Kevin; Romer, Kathy; Sanchez, Eusebio; Santiago, Basilio; Scarpine, Vic; Sevilla-Noarbe, Ignacio; Soares-Santos, Marcelle; Sobreira, Flavia; Suchyta, Eric; Tarle, Gregory; Thomas, Daniel; Tucker, Douglas L.; Walker, Alistair R.

    2017-03-01

    We present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift z_s = 2.74 and image separation of 2.9 arcsec lensed by a foreground z_l = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Models (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS) and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with I_AB = 18.61 and I_AB = 20.44 comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a single isothermal ellipsoid and find the Einstein radius θ_E ~ 1.47 arcsec, enclosed mass M_enc ~ 4 × 10^11 M⊙ and a time delay of ~52 d. The relatively wide separation, month-scale time delay duration and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.
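
    The quoted enclosed mass follows from the standard Einstein-radius relation M_enc = θ_E² c² D_l D_s / (4 G D_ls); a sketch with assumed angular-diameter distances (not values from the paper):

```python
# Sketch of the standard Einstein-radius relation behind the quoted mass:
# M_enc = theta_E^2 c^2 D_l D_s / (4 G D_ls). The angular-diameter
# distances below are assumed stand-ins, not values from the paper.
import numpy as np

G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m/s
MSUN = 1.989e30   # kg
MPC = 3.086e22    # m

def enclosed_mass_msun(theta_e_arcsec, d_l_mpc, d_s_mpc, d_ls_mpc):
    theta = theta_e_arcsec * np.pi / (180.0 * 3600.0)   # arcsec -> radians
    m = theta**2 * C**2 * (d_l_mpc * d_s_mpc / d_ls_mpc) * MPC / (4.0 * G)
    return m / MSUN

# Illustrative distances for z_l = 0.40, z_s = 2.74 give ~3e11 solar masses
print(enclosed_mass_msun(1.47, 1100.0, 1700.0, 1500.0))
```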

  5. Fluxes of Greenhouse Gases from the Baltimore-Washington Area: Results from WINTER 2015 Aircraft Observations

    NASA Astrophysics Data System (ADS)

    Dickerson, R. R.; Ren, X.; Shepson, P. B.; Salmon, O. E.; Brown, S. S.; Thornton, J. A.; Whetstone, J. R.; Salawitch, R. J.; Sahu, S.; Hall, D.; Grimes, C.; Wong, T. M.

    2015-12-01

    Urban areas are responsible for a major component of anthropogenic greenhouse gas (GHG) emissions. Quantification of urban GHG fluxes is important for establishing scientifically sound and cost-effective policies for mitigating GHGs. Discrepancies between observations and model simulations of GHGs suggest uncharacterized sources in urban environments. In this work, we analyze and quantify fluxes of CO2, CH4 and CO (and other trace species) from the Baltimore-Washington area based on the mass balance approach, using two-aircraft observations conducted in February-March 2015. Estimated fluxes from this area were 110,000±20,000 mol s^-1 for CO2, 700±330 mol s^-1 for CH4, and 535±188 mol s^-1 for CO. This implies that methane is responsible for ~20% of the climate forcing from these cities. Point sources of CO2 from four regional power plants and one point source of CH4 from a landfill were identified, and the emissions from these point sources were quantified based on the aircraft observations and compared to emission inventory data. Methane fluxes from the Washington area were larger than from the Baltimore area, indicating a larger leakage rate in the Washington area. The ethane-to-methane ratios, with a mean of 3.3%, in the limited canister samples collected during the flights indicate that natural gas leaks and the upwind oil and natural gas operations are responsible for a substantial fraction of the CH4 flux. These observations will be compared to models using Ensemble Kalman Filter assimilation techniques.
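
    A hedged sketch of the mass-balance flux calculation: integrate the enhancement over a downwind flight curtain and multiply by the wind component normal to it. All numbers below are synthetic, and the molar air density is an assumed constant.

```python
# Hedged sketch of the mass-balance flux: integrate the enhancement over a
# downwind flight curtain and multiply by the wind component normal to it.
# All numbers are synthetic; the molar air density is an assumed constant.
import numpy as np

def mass_balance_flux(conc_ppb, background_ppb, wind_normal_ms,
                      dx_m, dz_m, air_mol_m3=40.0):
    """City flux (mol/s) from a gridded curtain of mixing ratios (ppb)."""
    enhancement = (conc_ppb - background_ppb) * 1e-9      # mole fraction
    return np.sum(enhancement * air_mol_m3 * wind_normal_ms) * dx_m * dz_m

# Synthetic CH4 curtain: a 50 ppb plume over a 1900 ppb background
across = 50.0 * np.exp(-np.linspace(-2, 2, 100) ** 2)     # ppb enhancement
curtain = 1900.0 + np.tile(across, (20, 1))               # 20 altitude bins
print(mass_balance_flux(curtain, 1900.0, 5.0, dx_m=500.0, dz_m=100.0))
```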

  6. Using CSLD Method to Calculate COD Pollution Load of Wei River Watershed above Huaxian Section, China

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Song, JinXi; Liu, WanQing

    2017-12-01

    Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is therefore taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of the year 2007. The results show that the monthly point source pollution loads discharge stably, while the monthly non-point source pollution loads change greatly, and that the non-point source proportions of the total COD pollution load decrease in the normal, rainy and dry periods in turn.

  7. Calculating NH3-N pollution load of wei river watershed above Huaxian section using CSLD method

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Song, JinXi; Liu, WanQing

    2018-02-01

    Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so it is taken as the research objective in this paper, and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of the year 2007. The results show that the monthly point source pollution loads discharge stably while the monthly non-point source pollution loads change greatly. The non-point source proportions of the total NH3-N pollution load decrease in the normal, rainy and dry periods in turn.

  8. Resolving the structure of the Galactic foreground using Herschel measurements and the Kriging technique

    NASA Astrophysics Data System (ADS)

    Pinter, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Zahorecz, S.; Tóth, L. V.

    2018-05-01

    Investigating the distant extragalactic Universe requires subtraction of the Galactic foreground. One of the major difficulties in deriving the fine structure of the Galactic foreground is the presence of embedded foreground and background point sources in the given fields, especially in the infrared. We report our study of subtracting point sources from Herschel images with Kriging, an interpolation method in which the interpolated values are modelled by a Gaussian process governed by prior covariances. Using the Kriging method on Herschel multi-wavelength observations, the structure of the Galactic foreground can be studied at much higher resolution than previously, ultimately leading to better foreground subtraction.
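
    A minimal sketch of the interpolation step, using a scikit-learn Gaussian process as a stand-in for a full Kriging implementation; the image, source mask, and kernel parameters below are synthetic assumptions rather than Herschel data.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Synthetic "map": smooth foreground plus an embedded point source
        ny, nx = 64, 64
        yy, xx = np.mgrid[0:ny, 0:nx]
        image = np.sin(xx / 10.0) + 0.5 * np.cos(yy / 7.0)
        image[30:34, 30:34] += 5.0                 # point source to remove

        mask = np.zeros((ny, nx), dtype=bool)
        mask[28:36, 28:36] = True                  # region around the source

        # Fit the GP on subsampled unmasked pixels, then predict inside the mask
        coords = np.column_stack([yy[~mask][::8], xx[~mask][::8]])
        values = image[~mask][::8]
        gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.01))
        gp.fit(coords, values)
        image[mask] = gp.predict(np.column_stack([yy[mask], xx[mask]]))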

  9. Quantifying stream channel sediment contributions for the Paradise Creek Watershed in northern Idaho

    NASA Astrophysics Data System (ADS)

    Rittenburg, R.; Squires, A.; Boll, J.; Brooks, E. S.

    2012-12-01

    Excess sediment from agricultural areas has been a major source of impairment for water bodies around the world, resulting in the implementation of mitigation measures across landscapes. Watershed-scale reductions often target upland erosion as the key non-point source of sediment loading. Stream channel dynamics, however, also play a contributing role in sediment loading, in the form of legacy sediments, channel erosion and deposition, and buffering during storm events. Little is known about in-stream contributions, a potentially important consideration for Total Maximum Daily Loads (TMDLs). The objective of this study is to identify where and when sediment is delivered to the stream, and the spatial and temporal stream channel contributions to the overall watershed-scale sediment load. The study area is the Paradise Creek Watershed in northern Idaho. We modeled sediment yield to the channel system using the Water Erosion Prediction Project (WEPP) model, and subsequent channel erosion and deposition using CONCEPTS. Field observations of cross-sections along the channel system over a 5-year period were collected to verify model simulations and to test the hypothesis that the watershed load was made up predominantly of legacy sediments. Our modeling study shows that stream channels contributed 50% of the total annual sediment load for the basin, with a 19-year time lag between sediment entering the stream and leaving the watershed outlet. Observations from long-term data in the watershed will be presented to indicate whether the main source of the sediment is rural and urban non-point sources or the channel system.

  10. Support of Multidimensional Parallelism in the OpenMP Programming Model

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele

    2003-01-01

    OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test it with benchmark codes and a cloud modeling code.

  11. CCD photometry of 1218+304 1219+28 and 1727+50: Point sources, associated nebulosity and broadband spectra

    NASA Technical Reports Server (NTRS)

    Weistrop, D.; Shaffer, D. B.; Mushotzky, R. F.; Reitsma, H. J.; Smith, B. A.

    1981-01-01

    Visual and far-red surface photometry was obtained for two X-ray-emitting BL Lacertae objects, 1218+304 (2A1219+305) and 1727+50 (I Zw 187), as well as the highly variable object 1219+28 (ON 231, W Com). The intensity distribution for 1727+50 can be modeled using a central point source plus a de Vaucouleurs intensity law for an underlying galaxy. The broadband spectral energy distribution so derived is consistent with that expected for an elliptical galaxy. The spectral index of the point source is alpha = 0.97. Additional VLBI and X-ray data are also reported for 1727+50. There is nebulosity associated with the recently discovered object 1218+304; no nebulosity is found associated with 1219+28. A comparison of the results with observations at X-ray and radio frequencies suggests that all the emission from 1727+50 and 1218+304 can be interpreted as due solely to direct synchrotron emission. If this is the case, the data further imply the existence of relativistic motion effects and continuous particle injection.
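
    For readers unfamiliar with the decomposition, a sketch of a point source plus de Vaucouleurs R^(1/4)-law model is given below; the Gaussian PSF form and all parameter values are illustrative, not the measured photometry of 1727+50.

        import numpy as np

        def de_vaucouleurs(r, i_e, r_e):
            """Surface brightness of an R^(1/4)-law galaxy profile."""
            return i_e * np.exp(-7.669 * ((r / r_e) ** 0.25 - 1.0))

        def point_source(r, i_0, sigma):
            """Central point source blurred to a Gaussian PSF of width sigma."""
            return i_0 * np.exp(-0.5 * (r / sigma) ** 2)

        r = np.linspace(0.1, 20.0, 200)    # radius, arcsec
        model = point_source(r, i_0=100.0, sigma=1.2) \
              + de_vaucouleurs(r, i_e=5.0, r_e=6.0)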

  12. Estimation of dynamic load of mercury in a river with BASINS-HSPF model

    Treesearch

    Ying Ouyang; John Higman; Jeff Hatten

    2012-01-01

    Purpose: Mercury (Hg) is a naturally occurring element and a pervasive toxic pollutant. This study investigated the dynamic loads of Hg from the Cedar-Ortega Rivers watershed into the Lower St. Johns River (LSJR), Florida, USA, using the Better Assessment Science Integrating Point and Nonpoint Sources (BASINS)-Hydrologic Simulation Program-FORTRAN (HSPF) model....

  13. A Parametric Study of Fine-scale Turbulence Mixing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Freund, Jonathan B.

    2002-01-01

    The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and decays faster at high frequency. The theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets, as well as some recent turbulence measurements, reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation than its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied, and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of fine-scale turbulence kinetic energy is also examined.
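
    The Gaussian-versus-exponential argument can be illustrated with one-dimensional Fourier pairs; the sketch below is a schematic analogue of the two source models, not the MGBK formulation itself, and the correlation length is arbitrary.

        import numpy as np

        L = 1.0                              # correlation length (arbitrary)
        omega = np.logspace(-1, 1, 200)      # frequency axis

        # Analytical Fourier transforms of the two correlation models
        s_gauss = np.sqrt(np.pi) * L * np.exp(-(omega * L) ** 2 / 4.0)  # of exp(-x^2/L^2)
        s_exp = 2.0 * L / (1.0 + (omega * L) ** 2)                      # of exp(-|x|/L)

        # At high frequency the exponential model decays only as omega^-2,
        # while the Gaussian model decays faster than any power: hence the
        # broader spectrum predicted by the exponential correlation
        print(s_exp[-1] / s_gauss[-1])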

  14. The source mechanisms of low frequency events in volcanoes - a comparison of synthetic and real seismic data on Soufriere Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J. W.

    2012-04-01

    Low-frequency seismic events are one class of volcanic earthquake observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movement. Amongst others, Neuberg et al. (2006) proposed a conceptual model for the trigger of low-frequency events at Montserrat involving the brittle failure of magma at the glass transition in response to high shear stresses during the upward movement of magma in the volcanic edifice. For this study, synthetic seismograms were generated following the concept of Neuberg et al. (2006), using an extended source modelled as an octagonal arrangement of double couples approximating a circular ring fault. For comparison, synthetic seismograms were also generated using single forces only. For both scenarios, the synthetic seismograms were computed for a seismic station distribution as encountered on Soufriere Hills Volcano, Montserrat. To gain a better quantitative understanding of the driving forces of low-frequency events, inversions for the physical source mechanisms have become increasingly common. We therefore perform moment tensor inversions (Dreger, 2003) using the synthetic data as well as a chosen set of seismograms recorded on Soufriere Hills Volcano. The inversions are carried out under the (deliberately wrong) assumption of an underlying point source, rather than the extended source, as the trigger mechanism of the low-frequency seismic events. We will discuss the differences between the inversion results, and how to interpret the moment tensor components (double couple, isotropic, or CLVD) obtained under a point-source assumption in terms of an extended source.
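
    As background for interpreting such inversions, the sketch below applies the standard eigenvalue decomposition of a moment tensor into isotropic, double-couple (DC) and CLVD parts; the tensor values are hypothetical, and the epsilon = -m1/|m3| convention is one of several in use.

        import numpy as np

        # Hypothetical moment tensor (symmetric, arbitrary units)
        M = np.array([[1.2, 0.3, 0.0],
                      [0.3, 0.8, 0.1],
                      [0.0, 0.1, 1.0]])

        eigvals = np.linalg.eigvalsh(M)

        m_iso = eigvals.mean()               # isotropic part
        dev = eigvals - m_iso                # deviatoric eigenvalues
        dev = dev[np.argsort(np.abs(dev))]   # |dev[0]| <= |dev[1]| <= |dev[2]|

        eps = -dev[0] / abs(dev[2])          # 0 -> pure DC, +/-0.5 -> pure CLVD
        pct_dc = (1.0 - 2.0 * abs(eps)) * 100.0
        pct_clvd = 100.0 - pct_dc
        print(f"ISO {m_iso:.3f}, DC {pct_dc:.0f}%, CLVD {pct_clvd:.0f}%")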

  15. Source counting in MEG neuroimaging

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.

    2009-02-01

    Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, including inverse-problem methods, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and independent component analysis (ICA). A key problem with the inverse-problem, MUSIC and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space point by point, the selection of peaks as sources is ultimately made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach to source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented using information-theoretic criteria, and the order of the signal subspace gives an estimate of the number of sources. The approach does not rely on any model or hypothesis; hence it is an entirely data-led operation. It possesses a clear physical interpretation and an efficient computational procedure. The theoretical derivation of the method and results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
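
    A minimal sketch of the eigenstructure idea, using the minimum description length (MDL) criterion of Wax & Kailath (1985) as one concrete information-theoretic criterion; the channel count, sample size, and simulated data are hypothetical.

        import numpy as np

        def mdl_source_count(eigvals, n_samples):
            """Number of sources minimizing the MDL criterion."""
            p = len(eigvals)
            lam = np.sort(eigvals)[::-1]               # descending
            mdl = np.empty(p)
            for k in range(p):
                tail, m = lam[k:], p - k
                # ratio of geometric to arithmetic mean of the noise eigenvalues
                ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)
                mdl[k] = -n_samples * m * np.log(ratio) \
                         + 0.5 * k * (2 * p - k) * np.log(n_samples)
            return int(np.argmin(mdl))

        # Simulated 100-channel data with 3 sources plus sensor noise
        rng = np.random.default_rng(0)
        n_src, p, t = 3, 100, 5000
        data = rng.standard_normal((p, n_src)) @ rng.standard_normal((n_src, t))
        data += 0.1 * rng.standard_normal((p, t))
        cov = data @ data.T / t
        print(mdl_source_count(np.linalg.eigvalsh(cov), t))   # should print 3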

  16. Source Process of the 2007 Niigata-ken Chuetsu-oki Earthquake Derived from Near-fault Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Sekiguchi, H.; Morikawa, N.; Ozawa, T.; Kunugi, T.; Shirasaka, M.

    2007-12-01

    The 2007 Niigata-ken Chuetsu-oki earthquake occurred on July 16th, 2007, 10:13 JST. We performed a multi-time-window linear waveform inversion analysis (Hartzell and Heaton, 1983) to estimate the rupture process from near-fault strong motion data of 14 stations from K-NET, KiK-net, F-net, JMA, and Niigata prefecture. The fault plane for the mainshock has not yet been clearly determined from the aftershock distribution, so we performed two waveform inversions: one for a north-west-dipping fault (Model A) and one for a south-east-dipping fault (Model B). Their strike, dip, and rake are set to those of the moment tensor solutions by F-net. A fault plane model 30 km long by 24 km wide is set to cover the aftershock distribution within 24 hours after the mainshock. Theoretical Green's functions were calculated by the discrete wavenumber method (Bouchon, 1981) and the R/T matrix method (Kennett, 1983), with a different stratified medium for each station based on a velocity structure incorporating information from reflection surveys and borehole logging data. Convolution of a moving dislocation was introduced to represent the rupture propagation within each subfault (Sekiguchi et al., 2002). The observed acceleration records were integrated to velocity, except for the F-net velocity data, and bandpass filtered between 0.1 and 1.0 Hz. We solved the least-squares equations for the slip amount of each time window on each subfault, minimizing the squared residual between observed and synthetic waveforms. Both models give moment magnitudes of 6.7. For Model A, we obtained large slip in the deeper part south-west of the rupture starting point, close to Kashiwazaki City; the second or third velocity pulses of the observed waveforms appear to be composed of slip from this asperity. For Model B, we obtained large slip in the shallower part south-west of the rupture starting point, also close to Kashiwazaki City. In both models, we found small slip near the rupture starting point and the largest slip about ten kilometers south-west of it, with maximum slip of 2.3 and 2.5 m for Models A and B, respectively. The difference in residual between observed and synthetic waveforms for the two models is not significant; it is therefore difficult to conclude which fault plane better explains the data. The estimated large-slip regions in the inverted source models for Models A and B are located near the intersection of the two fault plane models, where the two planes should have similar radiation patterns; this may be one reason why judging the fault plane orientation is so difficult. We need careful examination of not only strong motion data but also geodetic data to further constrain the fault orientation and the source process of this earthquake.
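
    The linear inversion step can be sketched in miniature as follows: the observed waveforms are treated as a linear combination of subfault/time-window Green's functions, and non-negative slip weights are obtained by least squares. Everything below (matrix sizes, noise level, the random stand-in "Green's functions") is a synthetic placeholder, not the study's data.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n_samples = 2000                     # samples in concatenated waveforms
        n_subfaults, n_windows = 12, 3       # model parameterization (assumed)
        n_params = n_subfaults * n_windows

        G = rng.standard_normal((n_samples, n_params))   # Green's function matrix
        true_slip = np.abs(rng.standard_normal(n_params))
        d = G @ true_slip + 0.05 * rng.standard_normal(n_samples)  # "observed"

        # Non-negative least squares for slip per subfault and time window
        slip, residual = nnls(G, d)
        print(slip.reshape(n_subfaults, n_windows))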

  17. An integrated modeling approach for estimating the water quality benefits of conservation practices at the river basin scale.

    PubMed

    Santhi, C; Kannan, N; White, M; Di Luzio, M; Arnold, J G; Wang, X; Williams, J R

    2014-01-01

    The USDA initiated the Conservation Effects Assessment Project (CEAP) to quantify the environmental benefits of conservation practices at regional and national scales. For this assessment, a sampling and modeling approach is used. This paper provides a technical overview of the modeling approach used in the CEAP cropland assessment to estimate the off-site water quality benefits of conservation practices, using the Ohio River Basin (ORB) as an example. The modeling approach uses a farm-scale model, the Agricultural Policy/Environmental eXtender (APEX), a watershed-scale model (the Soil and Water Assessment Tool [SWAT]), and databases in the Hydrologic Unit Modeling for the United States system. Databases of land use, soils, land use management, topography, weather, point sources, and atmospheric deposition were developed to derive model inputs. APEX simulates the cultivated cropland, Conservation Reserve Program land, and the practices implemented on them, whereas SWAT simulates the noncultivated land (e.g., pasture, range, urban, and forest) and point sources. Simulation results from APEX are input into SWAT, and SWAT routes all sources, including the APEX output, to the basin outlet through each eight-digit watershed. Each basin is calibrated for stream flow, sediment, and nutrient loads at multiple gaging sites and then used to simulate the effects of conservation practice scenarios on water quality. Results indicate that sediment, nitrogen, and phosphorus loads delivered to the Mississippi River from the ORB could be reduced by 16, 15, and 23%, respectively, due to current conservation practices. Modeling tools are useful for providing science-based information for assessing existing conservation programs, developing future programs, and developing insights on the load reductions necessary to address hypoxia in the Gulf of Mexico. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  18. Biomonitoring of airborne particulate matter emitted from a cement plant and comparison with dispersion modelling results

    NASA Astrophysics Data System (ADS)

    Abril, Gabriela A.; Wannaz, Eduardo D.; Mateos, Ana C.; Pignata, María L.

    2014-01-01

    The influence of a cement plant that incinerates industrial waste on the air quality of a region in the province of Córdoba, Argentina, was assessed by means of biomonitoring studies (effects of immission) and atmospheric dispersion modelling of PM10 (effects of emission), using the ISC3 (Industrial Source Complex) model developed by the US Environmental Protection Agency (USEPA). For the biomonitoring studies, samples of the epiphyte Tillandsia capillaris Ruíz & Pav. f. capillaris were transplanted to the vicinity of the cement plant in order to determine physiological damage and heavy metal accumulation (Ca, Mn, Fe, Co, Ni, Cu, Zn, Cd and Pb). For the application of the ISC3 model, point and area sources at the cement plant were considered, to obtain average PM10 concentrations over the biomonitoring exposure period. The model showed that the emissions from the cement plant (point and area sources) were confined to its vicinity, without significant dispersion across the study area. This was also observed in the biomonitoring study, which identified Ca, Cd and Pb, pH and electrical conductivity (EC) as biomarkers of this cement plant. Vehicular traffic emissions and soil re-suspension could also be observed in the biomonitors, giving a more complete picture. In this study, biomonitoring combined with atmospheric dispersion modelling allowed the atmospheric pollution to be assessed in greater detail.
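
    ISC3 itself adds many refinements (stability classes, building downwash, terrain), but the steady-state Gaussian plume for an elevated point source at its core can be sketched as follows; the dispersion-coefficient power laws and all numbers are illustrative assumptions, not ISC3's actual curves.

        import numpy as np

        def plume_conc(q, u, x, y, z, h_stack):
            """Concentration (g/m^3) at (x, y, z) downwind of a point source."""
            sigma_y = 0.08 * x ** 0.9        # hypothetical dispersion curves, m
            sigma_z = 0.06 * x ** 0.85
            lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
            # Ground reflection: image source at height -h_stack
            vertical = (np.exp(-(z - h_stack) ** 2 / (2 * sigma_z ** 2))
                        + np.exp(-(z + h_stack) ** 2 / (2 * sigma_z ** 2)))
            return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Ground-level centerline concentration 2 km downwind of a 50 m stack
        # emitting 100 g/s into a 4 m/s wind
        c = plume_conc(q=100.0, u=4.0, x=2000.0, y=0.0, z=0.0, h_stack=50.0)
        print(c)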

  19. Checking the validity of superimposing analytical deformation models and implications for numerical modelling of dikes and magma chambers

    NASA Astrophysics Data System (ADS)

    Pascal, K.; Neuberg, J. W.; Rivalta, E.

    2011-12-01

    The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system is represented by several sources, their respective deformation fields are summed, and the assumption of homogeneity in the half-space is violated. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or opening of the sources and their relative position. We also investigated various numerical methods to model a dike as a dislocation tensile source or as a pressurized tabular crack. In the former case, the dike opening was defined either as two boundaries displaced from a central location, or as one boundary displaced relative to the other. We finally considered two case studies based on the magmatic systems of Soufrière Hills Volcano (Montserrat, West Indies) and the Dabbahu rift segment (Afar, Ethiopia). We found that the discrepancies between a simple superposition of the displacement fields and a fully interacting numerical solution depend mostly on the source types and on their spacing. Their magnitude may be comparable to the errors due to neglecting topography, inhomogeneities in crustal properties, or more realistic rheologies. In the models considered, the errors induced by neglecting the source interaction are negligible (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources, and by at least 3 radii for juxtaposed Mogi and Okada sources. Furthermore, this study underlines fundamental issues related to the numerical method chosen to model a dike or a magma chamber. It clearly demonstrates that, while magma compressibility can be neglected when modelling the deformation due to one source or distant sources, it must be taken into account in models combining close sources.
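
    The kind of analytical superposition being tested can be sketched for two Mogi point sources as below; the vertical-displacement formula assumes a point pressure source in a homogeneous elastic half-space, and all source parameters are illustrative.

        import numpy as np

        def mogi_uz(x, y, xs, ys, depth, dvol, nu=0.25):
            """Vertical surface displacement of a Mogi source at (xs, ys, depth),
            expressed through its volume change dvol."""
            r2 = (x - xs) ** 2 + (y - ys) ** 2
            return (1.0 - nu) / np.pi * dvol * depth / (r2 + depth ** 2) ** 1.5

        x = np.linspace(-10e3, 10e3, 401)    # surface profile, m
        y = np.zeros_like(x)

        # Two chambers, fields simply summed: this neglects the mechanical
        # interaction whose importance the study quantifies
        uz = (mogi_uz(x, y, xs=-2e3, ys=0.0, depth=4e3, dvol=1e6)
              + mogi_uz(x, y, xs=2e3, ys=0.0, depth=4e3, dvol=1e6))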

  20. Study of landscape patterns of variation and optimization based on non-point source pollution control in an estuary.

    PubMed

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui; Wu, Haiyan

    2014-10-15

    Appropriate increases in the "sink" areas of a landscape can reduce the risk of non-point source pollution (NPSP) reaching the sea at relatively low cost and high efficiency. Based on high-resolution remote sensing image data taken between 2003 and 2008, we analyzed the "source" and "sink" landscape pattern variations of nitrogen and phosphorus pollutants in the Jiulongjiang estuary region. The contribution to the sea and the distribution of each pollutant in the region were calculated using the LCI and mGLCI models. The results indicated that the amount of pollutants reaching the sea increased, and that the "source" area of nitrogen NPSP in the study area increased by 32.75 km(2). We also propose a landscape pattern optimization to reduce pollution in the Jiulongjiang estuary in 2008, through the conversion of cultivated land with slopes greater than 15° and paddy fields near rivers, and an increase in mangrove areas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Real-time determination of the worst tsunami scenario based on Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya

    2016-04-01

    In recent years, real-time tsunami inundation forecasting has been developed with advances in dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the greatest sources of uncertainty. An uncertain tsunami source model risks underestimating tsunami height, the extent of the inundation zone, and damage. Tsunami source inversion using observed seismic, geodetic and tsunami data is the most effective way to avoid underestimation, but acquiring the observed data takes time, and this limitation makes it difficult to complete real-time tsunami inundation forecasting quickly enough. Rather than waiting for precise tsunami observations, we therefore aim, from a disaster management point of view, to determine the worst tsunami source scenario for use in real-time tsunami inundation forecasting and mapping, using the seismic information of Earthquake Early Warning (EEW), which is available immediately after an event is triggered. After an earthquake occurs, JMA's EEW estimates magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter and scaling laws, we generate possible tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e., time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g., Tsushima et al., 2014). The scenario analysis consists of the following two steps. (1) Bounding the worst-scenario range by calculating 90 scenarios with various strikes and fault positions; from the maximum tsunami heights of these 90 scenarios, we determine a narrower strike range that produces high tsunami heights in the area of concern. (2) Calculating 900 scenarios with different strike, dip, length, width, depth and fault position, with the strike limited to the range obtained in step (1). From these 900 scenarios, we determine the worst tsunami scenarios from a disaster management point of view, such as the one with the shortest travel time and the one with the highest water level. The method was applied to a hypothetical earthquake and verified as to whether it can effectively find the worst tsunami source scenario in real time, to be used as an input for real-time tsunami inundation forecasting.
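
    The superposition-and-search step can be sketched as follows; the bank of unit-source Green's functions and the scenario weights below are synthetic placeholders (in the real system they follow from the EEW magnitude, hypocenter and scaling laws).

        import numpy as np

        rng = np.random.default_rng(2)
        n_unit, n_t = 40, 600
        # Stand-in for pre-computed tsunami height time series of unit sources
        greens = rng.standard_normal((n_unit, n_t)).cumsum(axis=1) * 1e-3

        def scenario_waveform(weights):
            """Offshore tsunami height time series for one source scenario."""
            return weights @ greens

        # Evaluate candidate scenarios and keep the worst (highest water level)
        scenarios = [np.abs(rng.standard_normal(n_unit)) for _ in range(90)]
        worst = max(scenarios, key=lambda w: scenario_waveform(w).max())
        print(scenario_waveform(worst).max())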

  2. Laser interferometer space antenna dynamics and controls model

    NASA Astrophysics Data System (ADS)

    Maghami, Peiman G.; Tupper Hyde, T.

    2003-05-01

    A 19 degree-of-freedom (DOF) dynamics and controls model of a laser interferometer space antenna (LISA) spacecraft has been developed. This model is used to evaluate the feasibility of the dynamic pointing and positioning requirements of a typical LISA spacecraft. These requirements must be met for LISA to be able to successfully detect gravitational waves in the frequency band of interest (0.1-100 mHz). The 19-DOF model includes all rigid-body degrees of freedom. A number of disturbance sources, both internal and external, are included. Preliminary designs for the four control systems that comprise the LISA disturbance reduction system (DRS) have been completed and are included in the model. Simulation studies are performed to demonstrate that the LISA pointing and positioning requirements are feasible and can be met.

  3. Reducing epistemic errors in water quality modelling through high-frequency data and stakeholder collaboration: the case of an industrial spill

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Inman, Alex; Paling, Nick

    2014-05-01

    Catchment management, as driven by legislation such as the EU WFD or by grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and is currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally through auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new, more decentralised and collaborative approach to managing water resources. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and faecal coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012, an over-prediction of the annual average P concentration was found at one sub-catchment outlet, compared to high-frequency measurements at this point that had become available through another UK government initiative, the Demonstration Test Catchments. This discrepancy had gone unnoticed when calibrating the model in a probabilistic framework against the statutory monitoring data, due to the high uncertainties associated with the low-frequency monitoring regime; according to these data, what turned out to be an over-prediction seemed possible, albeit with low probability. It was only through well-established contacts with local stakeholders that this anomaly could be connected to an industrial spill elsewhere in the catchment, and the model eventually corrected for this additional source. Failing to account for this source would have resulted in a drastic over-estimation of the contributions of other sources, in particular agriculture, and eventually in wrong targeting of catchment restoration funds and collateral damage to stakeholder relations. The paper will conclude with a discussion of the following general points: the pretence of uncertainty frameworks in the light of epistemic errors; the value of high-frequency data; the value of stakeholder collaboration, particularly in the light of sharing sensitive information; and the (somewhat incidental) synergies of various pieces of information and policy initiatives.

  4. Wave propagation in anisotropic medium due to an oscillatory point source with application to unidirectional composites

    NASA Technical Reports Server (NTRS)

    Williams, J. H., Jr.; Marques, E. R. C.; Lee, S. S.

    1986-01-01

    The far-field displacements in an infinite transversely isotropic elastic medium subjected to an oscillatory concentrated force are derived. The concepts of velocity surface, slowness surface and wave surface are used to describe the geometry of the wave propagation process. It is shown that the decay of the wave amplitudes depends not only on the distance from the source (as in isotropic media) but also on the direction of the point of interest from the source. As an example, the displacement field is computed for a laboratory-fabricated unidirectional fiberglass-epoxy composite. The solution for the displacements is expressed as an amplitude distribution and is presented in polar diagrams. This analysis has potential usefulness in the acoustic emission (AE) and ultrasonic nondestructive evaluation of composite materials. For example, the transient localized disturbances which are generally associated with AE sources can be modeled via this analysis; in that case, knowledge of the displacement field arriving at a receiving transducer allows inferences regarding the strength and orientation of the source, and consequently, perhaps, the degree of damage within the composite.

  5. The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts

    NASA Astrophysics Data System (ADS)

    Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel

    2015-01-01

    Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources have been detected using an iterative loop. The source positions are determined by estimating, for each peak on the maps, the likelihood of being a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point source fitter that also determines the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing, at each position, a Gaussian function with the full width at half maximum estimated by sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources into the timeline, and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ, and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, and 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compared them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we selected ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in the slope below 200 mJy, indicating a strong evolution in number density for galaxies at these fluxes. In general, models tend to overpredict the counts at brighter flux densities, underlining the importance of studying the Rayleigh-Jeans part of the spectral energy distribution to refine the theoretical recipes of the models. Our iterative method for source identification allowed the detection of a family of 500 μm sources that are not foreground objects belonging to Virgo and are not found in other catalogs. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. The 250, 350, and 500 μm catalogs, as well as the total catalog, are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A129

  6. The geometrical precision of virtual bone models derived from clinical computed tomography data for forensic anthropology.

    PubMed

    Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J

    2017-07-01

    Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtually modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images, resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. The results therefore indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.

  7. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, airborne laser scanning sensors or multi-view satellite images are generally used as the data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive, and trained personnel are needed to operate them for point cloud acquisition. A potentially effective 3D model can instead be generated with a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the same structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations of each approach to data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems in 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings may be insightful for future studies in the field of fast, easy and low-cost 3D urban model generation.
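
    One simple way to make such a comparison is to compute cloud-to-cloud nearest-neighbour distances with a k-d tree, as sketched below; the two point clouds are synthetic stand-ins and are assumed to be already co-registered.

        import numpy as np
        from scipy.spatial import cKDTree

        laser = np.random.rand(100_000, 3) * 10.0     # stand-in laser scan, m
        phone = laser[:20_000] + 0.01 * np.random.randn(20_000, 3)  # noisy copy

        tree = cKDTree(laser)
        dist, _ = tree.query(phone, k=1)    # per-point deviation, m
        print(f"median deviation: {np.median(dist) * 1000:.1f} mm, "
              f"95th percentile: {np.percentile(dist, 95) * 1000:.1f} mm")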

  8. Spectral Models of Neutron Star Magnetospheres

    NASA Technical Reports Server (NTRS)

    Romani, Roger W.

    1997-01-01

    We revisit the association of unidentified Galactic plane EGRET sources with tracers of recent massive star formation and death. Up-to-date catalogs of OB associations, SNRs, young pulsars, H II regions and young open clusters were used in finding counterparts for a recent list of EGRET sources. It has been argued for some time that EGRET source positions are correlated with SNRs and OB associations as a class; we extend such analyses by finding additional counterparts and assessing the probability of individual source identifications. Among the several scenarios relating EGRET sources to massive stars, we focus on young neutron stars as the origin of the gamma-ray emission. The characteristics of the candidate identifications are compared to the known gamma-ray pulsar sample and to detailed Galactic population syntheses using our outer-gap pulsar model of gamma-ray emission. Both the spatial distribution and the luminosity function of the candidates are in good agreement with the model predictions; we infer that young pulsars can account for the bulk of the excess low-latitude EGRET sources. We show that, with this identification, the gamma-ray point sources provide an important new window into the history of recent massive star death in the solar neighborhood.

  9. Influence of Mean-Density Gradient on Small-Scale Turbulence Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas

    2000-01-01

    A physics-based methodology is described to predict jet-mixing noise due to small-scale turbulence. Both self- and shear-noise source terms of Lilley's equation are modeled, and the far-field aerodynamic noise is expressed as an integral over the jet volume of the source multiplied by an appropriate Green's function which accounts for source convection and mean-flow refraction. Our primary interest here is to include transverse gradients of the mean density in the source modeling. It is shown that, in addition to the usual quadrupole-type sources which scale with the fourth power of the acoustic wave number, additional dipole and monopole sources are present that scale with lower powers of the wave number. Various two-point correlations are modeled, and an approximate solution for the noise spectra due to multipole sources of various orders is developed. Mean flow and turbulence information is provided through a RANS k-epsilon solution. Numerical results are presented for a subsonic jet at a range of temperatures and Mach numbers. Predictions indicate a decrease in high-frequency noise with added heat, while changes in the low-frequency noise depend on jet velocity and observer angle.

  10. Nature of the Unidentified TeV Source HESS J1614-518 Revealed by Suzaku and XMM-Newton Observations

    NASA Astrophysics Data System (ADS)

    Sakai, M.; Yajima, Y.; Matsumoto, H.

    2013-03-01

    We report new results concerning HESS J1614-518, which exhibits two regions of intense γ-ray emission. The south and center regions of HESS J1614-518 were observed with Suzaku in 2008, while the north region containing the brightest peak was observed in 2006. No X-ray counterpart is found at the second-brightest peak; the upper limit of the X-ray flux is estimated as 1.6 × 10-13 erg cm-2 s-1 in the 2-10 keV band. A previously known soft X-ray source, Suzaku J1614-5152, is detected at the center of HESS J1614-518. Analyzing the XMM-Newton archival data, we reveal that Suzaku J1614-5152 consists of multiple point sources. The X-ray spectrum of the brightest point source, XMMU J161406.0-515225, can be described by a power-law model with photon index Γ = 5.2 (+0.6/-0.5) or a blackbody model with temperature kT = 0.38 ± 0.04 keV. In the blackbody model, the estimated column density NH = 1.1 (+0.3/-0.2) × 1022 cm-2 is almost the same as that of the hard extended X-ray emission in Suzaku J1614-5141, which is spatially coincident with the brightest peak position. In this case, XMMU J161406.0-515225 may be physically related to Suzaku J1614-5141 and HESS J1614-518.

  11. Surface-water nutrient conditions and sources in the United States Pacific Northwest

    USGS Publications Warehouse

    Wise, D.R.; Johnson, H.M.

    2011-01-01

    The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts.

  12. Constructing graph models for software system development and analysis

    NASA Astrophysics Data System (ADS)

    Pogrebnoy, Andrey V.

    2017-01-01

    We propose a concept for creating instrumentation that supports the rationale of functional and structural decisions during software system (SS) development. We propose to develop the SS simultaneously on two models: a functional model (FM) and a structural model (SM). The FM is the source code of the SS. An adequate representation of the FM in the form of a graph model (GM) is generated automatically and is called the SM. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for the adequate representation of the SS source code. We propose three levels of GM detail: GM1 - for visual analysis of the source code and for SS version control; GM2 - for resource optimization and analysis of connections between SS components; GM3 - for analysis of the SS functioning in dynamics. The paper includes examples of constructing all levels of the GM.
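
    As an illustrative analogue of deriving a graph model from the functional model (the source code), the sketch below builds a simple call graph from a Python module's abstract syntax tree; it is not the paper's tooling, and the module content is hypothetical.

        import ast

        SRC = ("def load(path): ...\n"
               "def parse(text): ...\n"
               "def main():\n"
               "    parse(load('cfg'))\n")

        graph = {}                           # function name -> set of callees
        for node in ast.walk(ast.parse(SRC)):
            if isinstance(node, ast.FunctionDef):
                graph[node.name] = {
                    c.func.id for c in ast.walk(node)
                    if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
                }
        print(graph)   # {'load': set(), 'parse': set(), 'main': {'load', 'parse'}}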

  13. Chandra Detects Enigmatic Point X-ray Sources in the Cat's Eye and the Helix Nebulae

    NASA Astrophysics Data System (ADS)

    Guerrero, M. A.; Gruendl, R. A.; Chu, Y.-H.; Kaler, J. B.; Williams, R. M.

    2000-12-01

    Central stars of planetary nebulae (PNe) with Teff greater than 100,000 K are expected to emit soft X-rays that peak below 0.1 keV. Chandra ACIS-S observations of the Cat's Eye Nebula (NGC 6543) and the Helix Nebula (NGC 7293) have detected point X-ray sources at their central stars. The point X-ray source at the central star of the Cat's Eye was previously unknown and is unexpected, because the stellar temperature is only ~50,000 K. In contrast, the point X-ray source at the central star of the Helix was previously detected by ROSAT, and its soft X-ray emission is expected because the stellar temperature is ~100,000 K. However, the Helix X-ray source also shows a harder X-ray component peaking at 0.8 keV that is unexpected and for which Chandra has provided the first high-resolution spectrum for detailed analysis. The spectra of the point X-ray sources in the Cat's Eye and the Helix show line features indicating an origin in thermal plasma emission. The spectrum of the Helix source can be fit by Raymond & Smith's model of plasma emission at ~9 × 106 K. The spectrum of the Cat's Eye source has too few counts for a spectral fit, but appears to be consistent with plasma emission at 2-3 × 106 K. The X-ray luminosities of both sources are ~5 × 1029 erg s-1. The observed plasma temperatures are too high for accretion disks around white dwarfs, but they could be ascribed to coronal X-ray emission. While central stars of PNe are not known to have coronae, the observed spectra are consistent with quiescent X-ray emission from dM flare stars. On the other hand, neither the central star of the Helix nor that of the Cat's Eye is known to have a binary companion. It is possible that the X-rays from the Cat's Eye's central star originate from shocks in the stellar wind, but the central star of the Helix does not have a measurable fast stellar wind. This work is supported by CXC grant number GO0-1004X.

  14. On precisely modelling surface deformation due to interacting magma chambers and dykes

    NASA Astrophysics Data System (ADS)

    Pascal, Karen; Neuberg, Jurgen; Rivalta, Eleonora

    2014-01-01

    Combined data sets of InSAR and GPS allow us to observe surface deformation in volcanic settings. However, at the vast majority of volcanoes, a detailed 3-D structure that could guide the modelling of deformation sources is not available, due, for example, to the lack of tomography studies. Therefore, volcano ground deformation due to magma movement in the subsurface is commonly modelled using simple point (Mogi) or dislocation (Okada) sources embedded in a homogeneous, isotropic and elastic half-space. When data sets are too complex to be explained by a single deformation source, the magmatic system is often represented by a combination of these sources, and their displacement fields are simply summed. By doing so, the assumption of homogeneity in the half-space is violated and the interaction between sources is neglected. We have quantified the errors of such a simplification and investigated the limits within which the combination of analytical sources is justified. We have calculated the vertical and horizontal displacements for analytical models with adjacent deformation sources and tested them against the solutions of corresponding 3-D finite element models, which account for the interaction between sources. We have tested various double-source configurations with either two spherical sources representing magma chambers, or a magma chamber and an adjacent dyke, modelled by a rectangular tensile dislocation or a pressurized crack. For a tensile Okada source (representing an opening dyke) aligned with or superposed on a Mogi source (magma chamber), we find the discrepancies with the numerical models to be insignificant (<5 per cent) independently of the source separation. However, if a Mogi source is placed side by side with an Okada source (in the strike-perpendicular direction), we find that the discrepancies become significant for source separations of less than four times the radius of the magma chamber. For horizontally or vertically aligned pressurized sources, the discrepancies are up to 20 per cent, which translates into surprisingly large errors when inverting deformation data for source parameters such as depth and volume change. Beyond 8 radii, however, we demonstrate that the summation of analytical sources represents adjacent magma chambers correctly.

  15. Exploring behavior of an unusual megaherbivore: A spatially explicit foraging model of the hippopotamus

    USGS Publications Warehouse

    Lewison, R.L.; Carter, J.

    2004-01-01

    Herbivore foraging theories have been developed for and tested on herbivores across a range of sizes. Due to logistical constraints, however, little research has focused on foraging behavior of megaherbivores. Here we present a research approach that explores megaherbivore foraging behavior, and assesses the applicability of foraging theories developed on smaller herbivores to megafauna. With simulation models as reference points for the analysis of empirical data, we investigate foraging strategies of the common hippopotamus (Hippopotamus amphibius). Using a spatially explicit individual based foraging model, we apply traditional herbivore foraging strategies to a model hippopotamus, compare model output, and then relate these results to field data from wild hippopotami. Hippopotami appear to employ foraging strategies that respond to vegetation characteristics, such as vegetation quality, as well as spatial reference information, namely distance to a water source. Model predictions, field observations, and comparisons of the two support that hippopotami generally conform to the central place foraging construct. These analyses point to the applicability of general herbivore foraging concepts to megaherbivores, but also point to important differences between hippopotami and other herbivores. Our synergistic approach of models as reference points for empirical data highlights a useful method of behavioral analysis for hard-to-study megafauna. © 2003 Elsevier B.V. All rights reserved.

  16. Application of an integrated Weather Research and Forecasting (WRF)/CALPUFF modeling tool for source apportionment of atmospheric pollutants for air quality management: A case study in the urban area of Benxi, China.

    PubMed

    Wu, Hao; Zhang, Yan; Yu, Qi; Ma, Weichun

    2018-04-01

    In this study, the authors endeavored to develop an effective framework for improving local urban air quality on meso-micro scales in cities in China that are experiencing rapid urbanization. Within this framework, the integrated Weather Research and Forecasting (WRF)/CALPUFF modeling system was applied to simulate the concentration distributions of typical pollutants (particulate matter with an aerodynamic diameter <10 μm [PM10], sulfur dioxide [SO2], and nitrogen oxides [NOx]) in the urban area of Benxi. Statistical analyses were performed to verify the credibility of this simulation, including the meteorological fields and concentration fields. The sources were then categorized using two different classification methods (the district-based and type-based methods), and the contributions to the pollutant concentrations from each source category were computed to provide a basis for appropriate control measures. The statistical indexes showed that CALMET had sufficient ability to predict the meteorological conditions, such as the wind fields and temperatures, which provided meteorological data for the subsequent CALPUFF run. The simulated concentrations from CALPUFF showed considerable agreement with the observed values but were generally underestimated. The spatial-temporal concentration pattern revealed that the maximum concentrations tended to appear in the urban centers and during the winter. In terms of their contributions to pollutant concentrations, the districts of Xihu, Pingshan, and Mingshan all affected the urban air quality to different degrees. According to the type-based classification, which categorized the pollution sources as the Bengang Group, large point sources, small point sources, and area sources, the source apportionment showed that the Bengang Group, the large point sources, and the area sources had considerable impacts on urban air quality. Finally, combined with the industrial characteristics, detailed control measures were proposed with which local policy makers could improve the urban air quality in Benxi. In summary, the results of this study show that this framework can credibly and effectively improve urban air quality based on the source apportionment of atmospheric pollutants. Via this framework, the integrated modeling tool can be used to study the characteristics of meteorological fields, concentration fields, and source apportionment of pollutants in a target area, and the impacts of the classified sources on air quality, together with the industrial characteristics, can inform more effective control measures. The technical framework developed in this study, particularly the source apportionment, could provide important data and technical support for policy makers assessing air pollution at the scale of a city in China or elsewhere.

  17. A COMPUTATIONAL FRAMEWORK FOR EVALUATION OF NPS MANAGEMENT SCENARIOS: ROLE OF PARAMETER UNCERTAINTY

    EPA Science Inventory

    Utility of complex distributed-parameter watershed models for evaluation of the effectiveness of non-point source sediment and nutrient abatement scenarios such as Best Management Practices (BMPs) often follows the traditional {calibrate ---> validate ---> predict} procedure. Des...

  18. Developing Verification Systems for Building Information Models of Heritage Buildings with Heterogeneous Datasets

    NASA Astrophysics Data System (ADS)

    Chow, L.; Fai, S.

    2017-08-01

    The digitization and abstraction of existing buildings into building information models requires the translation of heterogeneous datasets that may include CAD, technical reports, historic texts, archival drawings, terrestrial laser scanning, and photogrammetry into model elements. In this paper, we discuss a project undertaken by the Carleton Immersive Media Studio (CIMS) that explored the synthesis of heterogeneous datasets for the development of a building information model (BIM) for one of Canada's most significant heritage assets - the Centre Block of the Parliament Hill National Historic Site. The scope of the project included the development of an as-found model of the century-old, six-story building in anticipation of specific model uses for an extensive rehabilitation program. The as-found Centre Block model was developed in Revit using primarily point cloud data from terrestrial laser scanning. The data was captured by CIMS in partnership with Heritage Conservation Services (HCS), Public Services and Procurement Canada (PSPC), using a Leica C10 and P40 (exterior and large interior spaces) and a Faro Focus (small to mid-sized interior spaces). Secondary sources such as archival drawings, photographs, and technical reports were referenced in cases where point cloud data was not available. As a result of working with heterogeneous data sets, a verification system was introduced in order to communicate to model users/viewers the source of information for each building element within the model.

  19. Advanced Acoustic Model Technical Reference and User Manual

    DTIC Science & Technology

    2009-05-01

    Aspread = geometrical spherical spreading loss (point source); Aatm = atmospheric absorption, computed according to the current ANSI/ISO standard for sound attenuation by molecular relaxation processes in the atmosphere. (Tabulated atmospheric data and page-header text omitted.)

  20. Trend analysis of a tropical urban river water quality in Malaysia.

    PubMed

    Othman, Faridah; M E, Alaa Eldin; Mohamed, Ibrahim

    2012-12-01

    Rivers play a significant role in providing water resources for human and ecosystem survival and health. Hence, river water quality is an important parameter that must be preserved and monitored. As the state of Selangor and the city of Kuala Lumpur, Malaysia, are undergoing tremendous development, the river is subjected to pollution from point and non-point sources. The water quality of the Klang River basin, one of the most densely populated areas within the region, is significantly degraded due to human activities as well as urbanization. The overall river water quality status is normally represented by a water quality index (WQI), which consists of six parameters, namely dissolved oxygen, biochemical oxygen demand, chemical oxygen demand, suspended solids, ammoniacal nitrogen and pH. The objectives of this study are to assess the water quality status of this tropical, urban river and to establish the WQI trend. Using monthly WQI data from 1997 to 2007, time series were plotted and trend analysis was performed by employing a first-order autocorrelated trend model on the moving average values for every station. The initial and final values of either the moving average or the trend model were used as estimates of the initial and final WQI at each station. It was found that Klang River water quality showed some improvement between 1997 and 2007. Water quality remains good in the upstream area, which provides vital water sources for water treatment plants in the Klang valley, and water quality has also improved at the other stations. The results of the current study suggest that the present policy on managing river quality in the Klang River has produced encouraging results; the policy should, however, be further improved alongside more vigorous monitoring of pollution discharge from various point sources such as industrial wastewater, municipal sewers, wet markets, sand mining and landfills, as well as non-point sources such as agricultural or urban runoff and commercial activity.
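
    A sketch of the trend procedure, under one simple reading of "first-order autocorrelated trend model": a 12-month moving average, an ordinary least-squares trend, and an AR(1) coefficient estimated from the residuals. The WQI series below is synthetic, standing in for station data.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(132)                                 # months, 1997-2007
        wqi = 70 + 0.05 * t + rng.normal(0, 3, t.size)     # hypothetical WQI

        # 12-month moving average
        ma = np.convolve(wqi, np.ones(12) / 12.0, mode="valid")

        # Trend y_t = a + b*t + e_t with AR(1) errors e_t = phi*e_{t-1} + w_t:
        # estimate the trend by OLS, then phi from the lag-1 residual correlation
        tt = np.arange(ma.size)
        b, a = np.polyfit(tt, ma, 1)
        resid = ma - (a + b * tt)
        phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        print(f"initial WQI ~ {ma[0]:.1f}, final WQI ~ {ma[-1]:.1f}, "
              f"trend {b:+.3f}/month, phi = {phi:.2f}")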
